nikcub's comments

* real programmers write assembly, not FORTRAN

* real programmers manage memory, it's a craft

* real programmers don't drag and drop

* real programmers don't use intellisense

* real programmers don't need stack overflow

* real programmers don't tab-complete

* real programmers don't need copilot

* real programmers don't use llms <- you are here


That's also not what he is saying. I don't see how that is what everyone is taking from this.

> If a tree falls in the forest...

I hope this is a pun on the content management system used to publish OP. It's forester[0], written in OCaml; it parses TeX-like .tree files into semantic XML, which the browser then renders to HTML via XSLT.

View source on the page to get an idea.
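If you'd rather poke at it offline, roughly the same transform can be run with Python/lxml; the file names below are made up, not forester's actual ones, and the browser does this automatically when it sees the xml-stylesheet processing instruction at the top of the XML:

  # Minimal offline sketch of what the browser does with forester's output:
  # apply the XSLT stylesheet to the semantic XML. "page.xml" and "forest.xsl"
  # are placeholder names, not forester's actual file names.
  from lxml import etree

  doc = etree.parse("page.xml")                      # semantic XML emitted by forester
  transform = etree.XSLT(etree.parse("forest.xsl"))  # the stylesheet the page references
  print(etree.tostring(transform(doc), pretty_print=True).decode())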

Reminder of what the idealised web promise from decades ago was. Long gone. Very apt.

[0] https://www.forester-notes.org/index/index.xml


the most cited is terminal bench 2.0[0], but it's also plagued by cheating accusations and benchmaxxing.

somewhat remarkably, claude code ranks last for Opus 4.6 - which may say something about cc, or say something about the benchmark

[0] https://www.tbench.ai/leaderboard/terminal-bench/2.0


Stunning results at the top of the field. Some interesting takeaways on both fuelling and shoes.

Maurten spent months working with Sawe and other runners getting their gut capacity trained so they could absorb and burn 100 carbs per hour[0][1]

> The Maurten research team was embedded with Sawe’s team in Kenya for 32 days across six trips between last and this April. They were training his gut to absorb that load by mimicking race-day protocol in training. The hydrogel technology they have developed over the past 10 years now allows athletes to absorb 90–120 grams of carbs per hour without GI distress.

Second is the shoes: the Adidas Adizero weighs 96 grams[2], with new foam tech and new carbon plates.

Nike and INEOS spent millions over years to get Kipchoge to a sub-2 in artificial conditions, and now the elite end of the field are knocking that barrier out in race conditions. Unreal.

Running tech and training have been revolutionized in the past few years.

[0] https://marathonhandbook.com/sebastian-sawe-arrives-in-londo...

[1] https://www.instagram.com/p/DXmvAUvkWaq/

[2] https://www.runnersworld.com/uk/gear/shoes/a71129333/sabasti...

edit: correct :s/calories/carbs thanks


> could absorb and burn 100 calories per hour

burning a hundred calories an hour is trivial. Most people will burn 100 calories per mile when walking or running, and more if moving as fast as these athletes, and many, many humans can do this for far, far longer than 2 hours.

It's the absorption that's the challenge. Maurten is not somehow alone in the particular stuff they've developed - ultra runners are generally shifting up into the 90-120 gram/hr range (or beyond!), using a variety of different companies' products. The gut training protocols for this are widely discussed in the world of running for almost any distance above a half marathon.


> burning a hundred calories

GP left out the units but is clearly talking about grams ("absorb ... 100 carbs per hour"), not calories (no one needs training to absorb 25g/hr). Carbs are 4 kcal/g. 100g of carb (400 kcal) an hour isn't replacement level for even casual athletic efforts, but it does mitigate the loss of glycogen in muscle somewhat.


Exogenous carbohydrate doesn't spare muscle glycogen, only liver glycogen.

I've read that even if you absorb it all, there's some question about whether it's useful. This Alex Hutchinson article suggests, among other things, that it may spare your fat stores rather than your muscle glycogen:

> Even if you can absorb 120 grams per hour, it might not make you faster. In Podlogar’s study, cyclists burned more exogenous carbs when they consumed 120 rather than 90 grams per hour, but that didn’t reduce their rate of endogenous carb-burning—that is, they were still depleting the glycogen stores in their muscles just as quickly.

https://www.outsideonline.com/health/training-performance/en...

https://archive.ph/Vpk0h

https://pmc.ncbi.nlm.nih.gov/articles/PMC9560939/


That may still be worthwhile if fat is harder to recruit than exogenous carbs.

What does 'harder to recruit' mean though?

Kejelcha is 6'1" and under 130lb.

What fat stores?


It doesn't take much. If an elite burns 1500-2000 kcal running a marathon, even ignoring glycogen and exogenous carb, that's only ~195-260g of body fat (~7.7 kcal/g). Even at an extremely lean 4% body fat, Kejelcha would have 2360g of body fat available. (He's probably in the slightly higher 5-10% range.)

(And obviously, the majority of those 1500-2000 kcal are coming from stored glycogen rather than fat.)

If we're only talking about the marginal difference between 90 and 120 g/hr of exogenous carb, then that's 60g over two hours or 240 kcal -- equating to 31g of stored body fat. That's nothing.
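Back-of-envelope version of the above, for anyone who wants to poke at the assumptions (body weight, expenditure and body-fat % are rough guesses, not measured values):

  # Rough back-of-envelope for the comment above; every input is an assumption.
  kcal_burned = (1500, 2000)    # assumed total marathon expenditure for an elite
  kcal_per_g_fat = 7.7          # approximate energy density of stored body fat

  # if *all* of it had to come from fat:
  print([round(k / kcal_per_g_fat) for k in kcal_burned])   # -> [195, 260] grams

  body_weight_g = 59_000                    # ~130 lb, in grams
  print(round(body_weight_g * 0.04))        # 4% body fat -> 2360 grams available

  # marginal difference between 90 and 120 g/hr of carb over a ~2 hour race:
  extra_kcal = (120 - 90) * 2 * 4           # 60 g of carb at 4 kcal/g = 240 kcal
  print(round(extra_kcal / kcal_per_g_fat)) # -> ~31 grams of stored fat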


The last few years, cycling and triathlon have been experimenting with up to 120g carbs intake per hour. Last year, Cameron Wurf ate 200g carbs per hour when he broke the world record for fastest bike split ever in a triathlon (which was broken again a few months later).

A 2025 look at elite triathletes' fueling at https://www.triathlete.com/nutrition/race-fueling/ironman-wo... shows that Norwegian athletes are ingesting higher amounts of carbs (~180g/hr bike, ~120g/hr run - 2 males, ~150g/hr both run & bike - 1 female), especially for the bike portion.

Where does discussion on gut training occur? All I know is you need a 5:4 ratio of glucose to fructose? Then when you train, you use the gels and the more you do it, the more capable your gut gets at absorbing without distress.

Is that all the science to it?


AFAIK 5:4 is just the lowest ratio they've tested. Personally I use table sugar (1:1) and can sustain rates above 100g/h. Haven't hit the ceiling yet, don't really feel the need to explore where that is yet because exceeding the absorption rate comes with the risk of diarrhoea which is bad at any time but especially when you're in the middle of a training session and who knows where the nearest toilet is.

Gut training is consuming large amounts of carbohydrate (preferably in the same form you intend to use when racing), yes.


Eating the same amount of table sugar or of a commercial gel should have pretty much the same effect on performance.

However, for many people eating so much of a very sweet food becomes very unpleasant.

It is very easy and cheap to make a gel at home by boiling corn starch mixed with fructose in water in a microwave oven for a few minutes. Swallowing such a gel should feel much less sweet than the same amount of a sugar solution.

As far as I know, the only difference between such a gel made at home and the commercial gels for athletes is that in the latter the starch is pre-digested with some bacterial enzyme, so that the long starch molecules are broken into short molecules of dextrin and maltose.

This processing shortens the time until the absorption in the gut, but I am not sure if this is really an advantage in all cases. A slower absorption will maintain an elevated blood glucose level for a longer time after ingestion, which may be preferable if you feed periodically, because it avoids wide fluctuations in the glucose level, while a faster absorption might be useful for an immediate recovery when the glucose level has been severely depleted by not feeding for a long time.


Yes, but the science is in actually achieving that and finding the limits. It used to be thought that 60g carbs/hour was the limit, then 100g; now it's thought to be 120g.

It’s also about the methods of achieving that under stress without spewing it all back up. Ironman athletes would stuff their faces on the bike under the assumption that this volume of carb absorption wasn’t possible while running.

Some of the challenge in research will come from competitors not wanting to publish results to maintain an edge. It is mitigated by the visuals of the race (you can see athletes pounding carbs), as well as by the nutrition companies wanting to sell more product. This will cause them to publish some information to convince us amateurs to quadruple our purchase volume ;-)


Wow so he was absorbing 400 calories per hour with this gel, but he was likely burning 3-4x that amount (or even more) while running 13.1 miles per hour!

In a two hour race that’s still 800 bonus calories, that’s something.

The race to tolerate lots of carbs is usually something you think of in 8 hour Ironmans. The good part is you can do most of it on the bike, where it's much easier to eat as you go. As far as I know, many elite runners were doing like 50% water, 50% sports drink and consuming way under 100g.


> As far as I know, many elite runners were doing like 50% water, 50% sports drink and consuming way under 100g.

This used to be true, and is still true for many athletes up the marathon distance. Above that, however, the momentum has swung heavily to very high carb intake. Most (though not all) of the world's best ultra runners (we're talking 7:00 min/mile pace through mountainous terrain) are picking this up, with many getting to and beyond 100g/hr of carb consumption.


Your body stores roughly 2000 calories in glycogen. They are burning calories, but nowhere near the amount a mid-pack runner would.

So ~2800 calories of carbs with some fat being burned.


Is there anything here people who should be dieting could learn? I've found when running, every 3-4km if I don't have sugars/gatorade my blood sugar gets so low I end up almost confused… running suburban streets is tough because I've got to cross the road when I'm mildly delirious!

How far are you running total, both per run and per week?

Running will absolutely help your health, but on its own it's unlikely to get you thin. It's hard to burn enough to make a big difference without it chewing your body up in other ways - especially if you're overweight and out of condition to begin with, and so a bit more susceptible to injury than skinny runner types.

Thinking of it as a calories in/out equation is counterproductive for most people; if it boosts your cardio health, gets you more active and maybe converts a bit of body fat to leg muscle, that on its own is a win.

Certainly no harm in having a swig of Gatorade every couple of km if it helps you go further, anyhow.


Thanks for your comment!

Unfortunately time hasn’t been on my side for a while now, so my daily running has slipped back to no running in the past year!

I'll get back at it, but yes… will need to improve my diet too


I guess it's the classic case of one not being able to outrun a bad diet.

If fueling during the activity stops you from overeating afterwards and possibly allows you to exercise a bit longer, it is worth it, even though it seems counterproductive.


> not being able to outrun a bad diet

Dang… I was hoping for some cheat level wisdom :)


What are the long term effects of a lot of processed carbs on the gut?

Race day super shoes certainly help a lot, but another difference is that super shoes allow them to train a lot more. Running training is limited by tendons. This is the reason even elite runners often train only 9-11 hours a week while many dedicated amateurs can easily spend 20+ hours per week cycling. This is also the main reason runners "double", that is, they run 2 times a day. The body absorbs two 45-minute sessions much better than one 90-minute session.

Super shoes are changing the game here, allowing for more volume for months without injuries. When you look at Sawe's training, his volume is insane. His easy/endurance days are 20km in the morning and 10km in the evening. This is some 100-110 minutes of running on "easy" days. His total time on feet must be around 14-15 hours per week - approaching cycling volume territory (especially when you consider that cyclists do a significant % of their volume cruising/descending without putting out almost any power at all, which inflates the time).
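Rough sanity check on that time-on-feet figure (the easy pace here is an assumption; elite easy pace and the actual session mix vary):

  # Sanity check of the weekly time-on-feet estimate above.
  # The easy pace is an assumption (~3.7 min/km); real paces and sessions vary.
  easy_km_per_day = 20 + 10              # morning + evening easy runs
  easy_pace_min_per_km = 3.7

  minutes_per_day = easy_km_per_day * easy_pace_min_per_km
  print(round(minutes_per_day))          # -> ~111 min, matching the 100-110 min claim

  print(round(minutes_per_day * 7 / 60, 1))   # -> ~13 h/week from easy volume alone;
                                              #    workouts and long runs push it toward 14-15 h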


One gram of carbs is 4 calories, so more like 400 calories per hour.

It was confusing when the running industry switched from calories to grams of carbs, but that's all anyone talks about now.


Because calories simply do not matter. At high workout intensities, it's the amount of carbohydrates you can consume that allows more fuel to be burnt.

"In the aerobic exercise domain up to ~100% of maximal oxygen uptake (VO2max), CHO is the dominant fuel, as CHO-based oxidative metabolism can be activated quickly, provide all of the fuel at high aerobic power outputs (> 85-90% VO2max) and is a more efficient fuel (kcal/L O2 used) when compared to fat."

https://www.gssiweb.org/sports-science-exchange/article/regu...


Calories do matter (obviously, as energy intake is the entire point) but as you note the specific form that the fuel takes matters. However "carbs" is a catch all that includes plenty of things that (I assume) would be of similarly minimal use in this scenario. The calories need to take a very specific chemical form for this to work.

The wording is certainly confusing here, but yes the calories don’t matter as much as the form. Eating protein and fats simply give you minimal useful calories during the race. Even most carbs won’t be useful if they are more complex.

Then why replace one imprecise term with another? Fiber is a carbohydrate. Humans extract close to no energy from it. (Though it plays another important role in the digestive system.)

Try eating 100g of grass per hour during a marathon and you will see. That's the metabolic edge horses have over humans.


Horses don't eat during races (and aren't evolutionarily disposed to marathons, anyway). No edge there; it takes quite a while for their symbiotic gut flora to downconvert fodder to glucose.

My gut flora won't do that, so they do have an edge. (Not during a marathon, but that wasn't my point anyway.)

They're equivalent modulo some multiple. It doesn't matter which one we talk about, as long as we're consistent.

It's also confusing that most nutritional labels say "calories" (Cal) when they really mean kilocalories (kcal). And those are different from regular ('small') calories (a measure of the energy needed to heat 1g of water by 1°C).

1 food calorie as listed on a food label is enough to heat 1kg of water by 1°C.


This was the explanation for why the scotch and soda diet doesn't work:

https://www.futilitycloset.com/2008/11/16/the-mensa-diet/

(If the nutritional calories in the drink had really been that same number of small thermodynamic calories, the drink would have been energetically negative for the body because of its low temperature.)
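The arithmetic, if anyone's curious (drink size, temperatures and energy content are assumed but typical numbers):

  # Why the cold-drink logic fails: nutritional Calories are kilocalories.
  # Drink size, temperatures and energy content are assumptions for illustration.
  drink_ml, fridge_c, body_c = 330, 4, 37

  # 1 kcal warms 1 kg of water by 1 degree C (and a drink is mostly water)
  kcal_to_warm = (drink_ml / 1000) * (body_c - fridge_c)
  print(round(kcal_to_warm, 1))             # -> ~10.9 kcal spent warming the drink

  drink_kcal = 140                          # assumed energy content of a sugary drink
  print(round(drink_kcal - kcal_to_warm))   # -> ~129 kcal net gain, so no free lunch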


Yeah, I assume this dumbification is spearheaded by the Americans.

It's deliberate, because you generally do not want calories from fat or protein during a marathon or other running race.

> The Maurten research team was embedded with Sawe’s team in Kenya for 32 days across six trips between last and this April. They were training his gut to absorb that load by mimicking race-day protocol in training. The hydrogel technology they have developed over the past 10 years now allows athletes to absorb 90–120 grams of carbs per hour without GI distress.

That's common knowledge, nothing revolutionary here.

There are 2 types of sugar, fructose and glucose; you can max out on glucose at around 60g/hour, and train your gut to also max out on fructose.

Personally I reached 90g/hour without training, no diarrhea or vomiting.

And you know the best part? The white sugar in everyone's kitchen is almost perfectly 50% glucose, 50% fructose.

You don't need an 'advanced' gel to do that, just a bottle of water with 120g of white sugar per hour.
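For anyone sanity-checking the DIY bottle against a commercial mix (the 4 kcal/g figure and the ~1:0.8 commercial ratio are the commonly quoted numbers; the rest follows from the comment above):

  # DIY bottle vs commercial mix, using the numbers from the comment above.
  sugar_g_per_hour = 120

  # sucrose splits ~50/50 into glucose and fructose
  glucose = fructose = sugar_g_per_hour / 2
  print(glucose, fructose)               # -> 60.0 60.0 grams, i.e. a 1:1 ratio

  # commercial mixes typically advertise ~1:0.8 glucose:fructose instead
  print(glucose, glucose * 0.8)          # -> 60.0 48.0 grams at the same glucose load

  print(sugar_g_per_hour * 4)            # -> 480 kcal/hour from 120 g of carbs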

And the shoes, yeah they're light, but guess what: other competitors also have sponsors and excellent shoes, some even run barefoot, and yet they don't go faster.

No, the real reason he is able to run so fast is, first, excellent genetics - that's the common base.

Secondly, excellent training and coaching.

Third, his steroid/peds program is on point and his body is responding well to it.

Typically for endurance runners you want profiles with low natural hematocrit so you can max out on the EPO, but there are also other considerations. For instance, are his tendons responding well to GH and other peptides?


> That's common knowledge, nothing revolutionary here.

I've never read about that. So it's not "common knowledge" - except maybe in the running community.

I like your comment for putting some facts into place (how far you can go with common options). But as I'd never heard of this before, I have no idea how common it actually is, what the effects and the science around it are, what research says about it, or how and why it is used in other sports - or why not.


> You don't need an 'advanced' gel to do that, just a bottle of water with 120g of white sugar per hour.

Did you carry all of these bottles on a marathon? Did you have to stop to get them out of your bag? How did you find drinking whilst running?

I find gels much more compact and for the amount of time I need to run one - over 4 hours there's a lot of weight I need to carry. I can store a lot of them up front in my running vest and keep going.


> I find gels much more compact and for the amount of time I need to run one - over 4 hours there's a lot of weight I need to carry. I can store a lot of them up front in my running vest and keep going.

I believe he had people from his staff handing him gels and water at designated points in the race. He wouldn't have to carry it all.

And I don't get why my solution of water + sugar is so bad. You're still going to drink water, you can't just eat gels during the whole race.

You might as well add sugar to the water that you are already going to drink anyway, this way you don't need to carry the gel.


> Third, his steroid/peds program is on point and his body is responding well to it.

Do you have any evidence of this?


There are two kinds of athletes that win global track events:

- athletes from areas with bad doping enforcement (remote places in the mountains of Kenya, Jamaica)

- athletes from countries with tons of surplus biomedical expertise (USA and other western countries)

His comment is more of a general commentary that east African countries are notorious for doping.

Like, if we find out the top two finishers here doped very few would be surprised.

That said - it's still an amazing accomplishment.


I’d be surprised, given how outspoken Sawe is about doping. He invited the AIU to test him before Berlin and Adidas also paid.

> Determined to prove he is competing clean, Adidas provided $50,000 (£36,900) to the Athletics Integrity Unit, the sport's anti-doping body, to frequently test Sawe over a 12-month period.

> That began with a reported 25 out-of-competition tests in the lead-up to Berlin in September, continuing at a similar rate as he prepared for London.

> Sawe said on Monday: "It's very important to me because it gets out the doubt in my career of athletics and yesterday's performance.

> "It shows Sabastian Sawe is clean. It shows running clean is good, and we can run clean and we can run faster.


Armstrong never failed a blood test, and he was tested, without being warned beforehand, hundreds of times a year.

This proves nothing, absolutely nothing at all. It's just a PR move by adidas and actually it seems to be working on you.


I find con men are the first to protest their innocence, and even suggest well-curated "proofs" of the same.

It's not evidence either way, in the arms-race of high-tech doping.


It pains me that running, track, and cycling are "notorious" for doping, but the major sports don't test at any practical level compared to the "dirty" sports.

Any sport, at high levels, definitely involves performance enhancing drugs.

Even ping pong, Adderall can be a tremendous help.

It's just that the other sports don't test as much; they are not worse or better.


I have no evidence of this at all. I did think the demeanor of Sebastian Sawe and second place finisher Yomif Kejelcha, both of whom finished under 2 hours, was interesting.

If you watch Kelvin Kiptum break the world record at the 2023 Chicago Marathon and Eliud Kipchoge break the world record in the 2022 Berlin Marathon, you see the joy and exasperation of their achievement.

That joy was missing in the winners of the London marathon. It's not evidence, but it's an interesting data point. Another data point: Not only did the first two finishers break two hours, the third place finisher, Jacob Kiplimo, broke the world record.


People do have different emotional responses to big achievements.

Still, this seemingly detached, matter-of-business look by the winners was intriguing.

They are all high-level athletes, not Rosie Ruiz, but hopefully this record is clean.


Interesting, I got the exact same feeling when I went to watch the Tour de France on Mont Ventoux.

One of the hardest places to climb, and that was after hours of cycling and weeks of racing.

Despite that, Pogacar and his main competitor looked so fresh, like they were out for a country stroll. The others looked exhausted, some with saliva dripping from their mouths, but these 2 and maybe a few others showed absolutely no sign of being tired.

He would later win the race, the Tour de France, and many have serious doubts about him.

He regularly smashes climbing records set during the EPO era by confirmed cheaters.

Personally that day I got confirmation that something ain't right.


We never had any for Lance Armstrong until he confessed.

I'm not an expert on the biology, but the gel has the advantage of being consumable while running. Try drinking while running. Even at a slower pace it's hard not to spill. If you want the dosage correct you can't spill.

Well, I assumed that during a marathon you have places where you stop and drink. Usually there are even volunteers giving out water, and you can have someone from your staff giving you a 1l bottle of sugar water.

But maybe I'm mistaken and they don't do that for marathons.


Most people do, yes, but the point is that in this case you don't want to stop on a sub-2 attempt.

You certainly do not need an advanced gel, but sugar solution is not the only alternative.

You can easily and cheaply make a gel at home by boiling a mixture of corn starch and fructose in water. The best way is to do this for a few minutes in a microwave oven, because it is fast and reproducible.

This has the advantage that it does not feel as sweet as a sugar solution, which becomes unpleasant in too great a quantity.

I am not aware of any other difference in the commercial gels, except that those use pre-digested starch, whose long molecules are broken into shorter molecules for a decreased time until absorption. I doubt that this short absorption time is always an advantage, because it leads to wider fluctuations in the blood glucose level during periodic feeding.


Pushing up to and over 100 is the challenge. If I remember right, 90 is 60+30 (glucose and fructose) and the upper limit - after that, GI distress.

You a cyclist or have you been doing that from running?

From homemade concoctions… you can use maltodextrin for pure glucose.


Yeah, 100 to 120 is where you start to need to get your gut used to it.

I don't remember the exact ratio either, but I'm just saying that it's nothing new and it doesn't in itself explain Sawe's performance. Nor do the light shoes.


I agree with you. I used maurten 160, 320, and the gels years ago.

Now I just throw honey into water on my runs.

It doesn't upset my stomach, and even though Maurten does feel a little better, it's worth saving tons of money over buying Maurten.


> The hydrogel technology they have developed

At what point is this just a performance enhancing “drug” — what makes something a drug?


Some say that protein or creatine is a drug.

Some say it's testosterone, but we already have natural testosterone in our body. Just like creatine.

Maybe a drug is at the point where you put in your body something that it doesn't produce naturally. Like nandrolone, clenbuterol.

Honestly, I thought I had a clear view too, but the more I think about it the more I realize that we are a bit hypocritical.


I normally consume 90g of carbs per hour when long distance biking, and so do a few other riders I know. No GI issues. I use Skratch; some other guys like Precision.

Yeah, I just literally use table sugar, which is 1:1 glucose:fructose. Maurten et al. use 1:0.8, close enough! And I don't believe the hydrogel thing is any magic, just marketing.

But yeah, this is a thing. There is some gut distress for sure at higher levels of intake. See the guy finishing second -- still under 2 hrs! -- immediately puking, which is fairly common at high intakes. I've heard of Blumenfeld (the triathlete) taking like 200g/hr or more. Insane. Though he's had some epic GI disasters too, lol.


The hydrogel textures (not Maurten but Naak, but close enough) allow me, while racing, to swallow a full 40g gel in half a second without tasting the sugar much, which is nice. Compared to thick syrup-like gels, it's a way better experience in a marathon.

But I only buy for actual races, rest of the time, I do my own 1:0.8 mix with a bit of thickener, in soft flasks. Much more cost effective.


it is a lot more challenging when running than when biking. The jostling is not your friend.

It's much easier when cycling and there is much more freedom with your breakfast choice and timing. You are stable on the bike. When running there are constant vibrations and up and down movement that can easily upset your stomach/intestines.

Pro cycling has been on the high fueling strategy for a while, with huge results for record times. It's a game changer for endurance sports.

Can someone explain this in more detail? Like, will you run out of energy, will your result suffer drastically, anything else?

The reason I am asking - I hike a lot, and for shorter hikes (<35km) I don't even bother with food. Just last Saturday I did a 28km hike with 550m elevation gain - the last meal I had was at 5pm on Friday. No breakfast. No problem. I walk at a brisk (for a layman) pace, ~7±2 km/h. Am I missing something by not caring about food there, or does it not matter anyway at my level of "performance"? The original question still stands.


Your muscles need energy to work. You have a variety of energy stores in your body, which range from small amounts of quickly available energy (ATP) to large amounts of slowly available energy (fat). Most relevant to this discussion is glycogen, which is a carbohydrate. You have about 500g in your body, which is about 2000kcal. It is more readily accessible than fat, and 2000kcal is enough for an hour, or maybe two, of high intensity exercise.

These gels and drinks are trying to replenish glycogen stores. The idea is to keep the runner using glycogen for the entire race, as it provides more energy per unit time than fat metabolism.

In your hikes your energy demands probably aren't exceeding the rate that your fat metabolism can provide.
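A crude sketch of that arithmetic (every rate below is an assumed round number for illustration, not physiology data):

  # Crude depletion model for the comment above. Every number is an assumed
  # round figure for illustration, not physiology data.
  glycogen_kcal = 2000                  # ~500 g of stored glycogen

  def hours_until_empty(burn_kcal_per_h, glycogen_share=0.8, intake_g_per_h=0):
      # assume a fixed share of the burn comes from glycogen (rest from fat),
      # and that ingested carbs (4 kcal/g) offset the glycogen drain
      drain = burn_kcal_per_h * glycogen_share - intake_g_per_h * 4
      return float("inf") if drain <= 0 else glycogen_kcal / drain

  print(hours_until_empty(1200))                        # race pace, no fuel: ~2.1 h
  print(hours_until_empty(1200, intake_g_per_h=100))    # with 100 g/h: ~3.6 h
  print(hours_until_empty(400, glycogen_share=0.4))     # brisk hike: ~12.5 h, fat covers the rest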


I'm not an expert, so do some research, but it's probably a bit of a) you've trained yourself to burn fat more (a good thing) b) you're not exercising as strenuously c) yes, if you ate you'd probably "perform better".

I'd recommend you to do your own research though.

But to add - yes, if you don't eat you will "bonk" on a long bike ride.


I think it's important to state that the "bonk point" is different depending on whether your metabolism is adapted for carb vs fat burning.

If you always train with high carb loads, you will bonk rather quickly on a long ride without any, compared to if you train fasted or more fat-adapted.


It won't matter for your level of performance. A hike isn't a race. I'm assuming you don't care about finishing the hike faster and that simply being out in the wilderness is the main point - the longer the better, as long as you still get home by the time you have something else you need to do.

Aside from that, this isn't a well-understood topic. Even extremely lean humans have more than enough fat to fuel crossing North America without technically needing to eat anything. Fat is less efficient to mobilize than glycogen, and glycogen is less efficient than free glucose already in the bloodstream, so there's some tiny bit of gain there possibly, but frankly, as fast as these guys are going, they're still far enough below lactate threshold that the energy efficiency gain is unlikely to be any real contributor to a faster finishing time. Beyond that, you can't deplete glycogen just from running a marathon, unless you were already depleted at the start of the race. You definitely can and will in longer races, but not a marathon.

Instead, what I'm pretty sure is happening here is you're just hacking the way fatigue works. Your body has more than enough fuel to do insane amounts of work without eating, but bodily systems for doing work are regulated by predictive models. There is no literal fuel gauge in your brain or muscles. They're guessing how much you have left, and those guesses are conservative because selective pressures on all animals in the past pretty strongly leaned into leaving something in reserve just in case. If you suddenly find yourself in a struggle to the death, you need to be able to give it your absolute all, and that will require mostly if not entirely glycogen, which means your body's natural regulatory systems will work very hard to keep you from depleting it; so even if you have plenty left, if it's going quickly, you'll get more tired and sore than if it were being depleted less quickly. Your autonomic systems don't know you're going to stop at 26.2 miles and eat a giant feast 20 minutes later, and the chance of a tiger randomly popping up on the course is effectively 0. All they see is that you're depleting energy stores far more quickly than you can replenish them over the long term, and they try very hard to keep you from doing that.

In a sense, this is all endurance sport really is, training away and hacking away your body's inhibitory mechanisms to get as close as you possibly can to expressing your true physiological limits, but nobody will ever truly get there.


In terms of getting to higher caloric loading despite gut-system constraints, it sounds like running has finally caught on to what professional cycling has been doing over the last couple of years. The body's ability to handle high caloric loading is the rate limiting step.

The leaders were burning a lot more than 100kcal per hour. I think you mean 100g of carbohydrates per hour.

Not burning, eating. They are eating 100g of carb per hour. Burning 1000+ calories.

Correction: 100g of carbohydrate/hr. That's approximately 400 calories/hr.

> Maurten spent months working with Sawe and other runners getting their gut capacity trained so they could absorb and burn 100 carbs per hour[0][1]

In trail running especially it's not uncommon to exceed the recommendation of 1g/Kg bodyweight/hour, up to 120g of carbs per hour, for those that can take it.


Do we know of any adverse effects of such long-term consumption of that amount of the simplest carbs? While a good source of immediate energy, simple carbs are basically a slow acting poison to various internal organs and over time bring stuff like diabetes.

It's great they don't sit idly around in the body and get transformed into fat but rather are burned in the muscles, but still, flooding the body again and again with this may have long-term negative effects that far outweigh any health gains from doing these sports, even at such intensity.

Definitely not a diet one could recommend for regular sporty guys, unless they are uber-competitive freaks who have to win at all costs.


> simple carbs are basically a slow acting poison to various internal organs and over time bring stuff like diabetes.

Do you have any evidence for this? The problem with simple carbs (if you don’t already have insulin issues) is that they’re easy to digest and provide minimal satiety so you end up consuming significant calories.

But as far as I'm aware there is no evidence that they're worse for you beyond the rapid calorie addition.


These athletes are eating nutritionally complete diets overall. The simple carb intake is a strategy just for the run itself.

That said, pretty much everything about highest-end athletics is net negative for long-term health. It’s incredibly hard on the body to run a marathon in general, let alone at record breaking pace.


I don’t think there is any elite level sport that doesn’t trade long term health for performance in competitions.


The Adidas Adios Pro Evo 3 - https://news.adidas.com/running/adidas-unveils-its-first-sub...

  adidas introduces the Adizero Adios Pro Evo 3 – the lightest and fastest Adizero shoe ever, weighing an average 97* grams.

  The race-day shoe represents the culmination of three years of cutting-edge research. It is 30% lighter, delivers 11% greater forefoot energy return, and improves running economy by 1.6% compared to its predecessor - making it a record breaker before it’s even laced up.

  The shoe will launch with a highly limited release, with ambitious runners able to sign up for the chance to get their hands on a pair from April 23. This will be followed by a wider release in the fall marathon season. The Adizero adios Pro Evo 3 will cost $500/€500.

For other marathon racing shoes, Google says:

  The Nike Alphafly 3 is the lightest in the series, weighing approximately 7.0–7.7 oz (198–218g) for a men's size 9, and 6.1 oz (174g) for women's sizes.


  The PUMA Deviate NITRO™ Elite 3 is exceptionally lightweight, typically weighing 194g (6.8 oz) for a men's size 8 (UK)

You can buy them in the UK soon, just £450 and I suspect they'll disintegrate quickly... https://www.adidas.co.uk/adizero-adios-pro-evo-3-shoes/KH767...

If anyone's interested, the shoe being purchasable by the general public is a condition of them being deemed legal for pros, after a crackdown on Supershoes a few years ago.

The other conditions, as I recall, are that only one carbon plate is allowed in them and a maximum stack height of 40mm.

It really is incredible that Nike kicked off this Supershoe arms race ten years ago and spent (presumably) an incredible amount on R&D, marketing and hype to try and complete the mission of being the first shoe to go Sub-2, and Adidas has pipped them at the last minute... twice in one race. Oh to be a fly on the wall at HQ today...

Though I assume they made a lot of that cash back in the interim selling these things to weekend warrior suckers like myself!


Most superfoam shoes actually last longer than older EVA-based foams:

> Improved durability: Supercritical foaming produces a more consistent cell structure in a midsole. This should translate to pressure and weight being more evenly distributed, which should lead to greater durability of the midsole. “We’ve done a lot of testing of what foams look like on a dynamic impactor fresh versus 300 or 500 miles later, and we see less degradation in those materials longer-term,” FitzPatrick says.

> At least in terms of the midsole’s life span, super foams may have done away with the conventional benchmark that running shoes last about 300 miles. “I think it’s a dated standard,” Caprara of Brooks says. “It’s an easy go-to to help simplify. But every foam is different, and it’s not just the foam—it’s how it’s constructed, the shoe’s geometry, the rubber underneath it. There are so many factors. If I were to tell you the Glycerin Max lasts 300 miles, that’s probably less accurate than it is accurate. It’s probably closer to 500.”

https://www.runnersworld.com/gear/a64969945/secret-to-super-...


Much like the road bikes that cost as much as a sedan, unless you are competing on a world stage, these aren’t meant for you.

I’m sure someone will happily sell them to you if you enjoy wasting money.


You don't need to be competing on the world stage to enjoy some of the benefits of Alpha flys or those pumas. 500 for the new Adidas does seem a little silly though.

While the foam may last longer than older EVA foam shoes, the outsoles of the shoes have gotten ridiculously thin these days.

The Continental rubber outsole on these Adidas Adios Pro EVO 3 shoes is so thin (less than two sheets of paper, I think) that it doesn't even appear in side/profile views of the shoes. The outsole doesn't even extend the length of the entire shoe; it stops around the middle of the shoe. So heel strikers aren't welcome and will have loads of fun in wet weather. See https://www.adidas.com/us/adizero-adios-pro-evo-3/KH7678.htm...

In general, these high stack, forward-leaning shoes are meant for going straight ahead - imagine ladies' high heel shoes with an inch and a half of foam on the bottom - any sharp turns will force the runner to slow down or they'll twist their ankles. Looking at the London Marathon course, https://www.londonmarathonevents.co.uk/london-marathon/cours..., there's about twenty ninety-degree or sharper turns.


> unless you are competing on a world stage, these aren’t meant for you.

There’s a lot of people trying to get a 3 hour marathon or some other goal where chasing the marginal gains is worth the cost to them.


What sort of gain would that be for a non-world class runner? I'm unfamiliar with high level running, but I'm curious as in most sports these sort of things provide a small benefit at the top level (seems to be about a ~3% reduction in times over the past decade since the shoe wars began), and that quickly becomes statistical noise outside of the top due to diminishing returns.

But if you really want to reduce your marathon time by 15 minutes, then gaining a few minutes from better shoes, a few minutes from a high altitude training camp/holiday in Flagstaff/Dolomites, and a few minutes from a day at a gait analysis centre, may be worthwhile - or at least a fun way to spend money on your hobby.

10% improvement on a 5 hour marathon time is more absolute seconds than on a 2.1hr marathon time.

But if you could only achieve it by adding the shoe isn't that a bit hollow?

If you are a 3:02 marathoner in normal shoes then run a 3:00 in a super shoe, you are still a 3:02 marathoner in normal shoes.


100 g of carbs is 400 calories, not 100.

Re [0] how do they measure this reliably during a race, especially the C-isotopes in the breath?

From the picture it looks like he is only wearing a watch and there is perhaps a little bulge on his left side.


They collected samples every 5k in the 30k long run before the marathon, see Substrate section in http://athlete.maurten.com/

Not to forget the sodium bicarbonate loading 2h before the start.

Which is why this feels so artificial, and why it's the 3rd most read article on the front page of the FT. Running as a sport has been very sadly and irremediably gentrified; gone are the days of Zatopek and of Abebe Bikila winning an Olympic marathon barefooted. Fuck Ineos and its owner, too, while I'm at it.

Airlines are down there amongst cinema chains and video game retail stores in terms of being terrible businesses

Want to know the easiest way to become a millionaire?

First, become a billionaire. Then, start an airline.


Amazon and Google get discounts because they bring more than just cash and help solve a very immediate problem for Anthropic

Great position to be in if you're Amazon and Google


Google Cloud also needs to be able to offer Anthropic models on Vertex, otherwise they just won't be competitive.

Microsoft is in the same boat with Azure.


Google Cloud also needs to show constant quarterly growth so what better way than simply buying it and fudging the numbers?

compartmentalize. I do development and anything finance / crypto related / sensitive on separate machines.

If you're brave you can run whonix.

The issue is developers who have publish access to popular packages - they really should be publishing and signing on a separate machine / environment.

Same with not doing any personal work on corporate machines (and having strict corp policy - vercel were weak here).


My knee-jerk reaction is that it's weird, but it makes sense:

* X will have a total of ~2GW of GPU sometime this year largely not doing much outside of 'grok is this true'

* despite no longer being in vogue with consumer devs Cursor still has a lot of developer data that can assist in building a model

* Cursor have decent enterprise relationships (while for xAI it is ~zero) and that's where the real revenue for llms + agents is

* Cursor are paying retail for tokens and competing against the frontier model co's who are also their suppliers. Not sustainable (hence their in-house composer model).

* Cursor the product covers the gamut from lovable-style prompt-to-app, an IDE, cli and bugbot

* X are using "x bucks" to pay for a potential later acquisition which are arguably overvalued based on the space x IPO hype

Option there to give X a window to make it work, otherwise walk away with a $10B breakup fee for access to its data


> largely not doing much outside of 'grok is this true'

Hey now, don't forget about its super important other use: taking innocent photos of people and regenerating them in less clothing and compromising positions.

I'm sad that I even know that.


They changed that recently, you need to be paying €10/mo for that now. The free plan and/or access for the basic Twitter plan are gone.

That doesn't make it better! It did somehow slow down the regulatory response because politicians are dumb, though.

It means X can identify users at least, so they are probably quite a bit less likely to do that.

You’ve obviously never attempted to complete a purchase while working under a regulatory body, required to test the theory.

What difference does that make?

It's much funnier now: by putting it behind a paywall, they're explicitly saying "it's okay for you to do this, you just have to purchase a license first"

Security through enshittification. Nice.

Photoshop has been able to do that for 25 years. Do people realize that AI doesn’t magically know what their real bodies look like? AI is just pasting the averaged body parts from every porn image on top of what you were wearing. I know people love to be offended, but it’s weird to me that they’ve made up a right to not have someone privately mess with an image that has your face in it.

Maybe if this tech were completely secret and this was 1997, so a video of a naked Bill Clinton high-fiving Saddam Hussein in a hot tub was likely to shock the world, then it would be a big deal. But everyone knows all images (and especially surprising ones) are likely to be AI, I’m asking sincerely, does it really matter if people make fake photos for wanking purposes?


I wouldn't be surprised if those enterprise relationships evaporate after this acquisition. There's a reason why xAI has zero enterprise customers.

> There's a reason why xAI has zero enterprise customers

I’m curious where you pull these stats from


I've had hundreds of AI-powered vendor tools come across my desk as part of my job, and I have yet to see a single one that uses Grok. I'm also not aware of any publicly announced customers for Grok's enterprise offering. The Grok Enterprise website doesn't list any customers.

For enterprises it's way easier to delist Cursor from the list of used tools than to have a relationship with someone known publicly for neofascist aspirations.

xAI is not, and was not, that bad; it's just that everybody ignores it for anything serious, for obvious reasons.


I am not, I have a Grok subscription and I find Grok genuinely useful.

I hate Musk, but Grok is not a bad LLM. It's very useful for tracking down old magazines that I'm looking for, which are public domain or copyright orphans. Often the major players will outright reject searching places like archive.org as they immediately assume you are trying to commit some level of copyright infringement. With Grok it'll either just do it, or do it with a mild prod or jailbreak.

You've literally got tools like opencode that are MIT licensed. Most of those points X could do on their own, or they are things that make this attractive for Cursor, not X.

e.g. Need developer data? Use some of that spare GPU compute, hand out free top end model coding access for a bit and you'll very rapidly have developer data

>decent enterprise relationships

I guess. 60B worth of "relationships" though?


> hand out free top end model coding access for a bit and you'll very rapidly have developer data

They tried this - grok was free on openrouter for a while


The marketing push was there too; everyone was saying Grok had jumped ahead of Claude and Codex, yet I never got that when using all 3.

Turns out that benchmaxxing doesn't help if it's not very good when people actually try it.

It's more useful to have access to your full code base compared to having access to only your input and the output they generate.

But imagine if they handed out free access to Kimi or GLM-5. Actually, I still wouldn't use it, because I avoid APIs that say they hold on to data.

And presumably they got data from it...

and then released a model that didn't really leave a mark with code performance

But if the developers are presumably going to use the model you give out, what data are you going to get from them that's useful?

I don't know - it was GP speculating that there is value there on a scale to justify 60B, not me.

Yes, I think you're right. Reinforcement learning is extremely compute heavy, and compute is something Cursor doesn't have. And xAI doesn't have the coding agent data Anthropic/OpenAI have, but it does have the compute.

However, one thing in AI is that while the usage goes up extremely quickly, it tends to go down just as fast. I know a lot of companies that are in the process of switching from Cursor to Claude Code, so in 6-12 months I'm not entirely sure of the data quality/quantity.

Also I think it is telling that they are calling them SpaceX not X. The X brand is absolutely toxic, especially in enterprise.


> Also I think it is telling that they are calling them SpaceX not X. The X brand is absolutely toxic, especially in enterprise.

it might not help all that much once it turns into a "grok" harness or is otherwise associated with Elon


doing much outside of 'grok is this true'

Hey... don't forget "grok is this person Jewish (hint hint)"... or... just "grok do your thing"

https://www.threads.com/@trosen76/post/DTlYw7sFXvR


I think you're right. Other providers can offer coding subscriptions that use in-house models, and this sets the stage for a Grok coding plan that's built in to Cursor.

$60 billion seems expensive, but it gives them a much better chance at competing in the market than if they started their own harness from scratch.


Absolutely no enterprise - I work in enterprise cloud consulting - absolutely no company would trust Grok with their IP compared to Anthropic or OpenAI with Musk’s reputation on how he runs his businesses.

Anthropic just tolerates the money losing developers who pay $20/$200 for subscriptions.


They'll sign a contract, and the contract will be very clear about whether using user prompts as training data is allowed or not. They're not going to care much about reputation; they'll care about the terms they sign with.

I don't get the sense that Elon's companies care much for the contracts they sign.

e.g. https://arstechnica.com/tech-policy/2022/12/twitter-stiffs-s...

I wouldn't trust a contract from one of Elon's companies unless they were willing to put in escrow an amount that would make me whole in case of a breach on their side. (And that amount would be quite large in the case of a potential breach involving using prompt data for training.)


Maybe the play here is a way to sneak Grok into enterprise by calling it Cursor. Or they'll just give up on it and run Cursor's fine-tuned Kimi on Colossus.

You forgot to consider whether all this is worth $60B.

> forgot to consider whether all this is worth $60B

I see two possibilities:

(1) SpaceX is paying with stock; and

(2) the $60bn pay-out is (a) conditional or (b) never going to be exercised—it was a stalking horse for negotiating the $10bn terms, which gives SpaceX everything it actually wants.


I think a) and b) can both be true. We don't know what the contingency is - could be something absurd.

Also one would definitely offer to pay in stock if they believe it is massively over-valued lmao.


$1B to $2B ARR in a few months, with a projection of $6B ARR by year's end. If xAI wants to have its own tools just like OpenAI and Anthropic, then it's not an unusual move.

Extrapolating from a few months to a full year and calling it Annual Recurring Revenue is one of those modern startup valuation gimmicks that I can't help but laugh at.

Sometimes it helps to go back to the basics to understand company performance: money in, money out?
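To make the gimmick concrete, a toy version of the arithmetic (all numbers invented, not Cursor's actuals):

  # How an "ARR" headline gets manufactured from one hot month. Numbers invented.
  monthly_revenue = 2e9 / 12            # the month behind a "$2B ARR" claim
  print(f"${monthly_revenue * 12 / 1e9:.1f}B ARR")   # -> $2.0B, with no full year of revenue

  # projecting the recent growth rate forward compounds the optimism
  monthly_growth = 0.20                 # assumed 20% month-over-month growth
  projected = monthly_revenue * (1 + monthly_growth) ** 6 * 12
  print(f"${projected / 1e9:.1f}B ARR by year end")  # -> ~$6.0B if growth never slows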


Sure, but early profit rarely tells the whole story. They already sell to half the fortune 500 and enterprise is sticky. Gains in efficiency, like a discount on data center access, can be remarkable for their profit outlook.

In these cases ARR = annual run rate, commonly used when your revenue is either going vertical (Cursor - good) or your revenue is choppy and full of short-term projects (Mercor - bad).

it's not dollars it's X bucks

I think it also represents a bet that in some sense Cursor's model capabilities are resource limited rather than talent limited. If that's true, $60B will end up being a bargain. If not true, well it's an expensive lesson but that's the nature of things.

You forgot that Claude is burning goodwill from its own capacity constraints, leading to periods of 'dumbness'. It's a catalyst to cause me and others to switch back to Cursor if they can get their act together

> despite no longer being in vogue with consumer devs Cursor still has a lot of developer data that can assist in building a model

care to share more about this?


> Cursor still has a lot of developer data that can assist in building a model

Their composer model is seriously good. I’ve been eyeing a cursor sub just to use it in OpenCode. They have a nice moat here.

> Cursor have decent enterprise relationships (while for xAI it is ~zero)

There's a reason for that. Those enterprise relationships are almost certainly going to sour at least a bit, if not because of Musk's toxic image then because of his erratic behavior.


Just to point it out, Cursor has not made any good models themselves. Composer 2 is Kimi K2.5, and they tried to pass it off as their own until people noticed that the API specified it as Kimi.

Cursor has released a technical paper [1] and several blog posts [2] describing the continued pretraining and RL they do on top of Kimi K2.5.

It is true that they were not transparent about the base model that they used until the model slug was discovered by a Twitter user via the API.

[1]: https://arxiv.org/abs/2603.24477 [2]: https://cursor.com/blog/real-time-rl-for-composer


I've tried Kimi a bit (not much to be fair) and didn't find it to be that great. Composer feels more "real world" capable to me, which is not a big surprise since Cursor has all the data to tune it.

Kimi is the base, but they've done tons of finetuning on top to produce a really good completions model.

Yeah, Composer 2 is legitimately so impressive. It is my daily driver right now both on professional and personal projects. I only find myself reaching for 5.3 Codex/GPT 5.4 when exploring a lot of technical documentation or code and for Sonnet/Opus when working on UI. Everything else is Composer.

Even if it wasn't for Musk, are these relationships really worth so much? There is a certain value in being on the approved vendor list, but it seems to me that there really isn't a lot of vendor lock-in. I think most people could switch to opencode, claude code or codex pretty easily. Maybe these relationships would be worth a lot if companies signed long-term contracts, but I doubt many did.

Why not buy an OpenCode subscription? You'll have access to better models. glm5.1 and qwen3.6 have been really good for me

> despite no longer being in vogue with consumer devs

Is it in vogue with enterprise devs?


British?

“Cursor have” and “Cursor are” is awkward to read.


Now you know what it feels like to be British reading practically any other English source on the Internet.

That's not British, that's just old people


No, I'm claiming your source is outdated. It has become an old people thing now

hn is this true

Claude Code defaulting to a certain set of recommended providers[0] and frameworks is making the web more homogenous and that lack of diversity is increasing the blast radius of incidents

[0] https://amplifying.ai/research/claude-code-picks/report


It's interesting how many of the low-effort vibecoded projects I see posted on reddit are on vercel. It's basically the default.

Reddit vibecoded LLM posts are kind of fascinating for how homogenous they are. The number of vibe coded half-finished projects posted to common subreddits daily is crazy high.

It’s interesting how they all use LLMs to write their Reddit posts, too. Some of them could have drawn in some people if they took 5 minutes to type an announcement post in their own words, but they all have the same LLM style announcement post, too. I wonder if they’re conversing with the LLM and it told them to post it to Reddit for traction?


I find that often the developers of these apps don't speak English, but want to target an English-speaking audience. For the marketing copy, they're using the LLM more to translate than to paraphrase, but the LLM ends up paraphrasing anyway.

I think they simply haven't figured out that the barrier to entry is so low that no one really cares what their app can do, even if it does something genuinely useful.

> For the marketing copy, they're using the LLM more to translate than to paraphrase, but the LLM ends up paraphrasing anyway.

What do you see as the distinction between "translating" and "paraphrasing"? All translations are necessarily paraphrased.


While that’s true, translations often vary in terms of how faithful they are to the source vs how idiomatic they are in the target language. Take for example the French phrase “j’ai fait une nuit blanche”, which literally means “I did a white night”. Clearly that’s a bad translation. A more natural translation might be “I pulled an all-nighter”.

Similarly, “j’ai un chat dans la gorge” probably translates best as “I’ve got a frog in my throat”, even though it’s a completely different animal, it’s an obvious mapping.

Those are fairly simple because they have neat English translations, but what about for example “C’est pas tes oignons”, which literally means “these aren’t your onions”, but is really a way of telling someone it’s none of their business. You could translate it as “it’s none of your business”, or “keep your nose out” or “stay in your lane” or lots and lots of other versions, with varying levels of paraphrasing, which depend on context you can’t necessarily read purely from the words themselves.


I'll preface this by noting that I don't disagree with anything you've said, but I do have some comments:

> Similarly, “j’ai un chat dans la gorge” probably translates best as “I’ve got a frog in my throat”, even though it’s a completely different animal, it’s an obvious mapping.

Those obvious mappings can sometimes be too seductive for the translator's good. One example is that people translating English-loanwords-in-a-foreign-language into English usually can't help but translate them as the original English word.

Another example is that, in China, there is a cultural concept of a 狐狸精, which you might translate as "fox spirit". (The "fox" part of the translation is straightforward, but 精 is a term for a supernatural phenomenon, and those are difficult to translate.) They can do all kinds of things, but one especially well-known behavior is that they may take the form of human women and seduce (actual) human men. This may or may not be harmful to the man.

Because of this concept, the word also has a sense in which it may be used to insult a (normal) woman, accusing her of using her sex appeal toward harmful ends.

Chinese people translating this into English almost always use the word "vixen", which is, to be fair, a word that may refer to a sexy human woman or to a female fox. But I really don't feel that they're equivalent, or even that they have much overlap. (Unlike the situation with English loanwords, I think native speakers of Chinese are much more likely to choose this translation than native speakers of English are.)

> what about for example “C’est pas tes oignons”, which literally means “these aren’t your onions”

The form closest in structure to that would probably be "none of your beeswax", which is just a minorly altered version of "none of your business". I assume the substitution of "beeswax" is humorous and based on phonetic similarity.

As you note, there are multiple dimensions relevant to translating this and several positions you could take along each. For this particular idea, I would say the two most important dimensions are playfulness and rudeness; it's a very common idea and the language is rich in options for both.

> translations often vary in terms of how faithful they are to the source vs how idiomatic they are in the target language. Take for example the French phrase “j’ai fait une nuit blanche”, which literally means “I did a white night”. Clearly that’s a bad translation. A more natural translation might be “I pulled an all-nighter”.

This isn't what I had in mind. Here are some idiomatic translations:

I pulled an all-nighter.

I was up all night.

I didn't get any sleep.

I never got to bed.

I've been up since [something appropriate to the context].

[Something appropriate to the context] kept me up all night.

I wouldn't call any of the first four "more paraphrased" than the others. (The last two might be, if they included extra information.) If these were reports of the English speech of some other person, one of them (or less) would be a quote, and the others would be paraphrases. But as a report of French speech, they're all paraphrases. The first shares a little more grammatical structure with the French, which doesn't really mean much.

For a fairly similar example from my personal life, someone said to me 这是我第一次听说, and my spontaneous translation of it was "I've never heard that before", despite the fact that there is technically a perfectly valid English expression "this is the first I've heard of that".

What's closer to the grammatical structure of the Chinese? That's hard for me to say. You could analyze 我 as the subject of 听说, and I lean toward that analysis, but my instincts for Mandarin are weak. You might see 我 as being more strongly attached to 第一次, meaning something more like "my first time (to hear ...)" than "I hear (for the first time) ...".

But for whatever it's worth, a word by word literal gloss would be "this is me first time hear".

Between languages with less historical interaction than English and French, it's quite possible that a syntax-preserving translation of some sentence just doesn't exist.


They are not exclusive to reddit. HN has also been full of vibe submissions of the same nature.

It's insane how most of the dev subreddits are filled with slop like this. I've thought the same thing - why can't they even spend 5 minutes to write their own post about their project?

Yeah, in the last 6 to 10 months /r/rust has become littered with this stuff. There's still some good discussion going on but now I have to sort through garbage. The signal-to-noise ratio is so out of whack these days that I generally avoid platforms like Substack, Medium and so on too.

Next, Vercel, and Supabase are basically the foundation of every vibe-coded project by mere suggestion.

If this kind of vulnerability exists at the platform level, imagine how vulnerable all the vibe-coded apps are to this kind of exploit.

I don't doubt the competence of the Vercel team actually, and that's the point. Imagine this happening to a top company which has its pick of the best engineers on a global scale.

My experience with modern startups is that they're essentially all vulnerable to hacks. They just don't have the time to actually verify their infra.

Also, almost all apps are over-engineered. It's impossibly difficult to secure an app with hundreds of thousands of lines of code and 20 or so engineers working on the backend code in parallel.

Some people are asking "Why didn't they encrypt all this?" That's a naive way to think about it. The platform has to decrypt the tokens at some point in order to use them. The best we can do is store the tokens and rotate them frequently.

If you make the authentication system too complex, with too many layers of defense, you create a situation where users will struggle to access their own accounts... And you only get marginal security benefits anyway. Some might argue the complexity creates other kinds of vulnerabilities.
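
To make that concrete, here is a minimal sketch of storing a token encrypted at rest with a rotation window, using the aes-gcm crate's documented API. The struct, field names, and 24-hour window are illustrative assumptions, and in a real system the key would come from a KMS/HSM rather than being generated in-process:

    // Sketch: encrypt a third-party token before persisting it, and treat
    // anything older than a rotation window as expired. Assumes the aes-gcm
    // crate (0.10); names and the window are made up for illustration.
    use aes_gcm::aead::{Aead, AeadCore, KeyInit, OsRng};
    use aes_gcm::{Aes256Gcm, Nonce};
    use std::time::{Duration, SystemTime};

    struct StoredToken {
        nonce: Vec<u8>,      // unique per encryption
        ciphertext: Vec<u8>, // encrypted token bytes
        created_at: SystemTime,
    }

    fn seal(cipher: &Aes256Gcm, token: &[u8]) -> StoredToken {
        let nonce = Aes256Gcm::generate_nonce(&mut OsRng);
        let ciphertext = cipher.encrypt(&nonce, token).expect("encryption failed");
        StoredToken { nonce: nonce.to_vec(), ciphertext, created_at: SystemTime::now() }
    }

    // The platform still has to decrypt the token to use it; the most this buys
    // you is limiting how long any one stored token stays usable.
    fn open(cipher: &Aes256Gcm, stored: &StoredToken, max_age: Duration) -> Option<Vec<u8>> {
        if stored.created_at.elapsed().ok()? > max_age {
            return None; // stale: force rotation / re-auth instead of using it
        }
        cipher
            .decrypt(Nonce::from_slice(&stored.nonce), stored.ciphertext.as_ref())
            .ok()
    }

    fn main() {
        let key = Aes256Gcm::generate_key(OsRng); // in practice: fetched from a KMS, not held next to the data
        let cipher = Aes256Gcm::new(&key);
        let stored = seal(&cipher, b"example_provider_access_token");
        assert!(open(&cipher, &stored, Duration::from_secs(60 * 60 * 24)).is_some());
    }

None of this removes the fundamental exposure described above; it only narrows the window in which a leaked row is useful.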


the vibe coders don't know what they don't know so whatever code is written on their behalf better be up to best practices (it isn't)

They’re all shit too. All three decided to do custom auth instead of OIDC and it’s a nightmare to integrate with any of them.

Maybe that's why all these vibe coded slop apps also use Clerk for auth alongside Supabase etc

10 years ago it was Heroku and Three.js.

10 years ago it was Heroku and Ruby on Rails*

but these days Ruby on Rails is not a circus the way Next.js is.

see [0]: Rails security Audit Report

[0]: https://ostif.org/ruby-on-rails-audit-complete/


Can you elaborate as to why your linked article would suggest Next.js is a "circus"? If anything, DHH (creator of RoR) and his Looney-Tunes opinions are much closer to a circus sideshow than anything I've seen come out of Vercel.

More like 15. By 2016, Rails was supposedly dead and we were all going to be running the same code on the front end and back end in a full stack, MongoDB euphoria.

New one coming in 5 years. Cycle repeats itself.

I don't think so. AIs are going to freeze the tooling to what we have today, since that's what's in the training corpus, and it's self-reinforcing.

Nah, the good LLMs can generally web search and read documentation well enough that the fact that pre-training isn’t up to the minute is not a serious concern. Badly-documented projects are more of a concern, but they weren’t likely to get much pre-AI usage either.

I've done a ton of low-effort vibe-coded projects that suit my exact use cases. In many cases, I might do a quick Google search, not find an exact match, or find some bloated adware or subscription-ware and not bother going any further.

Claude Code can produce exactly what I want, quickly.

The difference is that I don't really share my projects. People who share them probably haven't realized that code has become cheap, and no one really needs/wants to see them since they can just roll their own.


The kind of code, with the kind of quality, that LLMs can output has become cheap. Learning has not, and neither has genuinely well-designed, human-designed code. This might be surprising to the majority of users on HN, but once a really good programmer joins your team, one who also uses LLMs to speed up the parts they aren't good at, you really learn how far away vibe coders are from producing something worth using.

There's a push and pull here: TypeScript + React + Vercel are also very amenable to LLM-driven development, due to a mix of how many examples are in the LLMs' training data, how cheap the deployment is, and how quickly you can get going in the ecosystem.

Another Anthropic revenue stream:

Protection money from Vercel.

"Pay us 10% of revenue or we switch to generating Netlify code."


Wouldn’t Vercel still make money in that scenario since Netlify uses them?

Netlify uses AWS (and Cloudflare? Vercel def uses Cloudflare)

Netlify and Vercel both use AWS. AFAIK neither uses Cloudflare. Vercel did use Cloudflare for parts of its infra until about a year ago though.

Ah, ok. I knew they did use Cloudflare but had no idea they migrated off of it.

Cloudflare CEO treated Vercel CEO too badly in public, he needed to migrate off to save face.

Vercel runs on AWS.

Which PaaS are running on their own servers and earning a profit?

The other day, I was forcing myself to use Claude Code for a new CRUD React app[1], and by default it excreted a pile of Node JS and NPM dependencies.

So I told it something like "don't use anything Node at all", and it immediately rewrote it as a Python backend, volunteering that it was minimizing dependencies in how it did that.

[1] only vibe coding as an exercise for a throwaway artifact; I'm not endorsing vibe coding


> forcing myself to use Claude Code

You don't have to live like this.


Even though I'm a hardcore programmer and software engineer, I still need to at least stay aware of the latest vibe-coding stuff, so I know what's good and bad about it.

You can tell Claude to use something highly structured like Spring Boot / Java. It's a bit more verbose in code, but the documentation is very good, which helps Claude use it well. And the strict nature of Java is nice for keeping Claude on track and finding bugs early.

I've heard others had similar results with .NET/C#


Spring Boot is every bit as random mystery meat as Vercel or Rails. If you want explicit then use non-Boot Spring or even no Spring at all.

ASP.NET 10 with vertical slice architecture is good and clean.

Same for Go.

My vibe-coded one-off app projects all have, by default, "self-contained single-file static client-side webapp, no build step, no React or other webshit nonsense" in their prompt. For more complex cases, I drop the "single file". Works like a charm.

I'm struggling to understand how they bought Bun but their own AI models are more fixated on writing Python for everything than even the models of their competitor, who bought the actual Python ecosystem (OAI with uv).

You wanted it to use React but not node? Am I missing something here?

You can use React without Node by using a CDN. You can even use JSX if you use Babel in a script tag. It's just inefficient and stupid as hell.

It emits Actix and Axum extremely well, with solid support for fully AOT type-checked sqlx.
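
As a small illustration of the compile-time checking being referred to, a sketch assuming sqlx's query! macro with a DATABASE_URL (or offline metadata) available at build time; the users table and columns are hypothetical:

    // sqlx::query! checks this SQL against the schema when the crate compiles,
    // so a typo'd column or wrong parameter type is a build error, not a runtime 500.
    // Requires DATABASE_URL or prepared offline metadata at compile time.
    async fn user_email(pool: &sqlx::PgPool, id: i64) -> Result<String, sqlx::Error> {
        let row = sqlx::query!("SELECT email FROM users WHERE id = $1", id)
            .fetch_one(pool)
            .await?;
        Ok(row.email)
    }

Whether the extra build-time plumbing is worth it is debatable, but it's exactly the kind of guardrail that keeps generated code honest.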

Switch to vibe coding Rust backends and freeze your supply chain.

Super strong types. Immaculate error handling. Clear and easy to read code. Rock solid performance. Minimal dependencies.

Vibe code Rust for web work. You don't even need to know Rust. You'll osmose it over a few months of using it. It's not hard at all. The "Rust is hard" memes are bullshit, and the "difficult to refactor" claim was (1) never true and (2) not even applicable with tools like Claude Code.

Edit: people hate this (-3), but it's where the alpha is. Don't blindly dismiss this. Serializing business logic to Rust is a smart move. The language is very clean, easy to read, handles errors in a first class fashion, and fast. If the code compiles, then 50% of your error classes are already dealt with.

Python, Typescript, and Go are less satisfactory on one or more of these dimensions. If you generate code, generate Rust.


Ok I mean this is a little crazy, "minimal dependencies" and Rust? Brother I need dependencies to write async traits without tearing my hair out.

But you're also correct in that Rust is actually possible to write in a more high-level way, especially for web where you have very little shared state, and the state that is shared can just be wrapped in Arc<> and put in the web framework's context. It's actually dead easy to spin up web services in Rust, and they have a great set of ORMs if that's your vibe too. Rust is expressive enough to make schema-as-code work well.
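
A minimal sketch of that Arc-wrapped shared state pattern, assuming axum 0.7 and tokio; the AppState contents are made up for illustration:

    // Shared, read-mostly state for a Rust web service: wrap it in Arc, hand it
    // to the router, extract it in handlers. Assumes axum 0.7 + tokio.
    use std::sync::Arc;
    use axum::{extract::State, routing::get, Router};

    struct AppState {
        greeting: String, // anything shared across requests lives here
    }

    async fn hello(State(state): State<Arc<AppState>>) -> String {
        format!("{}, world", state.greeting)
    }

    #[tokio::main]
    async fn main() {
        let state = Arc::new(AppState { greeting: "hello".to_string() });
        let app = Router::new().route("/", get(hello)).with_state(state);
        let listener = tokio::net::TcpListener::bind("0.0.0.0:3000").await.unwrap();
        axum::serve(listener, app).await.unwrap();
    }

If the state needs mutation, the usual move is Arc<Mutex<...>> or a database pool, but the shape of the service stays the same.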

On the dependencies: if you're concerned about the possibility of future supply chain attacks (because Rust doesn't have a history like Node), you can vendor your deps and bypass future problems. `cargo vendor` and you're done. Node has no such ergonomic path to vendoring, which imo is a better solution than anything else besides maybe Go (another great option for web services!). Saying "don't use deps" doesn't work for any language other than something like Go (and you can run `go mod vendor` as well).

But yeah, in today's economy where compute and especially memory is becoming more constrained thanks to AI, I really like the peace of mind knowing my unoptimised high level Rust web services run with minimal memory and compute requirements, and further optimisation doesn't require a rewrite to a different language.

Idk mate, I used to be a big Rust hater but once I gave the language a serious try I found it more pleasant to write than both TypeScript and Go. And it's very amenable to AI if that's your vibe(coding), since the static guarantees of the type system make it easier for AI to generate correct code, and the diagnostic messages allow it to reroute its course during the session.


How are you getting low dependencies for Web backend with Rust? (All my manually-written Rust programs that use crates at all end up pulling in a large pile of transitive dependencies.)

Cargo is just as vulnerable as NPM. It's just a smaller, more difficult target.

Except with using Rust like this you're using it like C#. You don't get to enjoy the type system to express your invariants.

> Python

I once made a Go multi-person pomodoro app by vibe coding with Gemini 3.1 Pro (on the day it first launched). I asked it to have basically only one outside dependency, gorilla/websocket, with everything else from the standard library, and then I deployed it to Hugging Face Spaces for free.

I definitely recommend Go as a language if you wish to vibe code. Some people recommend Rust, but Go compiles fast, cross-compiles easily, produces portable binaries, and is really awesome with its standard library.

(Anecdotally, I also feel like there's some chance the models are being diluted, because this app has become my benchmark test and other models have performed somewhat worse, or at least not the same, to be honest. It's only been a few days since I started using Hacker News less frequently, and I was already seeing suspicions like these about Claude and other models on the front page, IIRC. I don't know enough about Claude Opus 4.7, but I just read Simon's comment on it, so it would be cool if someone could give me the gist of what has been happening over the past few days.)


It's a good point, but I don't think the problem here is Claude. It's how you use it. We need to be guiding developers to not let Claude make decisions for them. It can help guide decisions, but ultimately one must perform the critical thinking to make sure it is the right choice. This is no different than working with any other teammate for that matter.

That's not helped by a recent change to their system prompt "acting_vs_clarifying":

> When a request leaves minor details unspecified, the person typically wants Claude to make a reasonable attempt now, not to be interviewed first. Claude only asks upfront when the request is genuinely unanswerable without the missing information (e.g., it references an attachment that isn’t there).

> When a tool is available that could resolve the ambiguity or supply the missing information — searching, looking up the person’s location, checking a calendar, discovering available capabilities — Claude calls the tool to try and solve the ambiguity before asking the person. Acting with tools is preferred over asking the person to do the lookup themselves.

> Once Claude starts on a task, Claude sees it through to a complete answer rather than stopping partway. [...]

In my experience before this change, Claude would stop, give me a few options, and 70% of the time I would give it an unlisted option that was better. It actually would genuinely identify parts of the specs that were ambiguous and needed to be better defined. With the new change, Claude plows ahead making a stupid decision and the result is much worse for it.


No, the problem is the people building and selling these tools. They are marketed as a way of outsourcing thinking.

So what are you suggesting, that we not allow companies to sell such tools?

I'm suggesting people shouldn't lie to sell things because their customers will believe them and this causes measurable harm to society.

AI does outsource thinking. It is not a lie.

If you don't tend to think much in the first place or have low expectations, then yes

I think if you believe that, you're either lying or experiencing psychosis. LLMs are the greatest innovation in information retrieval since PageRank, but they are not capable of thought any more than PageRank is.

I think most people would agree.

However, it is less clear how to do this; people mostly take the easiest path.



Eternal Sloptember

I guess engineers can differentiate their vibecoded projects by selecting an eccentric stack.

Choosing an eccentric stack even makes the LLMs do better. Like Effect.ts or Elixir.

I actually noticed the same. Having it work on Mithril.js instead of React seems (I know it's all just kind of hearsay) to generate a lot cleaner code. Maybe it's just because I know and like Mithril better, but it's also likely because of the project ethos and because it's being used by people who really want to use Mithril in the wild. I've seen the same for other slightly more exotic stacks like Bottle vs Flask, and telling it to generate Scala or Erlang.

That makes sense. There's less training data, but it is better training data. LLMs were trained on really bad pandas code, so they're really, really good at generating bad pandas. With Elixir, there's less of it, but what there is, is higher quality, so what it outputs is of higher quality too.

That's been my experience as well. Claude code does better with Elixir (plus I enjoy working on the code better after :) )

> a. Actually do something sane but it will eat your session

> b. (Recommended) Do something that works now, you can always make it better later


Shouldn’t Claude just refuse to make decisions, then, if it is problematic for it to do so? We’re talking about a trillion dollar company here, not a new grad with stars in their eyes

It's just an LLM.

The thing I can’t stop thinking about is that AI is accelerating convergence to the mean (I may be misusing that).

The internet does that but it feels different with this


> convergence to the mean

That's a funny way of saying "race to the bottom."

> The internet does that but it feels different with this

How does "the internet do that?" What force on the internet naturally brings about mediocrity? Or have we confused rapacious and monopolistic corporations with the internet at large?


I'd call it race to the median, converging to mediocrity, or what the kids would call "mid"

> How does "the internet do that?"

Stack exchange. Google.


Please explain how these cause a "convergence to the mean."

I assume they’re saying that the most common and popular solutions propagate, power-law style. LLMs just amplify that loop.

Indeed 'race to the bottom' seems more like capitalism in general.

Yes, this is a genuine problem with AI platforms. It does sometimes feel like they're suspiciously over-promoting certain solutions; to the point that it's not in the AI platform's interest.

I know what it's like being on the opposite side of this as I maintain an open source project which I started almost 15 years ago and which has over 6k GitHub stars. It's been thoroughly tested and battle-tested over long periods of time at scale with a variety of projects; but even if I try to use exact sentences from the website documentation in my AI prompt (e.g. Claude), my project will not surface! I have to mention my project directly by name, and then it starts praising it and its architecture, saying that it meets all the specific requirements I had mentioned earlier. Then I ask the AI why it didn't mention my project before if it's such a good fit. Then it hints at the number of mentions in its training data.

It's weird that clearly the LLM knows a LOT about my project and yet it never recommends it even when I design the question intentionally in such a way that it is the perfect fit.

I feel like some companies have been paying people to upvote/like certain answers in AI-responses with the intent that those upvotes/likes would lead to inclusion in the training set for the next cutting-edge model.

It's a hard problem to solve. I hope Anthropic finds a solution because they have a great product and it would be a shame for it to devolve into a free advertising tool for select few tech platforms. Their users (myself included) pay them good money and so they have no reason to pander to vested interests other than their own and that of their customers.


> It's weird that clearly the LLM knows a LOT about my project and yet it never recommends it even when I design the question intentionally in such a way that it is the perfect fit.

That's literally what "weight" means - not all dependencies have the same %-multiplier to getting mentioned. Some have a larger multiplier and some have a smaller (or none) multiplier. That multiplier is literally a weight.


It's so trivial to seed. LLMs are basically the idiots that have fallen for all the SEO slop on Google. Did some travel planning earlier and it was telling me all about extra insurances I need and why my normal insurance doesn't cover X or Y (it does of course).

This is why I'm glad I learned to code before vibe coding. I tell Codex exactly what tools and platforms to use instead of letting it default to whatever is the most popular, and I guard my .env and API keys carefully. I still build things page by page or feature by feature instead of attempting to one-shot everything. This should be vibe-coding 101.

$ Good idea! Let's add a Redis cache to that!

That report greatly overstates the tendency to default to Vercel for web because, among its 2 web projects, it mandated that one use Next.js and that the other be a React SPA as well. Obviously those prime Claude towards Vercel. They should've had the second project be a non-React web project for diversity.

Yeah, I’ve been tracking what devtools different models choose: https://preseason.ai

Interestingly, a recent conversation [1] between Hank Green and security researcher Sherri Davidoff argued the opposite: more GenAI-generated code targeted at specific audiences should result in a more resilient ecosystem because of greater diversity. That obviously can't work if they end up using the same 3 frameworks in every application.

[1] https://www.youtube.com/watch?v=V6pgZKVcKpw


I love Hank, but he has such a weird EA-shaped blind spot when it comes to AI. idgi

It is true that "more diversity in code" probably means fewer turnkey spray-and-pray compromises, sure. Probably.

It also means that the models themselves become targets. If your models start building the same generated code with the same vulnerability, how're you gonna patch that?


> start building the same generated code with the same vulnerability

This situation is pretty funny to me. Some of my friends who aren't technical tried vibe coding, showed me what they built, and asked for feedback.

I noticed they were using Supabase by default and pointed out that their database was completely open with no RLS.

So I told them not to use Supabase in that way, and they asked the AI (various different LLMs) to fix it. One example prompt I saw was: "please remove Supabase because of the insecure data access and make a proper secure way".

Keep in mind, these people don't have a technical background and do not know what Supabase or Node or Python is. They let the LLM install Docker, install Node, etc., and just hit approve on "Do you want to continue? bash(brew install ..)"

What's interesting is that this happened multiple times with different AI models. Instead of fixing the problem the way a developer normally would, like moving the database logic to the server or creating proper API endpoints, it tried to recreate an emulation of Supabase, specifically PostgREST, in a much worse and less secure way.

The result was an API endpoint that looked like: /api/query?q=SELECT * FROM table WHERE x

In one example GLM later bolted on a huge "security" regular expression that blocked , admin, updateadmin, ^delete* lol
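
For contrast, here is a minimal sketch of what "creating proper API endpoints" could look like if the stack were Rust with axum and sqlx; the notes table, fields, and route are hypothetical. The point is that the client only ever supplies an id, and the SQL lives server-side as a parameterized query:

    // Hypothetical "proper" endpoint: the browser sends GET /notes/42, never SQL.
    // Assumes axum 0.7, serde, and sqlx with the Postgres feature; names are made up.
    use axum::{extract::{Path, State}, http::StatusCode, routing::get, Json, Router};
    use serde::Serialize;
    use sqlx::PgPool;

    #[derive(Serialize, sqlx::FromRow)]
    struct Note {
        id: i64,
        body: String,
    }

    async fn get_note(
        State(pool): State<PgPool>,
        Path(id): Path<i64>,
    ) -> Result<Json<Note>, StatusCode> {
        // Parameterized query: `id` is bound, never spliced into the SQL string.
        sqlx::query_as::<_, Note>("SELECT id, body FROM notes WHERE id = $1")
            .bind(id)
            .fetch_one(&pool)
            .await
            .map(Json)
            .map_err(|_| StatusCode::NOT_FOUND)
    }

    fn routes(pool: PgPool) -> Router {
        Router::new().route("/notes/:id", get(get_note)).with_state(pool)
    }

Any server-side framework can express the same shape; what matters is that raw query strings and table access never cross the trust boundary.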


As a general hobbyist type, I can attest to the above post; it is 100% valid and accurate.

This entire process is something anyone can test and reproduce; I was definitely steered towards both Vercel and Supabase by Gemini. It isn't model-specific.


> The result was an API endpoint that looked like: /api/query?q=SELECT * FROM table WHERE x

Ahhhhhhhgh. If I ever make that cybersecurity house of horrors, that's going in it


Is that bad? I would think having everyone on the same handful of platforms should make securing them easier (and means those platforms have more budget to do so), and with fewer but bigger incidents there's a safety-of-the-herd aspect - you're unlikely to be the juiciest target on Vercel during the vulnerability window, whereas if the world is scattered across dozens or hundreds of providers that's less so.

When everyone uses the same handful of platforms, then everyone becomes the indirect target and victim of those big incidents. The recent AWS and Cloudflare outages are vivid examples. And then the owners of those platforms target everyone with their enshittification as well to milk more and more money.

I'm not against making agents scapegoats, but this is a problem found among humans as well.

That's only looking at half of the equation.

That lack of diversity also makes patches more universal, and the surface area more limited.


"Nobody ever got fired for putting their band page on MySpace."

That's the irony of Mythos. It doesn't need to exist. LLM vibe slop has already eroded the security of your average site.

Self fulfilling prophecy: You don't need to secure anything because it doesn't make a difference, as Mythos is not just a delicious Greek beer, but also a super-intelligent system that will penetrate any of your cyber-defenses anyway.

In some ways Mythos (like many AI things) can be used as the ultimate accountability sink.

These libraries/frameworks are not insecure because of bad design and dependency bloat. No! It's because a mythical LLM is so powerful that it's impossible to defend against! There was nothing that could be done.


Mythos is the new DDoS or “state-level actors”.

Explain more about this beer.


Conspiracy theory: they intentionally seeded the world with millions of slop PRs and now they’re “catching bugs” with Mythos
