Obviously it’s higher. If it were any lower, they would’ve made a huge announcement out of it to prove they’re better than the competition.
I’m thinking otherwise. I think GPT-5 is a much smaller model - with some fallback to previous models if required.
Since it’s running on the exact same hardware with a mostly similar algorithm, using less energy would directly mean it’s a “less intense” model, which translates to inferior quality in American Investor Language (AIL).
And 2025’s investors don’t give a flying fuck about energy efficiency.
It’s safe to assume that any metric they don’t disclose is quite damning to them. Plus, these guys don’t really care about the environmental impact, or what we tree-hugging environmentalists think. I’m assuming the only group they’re scared of upsetting right now is investors. The thing is, even if you don’t care about the environment, the problem with LLMs is how poorly they scale.
An important concept when evaluating how something scales is marginal values, chiefly marginal utility and marginal expenses. Marginal utility is how much extra utility you get from one more unit of whatever it is; marginal expense is how much it costs to get that one more unit. What an LLM produces is the probability that a token T follows the prefix Q, so P(T|Q) (read: probability of T, given Q). This is done for all known tokens, and then, based on these probabilities, one token is chosen at random. That token is appended to the prefix, and the process repeats until the LLM produces a sequence which indicates that it’s done talking.
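To make that loop concrete, here’s a minimal sketch in Python (the `model()` function is a hypothetical stand-in for the LLM, not any real API; it just has to return P(T|Q) for every known token):

```python
import random

def generate(model, prefix, end_token="<eos>", max_len=100):
    """Sample tokens one at a time until the model says it's done."""
    output = list(prefix)
    for _ in range(max_len):
        probs = model(output)                # {token: P(token | prefix)} over all known tokens
        tokens, weights = zip(*probs.items())
        token = random.choices(tokens, weights=weights)[0]  # one random draw per step
        if token == end_token:               # the "I'm done talking" signal
            break
        output.append(token)                 # append the choice and repeat
    return output
```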
If we now imagine the best possible LLM, then the calculated value for P(T|Q) would be the actual value. However, it’s worth noting that this already displays a limitation of LLMs: even with this ideal LLM, we’re just a few bad dice rolls away from saying something dumb, which then pollutes the context. The larger we make the LLM, the closer its results get to that actual value. A potential way to measure this precision would be to subtract P_calc(T|Q) from P(T|Q) and count the leading zeroes of the difference, essentially counting the number of digits we got right. Now, the thing is that each additional digit only provides a tenth of the utility of the digit before it, while the cost of each additional digit goes up exponentially.
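As a toy illustration of that digit-counting idea (the error values and the cost growth factor below are assumptions for the sake of the example, not measurements):

```python
import math

def correct_digits(p_true, p_calc):
    """Count the leading zeroes of |P(T|Q) - P_calc(T|Q)|, i.e. the digits we got right."""
    err = abs(p_true - p_calc)
    if err == 0:
        return float("inf")                  # the ideal LLM
    return math.floor(-math.log10(err))      # e.g. err = 0.0003 -> 3 correct digits

def marginal_utility(digit):
    return 10.0 ** (-digit)                  # each digit is worth a tenth of the previous one

def marginal_cost(digit, growth=10.0):
    return growth ** digit                   # assumed exponential cost per extra digit

for d in range(1, 6):
    print(d, marginal_utility(d), marginal_cost(d))
```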
So, exponentially decaying marginal utility meets exponentially growing marginal expenses. Which is really bad for companies that try to market LLMs.
Well, I mean, also that they kinda suck. I feel like I spend more time debugging AI code than I spend getting working code out of it.
I only use it if I’m stuck. Even if the AI code is wrong, it often pushes me in the right direction to find the correct solution for my problem. Like pair programming, but a bit shitty.
The best way to use these LLMs for coding is to never use the generated code directly and to atomize your problem into smaller questions you ask the LLM.
“Just a few more trillion dollars bro, then it’ll be ready…” Like a junkie.
All the people here chastising LLMs for resource wastage, I swear to god if you aren’t vegan…
Animal agriculture has significantly better utility and scaling than LLMs. So, it’s not hypocritical to be opposed to the latter but not the former.
The Holocaust was well scaled too. Animal ag is responsible for 15-20% of the entire planet’s GHG emissions. You can live a healthier, more morally consistent life if you give up meat.
What a stupid take.
It’s not, you’re just personally insulted. The livestock industry is responsible for about 15% of human-caused greenhouse gas emissions. That’s not negligible.
So, I can’t complain about any part of the remaining 85% if I’m not vegan? That’s so fucking stupid. Do you not complain about microplastics because you’re guilty of using devices with plastic in them to type your message?
Yes, I’m a piece of shit for using a phone that’s made by a capitalist corporation and contributes to harming the planet. I don’t deny that I live in a horrible society that forces me to be a bad human just to survive.
I also don’t call people stupid for telling me my device is bad for the environment. I still eat meat, I’m not a vegan, but I understand and completely agree that it’s terrible for the environment. By recognizing it, I can be conscious of my consumption and reduce it.
I also use LLMs conservatively: I use them where they add value, and I don’t use them frivolously to generate shitty AI slop.
I’m conscious of its dangers and that drives my consumption of it.
But I don’t pick and choose. I don’t eat animal products three meals a day and bitch about someone using an LLM to edit a file instead of manually working on it for five hours.
“Just be consistent” is the message they were communicating, not that you shouldn’t complain about the other 85%.
Same, I’m very aware that my selfish actions cause harm to the environment and I do try to be conservative about meat, electricity, and water usage. I don’t even own a car.
But “I swear to God, if you aren’t vegan,” which is what OP said, is hardly the same as “keep it consistent.” It feels like they’re telling us both that our efforts are pointless because we aren’t vegan. They could have said, try cutting meat from your diet to help more, or give veganism a thought. It comes off as insufferably arrogant, you know?
I’ll end my rant now, haha. Sorry.
Sam Altman has gone into PR and hype overdrive lately. He is practically everywhere, trying to distract the media from seeing the truth about LLMs. GPT-5 has basically proved that we’ve hit a wall and that the belief that LLMs will just scale linearly with the amount of training data is false. He knows the AI bubble is bursting and he is scared.
Bingo. If you routinely use LLMs/AI, you’ve recently seen it firsthand. ALL of them have become noticeably worse over the past few months. Even if you’re simply using it as a basic tool, it’s worse. Claude, for all the praise it receives, has also gotten worse. I’ve noticed it starting to forget context or constantly contradicting itself. Even Claude Code.
The release of GPT-5 is the proof in the pudding that a wall has been hit and the bubble is bursting. There’s nothing left to train on, and all the LLMs have been consuming each other’s waste as a result. I’ve talked about it on here several times already because of my work, but companies are also seeing this. They’re scrambling to undo the fuck-up of using AI to build their stuff. None of what they used it to build scales. None of it. And you go on LinkedIn and see all the techbros desperately trying to hype the mounds of shit that remain.
I don’t know what’s next for AI but this current generation of it is dying. It didn’t work.
Any studies about this “getting worse”, or just anecdotes? I do routinely use them and I feel they are getting better (my workplace uses the Google suite, so I have access to Gemini). Just last week it helped me debug an IPv6 RA problem that I couldn’t crack, and I learned a few useful commands along the way.




