

The only reason I’m gonna be smart enough to bring water to concerts is because I read this thread.


It’s a reference to Arnold Palmer, whose estate tried (or threatened?) to sue them after they used the name “Armless Palmer” for a flavor.
Of course other billionaires would be thin-skinned enough to feel offended by that…


Where are all those Christians who believe digital ID is the mark of the Antichrist?


It’s always interesting seeing the line people will draw between what they see as art vs. product. I would be disappointed by anyone who tricked me into listening to theft-generated music, whether people consider it legitimate art or not.


Alex Karp thinks people only care about one kind of surveillance. And he thinks he will alleviate our fears if he gives us a pinky promise not to surveil us in that one way.
That way is cheating.
He later brings this up again, saying that most surveillance technology isn’t determining, “Am I shagging too many people on the side and lying to my partner?” Your guess is as good as mine as to what that’s all about.
Well, thanks for clearing that up, Alex. That was indeed my sole concern.
(The rest of the article is full of indecipherable quotes from Alex, which demonstrates you don’t need to be smart to be rich.)


Blaming staff layoffs on AI is a win-win: businesses that want to lay people off get a convenient scapegoat, and AI companies receive undeserved praise.
A win-win for everyone but the employees, of course.


I thought the government just banned any regulation of AI companies. The inconsistency doesn’t surprise me, but the brazenness sure does.


What’s the deal with the “HPE” in some Register articles? It’s apparently the Hewlett-Packard Enterprise logo, but articles about HPE don’t appear to have that logo.
Is The Register affiliated with HPE now?


AI companies are definitely aware of the real risks. It’s the imaginary ones (“what happens if AI becomes sentient and takes over the world?”) that I imagine they’ll put that money towards.
Meanwhile they (intentionally) fail to implement even a simple cutoff switch for a child who’s expressing suicidal ideation. Most people with any programming knowledge could build a decent interception tool; a sketch of what I mean is below. All this talk about guardrails seems almost as fanciful.
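To be concrete, here’s a minimal sketch of the kind of interception layer I mean, in Python. Everything in it is a placeholder I made up for illustration: the pattern list, the response text, and the function name. A real cutoff would need a clinically reviewed pattern set and actual escalation, not a canned string.

```python
import re

# Illustrative pattern list only -- a real system would use a much
# broader, clinically reviewed set (and better matching than regex).
CRISIS_PATTERNS = [
    r"\bkill myself\b",
    r"\bend my life\b",
    r"\bsuicid\w*\b",
]

# Placeholder text; a real deployment would surface actual crisis
# resources and escalate to a human, not just return a string.
CRISIS_RESPONSE = (
    "It sounds like you're going through a lot. "
    "Please reach out to a crisis line or someone you trust."
)

def intercept(user_message: str, model_reply: str) -> str:
    """Return a crisis response instead of the model's reply when the
    user's message matches any crisis pattern."""
    for pattern in CRISIS_PATTERNS:
        if re.search(pattern, user_message, re.IGNORECASE):
            return CRISIS_RESPONSE
    return model_reply

# e.g. intercept("i want to end my life", some_model_output)
# returns CRISIS_RESPONSE instead of passing the model's text through.
```

The point isn’t that this toy is sufficient. The point is that even this much is trivial, which makes shipping nothing comparable a choice.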


This is good writing.
In promoting their developer registration program, Google purports:
Our recent analysis found over 50 times more malware from internet-sideloaded sources than on apps available through Google Play.
We haven’t seen this recent analysis — or any other supporting evidence — but the “50 times” multiple does certainly sound like great cause for distress (even if it is a surprisingly round number). But given the recent news of “224 malicious apps removed from the Google Play Store after ad fraud campaign discovered”, we are left to wonder whether their energies might better be spent assessing and improving their own safeguards rather than casting vague disparagements against the software development communities that thrive outside their walled garden.


The expectation is for the Foundation to use its equity stake in the OpenAI Group to help fund philanthropic work. That will start with a $25 billion commitment to “health and curing diseases” and “AI resilience” to counteract some of the risks presented by the deployment of AI.
Paying yourself to promote your own product. Promising to fix vague “risks” that make the product sound more powerful than it is, with “fixes” that won’t be measurable.
In other words, Sam is cutting a $25 billion check to himself.


Mighty thoughtful of Jeff Bezos to award money to a project that coincidentally promotes AI, and puts his name in the same sentence as environmentalism.
Meanwhile, Jeff Bezos’ dirty secret is the environmental harm he’s causing, and intentionally covering up, while trying to greenwash it.


People have been sounding the alarm about artificial intelligence since the 50s. We call it AGI now, since “AI” got ruined by marketers 60 years later.
We won’t get there with transformer models, so what exactly do the people promoting them actually propose? It just makes the Big Tech companies look like they have a better product than they do.


Sam Altman himself compared GPT-5 to the Manhattan Project.
The only difference is it’s clearer to most (but definitely not all) people that he is promoting his product when he does it…


Geoffrey Hinton, retired Google employee and paid AI conference speaker, has nothing bad to say about Google or AI relationship therapy.


Superintelligence — a hypothetical form of AI that surpasses human intelligence — has become a buzzword in the AI race between giants like Meta and OpenAI.
Thank you, MSNBC, for doing the bare minimum and reminding people that this is hypothetical (read: science fiction).


Way back in 2023, Matrix was the jack of all trades but the master of none. It wanted to replace Discord but the video messaging was not stable enough. It wanted to replace Slack but message searching didn’t really work. It was still struggling to get a decent client and server implementation, and message loading times were a huge pain point.
Fast forward to today, and most of those problems are still there. Give it a couple more years to cook.


Your tl;dr appears to be missing some important data. You can have an opinion, but please don’t represent it as an accurate summary.
Things you crucially missed:


Can you be more specific?
They could both be right… From a certain point of view.
LeCun says current AI models are a dead end for progress. I think he’s correct.
Zuckerberg appears to believe long term development of alternative models will be a bigger money drain than pushing current ones. I think he’s correct too.
It looks like two guys arguing about which dead end to pursue.