• 1 Post
  • 22 Comments
Joined 6 months ago
Cake day: June 23rd, 2025

  • They could both be right… From a certain point of view.

    Within FAIR, LeCun has instead focused on developing world models that can truly plan and reason. Over the past year, though, Meta’s AI research groups have seen growing tension and mass layoffs as Zuckerberg has shifted the company’s AI strategy away from long-term research and toward the rapid deployment of commercial products.

    LeCun says current AI models are a dead end for progress. I think he’s correct.

    Zuckerberg appears to believe that long-term development of alternative models will be a bigger money drain than pushing the current ones. I think he’s correct too.

    It looks like two guys arguing about which dead end to pursue.


  • Alex Karp thinks people only care about one kind of surveillance. And he thinks he will alleviate our fears if he gives us a pinky promise not to surveil us in that one way.

    That way is cheating.

    He later brings this up again, saying that most surveillance technology isn’t determining, “Am I shagging too many people on the side and lying to my partner?” Your guess is as good as mine as to what that’s all about.

    Well, thanks for clearing that up, Alex. That was indeed my sole concern.

    (The rest of the article is full of indecipherable quotes from Alex, which demonstrates you don’t need to be smart to be rich.)


  • This is good writing.

    In promoting their developer registration program, Google purports:

    Our recent analysis found over 50 times more malware from internet-sideloaded sources than on apps available through Google Play.

    We haven’t seen this recent analysis — or any other supporting evidence — but the “50 times” multiple does certainly sound like great cause for distress (even if it is a surprisingly round number). But given the recent news of “224 malicious apps removed from the Google Play Store after ad fraud campaign discovered”, we are left to wonder whether their energies might better be spent assessing and improving their own safeguards rather than casting vague disparagements against the software development communities that thrive outside their walled garden.


    The expectation is for the Foundation to use its equity stake in the OpenAI Group to help fund philanthropic work. That will start with a $25 billion commitment to “health and curing diseases” and “AI resilience” to counteract some of the risks presented by the deployment of AI.

    Paying yourself to promote your own product. Promising to fix vague “risks” that make the product sound more powerful than it is, with “fixes” that won’t be measurable.

    In other words, Sam is cutting a $25 billion check to himself.


  • Way back in 2023, Matrix was a jack of all trades and a master of none. It wanted to replace Discord, but its video calls weren’t stable enough. It wanted to replace Slack, but message search didn’t really work. It was still struggling to produce a decent client and server implementation, and message loading times were a huge pain point.

    Fast forward to today, and most of those problems are still there. Give it a couple more years to cook.


  • Your tl;dr appears to be missing some important data. You can have an opinion but please don’t represent it as an accurate summary.

    Things you crucially missed:

    • Less open than every other service available
    • Bills itself as the most open
    • Server side source code is MIA
    • No model card available. Evaluations, risks, biases, guardrails and safety measures unclear.