• masterspace@lemmy.ca · ↑43 · edited · 1 day ago

    TL;DW: he has two points:

    1. That between cameras and now AI monitoring, the cost of running an authoritarian regime has dropped drastically. He claims that running the Stasi used to cost something like 20% of the government budget, but the same surveillance can now be done for next to nothing, and it will be harder for governments to resist that temptation.

    2. That there hasn’t been much progress in the world of physics since the 70s, so what happens if you point AI and its compute power at the field? We could see wondrous progress and a world of plenty.

    Personally I think point 1 is genuinely interesting and valid, and that point 2 is kind of incredible nonsense. Yes, all other fields are just simplified forms of physics, and physics fundamentally underlies all of them. That doesn’t mean that no new knowledge has come from those fields, and it doesn’t mean that new knowledge in physics automatically improves them. Physics has, in many ways, done its job. Obviously there’s still more to learn, but between quantum mechanics and general relativity we can model most human-scale processes in our universe with incredible precision. The problem is that the closer we get to understanding the true underlying math of the universe, the harder that math is to compute for a practical system… at a certain point, it requires a computer on the scale of the universe itself.

    Most of our practical improvements in the past decade have come, and will continue to come, from chemistry, biology, and engineering in general, because there is far more room to improve human-scale processes by finding shortcuts and patterns and designing systems to behave the way we want. AI’s computer-scale pattern-matching ability will undoubtedly help with that, but I think it’s less likely that it can make any true physics breakthroughs, or that those breakthroughs would impact daily life that much.

    Again though, I think that point number 1 is incredibly valid. At the end of the day incentives, and specifically cost incentives, drive a massive amount of behaviour. It’s worth thinking about how AI changes them.

  • egerlach@lemmy.ca · ↑13 ↓3 · 1 day ago

      Ugh, I’m tired of point 2. Yes, LLMs have found a few patterns in large-scale study analyses that humans hadn’t, but they weren’t deep insights, and IIRC there were already buried hypotheses about them from existing authors (too lazy to source).

    • Perspectivist@feddit.uk · ↑19 ↓2 · edited · 1 day ago

        AI is not synonymous with LLM. AlphaFold cracked protein structure prediction; it’s an AI, but not an LLM.

      • phaedrus@piefed.world · ↑10 · 1 day ago

          100% this. People say they understand AI is a buzzword, but don’t realize just how large an umbrella that term actually is.

          Enemy NPCs in video games going back to the ’80s fall under AI.

        • LEM 1689@lemmy.sdf.org · ↑7 · 1 day ago

            The term AI actually dates from the 1950s:

            The phrase “artificial intelligence” was coined in 1956 by John McCarthy during a workshop at Dartmouth College, where researchers aimed to explore whether machines could think like humans.

        • Perspectivist@feddit.uk · ↑4 · 1 day ago

            When most people hear “AI” they think AGI, and because a narrow-AI language model doesn’t perform the way they expect an AGI to, they say things like “it’s not intelligent” or “it’s not an AI”.

            AI as a term is about as broad as the term “plants”, which covers everything from grass to giant redwoods. LLM is just a subcategory, like conifers.

          • phaedrus@piefed.world · ↑2 · 1 day ago

              Exactly. Or, to tie it back more precisely to the comment that started this thread:

              Physics is to Chemistry what AI is to LLMs

      • egerlach@lemmy.ca · ↑4 · 1 day ago

          I work primarily in “classical” AI and have been working with it on-and-off for just under 30 years now. I programmed my first GAs and ANNs in the 90s. I survived Prolog. I’ve had prolonged battles getting entire corporate departments to use the terms “Machine Learning” and “Artificial Intelligence” correctly, to understand what they mean, and to start thinking about how to incorporate them properly into their work.

          That’s why I chose the word “LLM” in my response, not “AI”.

          I will admit that I assumed that by “AI” Jimmy Carr was referring to LLMs, as that’s what most people mean these days. I read the TL;DW by @masterspace@lemmy.ca but didn’t watch the original content. If that assumption is wrong and he’s referring to classical AI, not LLMs, I’ll edit my original post.

        • masterspace@lemmy.ca · ↑1 · 1 day ago

            It’s not entirely clear what he’s referring to. He just uses the term AI broadly in the context of people being worried about job losses, then talks about the reduction in secret-police costs that enables, then discusses applying AI to physics.