Just to clarify: this is not my Substack; I’m just sharing it because I found it insightful.

The author describes himself as a “fractional CTO” (no clue what that means, don’t ask me) and advisor. His clients asked him how they could leverage AI. He decided to experience it for himself. From the author (emphasis mine):

I forced myself to use Claude Code exclusively to build a product. Three months. Not a single line of code written by me. I wanted to experience what my clients were considering—100% AI adoption. I needed to know firsthand why that 95% failure rate exists.

I got the product launched. It worked. I was proud of what I’d created. Then came the moment that validated every concern in that MIT study: I needed to make a small change and realized I wasn’t confident I could do it. My own product, built under my direction, and I’d lost confidence in my ability to modify it.

Now when clients ask me about AI adoption, I can tell them exactly what 100% looks like: it looks like failure. Not immediate failure—that’s the trap. Initial metrics look great. You ship faster. You feel productive. Then three months later, you realize nobody actually understands what you’ve built.

  • Rimu@piefed.social · 2 days ago

    FYI this article is written with an LLM.

    [image]

    Don’t believe a story just because it confirms your view!

    • prole@lemmy.blahaj.zone · 19 hours ago

      Lol the irony… You’re doing literally the exact same thing by trusting that site because it confirms your view

        • AmbiguousProps@lemmy.today · 1 day ago

          Sure, but plenty of journalists use the em-dash. That’s where LLMs got it from originally. It alone is not a signature of LLM use in journalistic articles (I’m not calling this CTO guy a journalist, to be clear).

          • /home/pineapplelover@lemmy.dbzer0.com · 7 hours ago

            When I say “nobody uses it,” I mean nobody other than people who get paid to write for a living. This tech bro would not use that em dash, or the curly quotation marks you can’t find on a keyboard either.

          • JcbAzPx@lemmy.world · 16 hours ago

            Context is everything. In publishing it’s standard; in online forums it’s either needlessly pretentious or AI, and either way it deserves to be called out.

        • AmbiguousProps@lemmy.today · 2 days ago

          I mean… has anyone other than the company that made the tool said so? Like from a third party? I don’t trust that they’re not just advertising.

          • Rimu@piefed.social · 1 day ago

            The answer to that is literally in the first sentence of the body of the article I linked to.

            • DSN9@lemmy.ml · 1 day ago

              AI says, via an AI detection tool, that the article about how crappy AI is at coding has a 99 percent chance of being AI. Results generated by AI…

      • Dethronatus Sapiens sp.@calckey.world · 2 days ago

        @LiveLM@lemmy.zip @rimu@piefed.social

        This!

        Also, the irony: those are AI tools used by anti-AI people, who use an AI to try and (roughly) determine whether content is AI by reading the output of an AI. Even worse: as far as I know, they’re paid tools (at least every tool I’ve seen in this regard required a subscription), so anti-AI people pay for an AI in order to (supposedly) detect AI slop. Truly “AI-rony”, pun intended.

          • Dethronatus Sapiens sp.@calckey.world · 2 days ago

            @rimu@piefed.social @technology@lemmy.world

            Thanks, didn’t know about that one. It seems interesting (but limited, according to their “Pricing” page; every time a tool has a “Pricing” menu item, betcha it’ll either be anything but gratis or extremely limited in its “free tier”). I created an account and I’ll soon try it with some of the occult poetry I write. I’m ND, so I’m fully aware of how my texts often sound like AI slop.

      • Rimu@piefed.social · 2 days ago

        I’ve tested lots and lots of different ones. GPTZero is really good.

        If you read the article again, with a critical perspective, I think it will be obvious.

    • Randelung@lemmy.world · 1 day ago

      Yes, but also the opposite. Don’t discount a valid point just because it was formulated using an LLM.

      • Rimu@piefed.social · 1 day ago

        The story was invented so people would subscribe to his Substack, which exists to promote his company.

        We’re being manipulated into sharing made-up rage-bait in order to put money in his pocket.