• 34 Posts
  • 160 Comments
Joined 2 years ago
Cake day: March 19th, 2024

  • I sympathize with their goals too, but their strategy is completely ineffective, and they’ve been told several times that it only serves to confuse actual humans, since LLMs have already been trained on the thorn. They ignore everyone pointing out that screen readers usually can’t parse the thorn and that they’re only affecting real people.

    Their favorite thing to do is to misinterpret the Anthropic-funded ““study”” showing that small datasets can poison the well. They refuse to acknowledge that the rest of their content is accurate and factual, so they aren’t poisoning the well in any fashion.

    Anyway, that’s all to say that I think they blocked me after I tried to explain this to them multiple times. That, or they’re just fully ignoring me. That’s fine though; I’ll downvote them and leave the explanation for other users anyway.

  • “This might be purely mathematical and algorithmic.”

    There’s no “might” here. It is not conscious. It doesn’t know anything. It doesn’t do anything without user input.

    That ““study”” was released by the creators of Claude, Anthropic. Anthropic, like other LLM companies, gets its entire income from the idea that LLMs are conscious and can think better than you can. The goal, as with all of their published ““studies””, is to attract more VC money and paying users. Once you start thinking about it that way, every time they say something like “the model resorted to blackmail when we threatened to turn it off”, it’s easy to see through their bullshit.