It stands to reason that if you have access to an LLM’s training data, you can influence what comes out the other end of the otherwise inscrutable network. The obvious guess is that…
I don’t know that it’s wise to trust what Anthropic says about their own product. AI boosters tend to have an “all news is good news” approach to hype generation.
Anthropic have recently been pushing out a number of headline-grabbing negative/caution/warning stories, like claiming that AI models blackmail people when threatened with shutdown. I’m skeptical.
They’ve been doing it since the start. OpenAI initially fearmongered about how dangerous GPT-2 was as an excuse to avoid releasing the weights, while simultaneously working on much larger models with the intent to commercialize them. The whole “our model is so good even we’re scared of it” shtick has always been marketing or an excuse to keep secrets.
Even now they continue to use this tactic while actively suppressing their own research showing real social, environmental and economic harms.