

First comment: “the world is bottlenecked by people who just don’t get the simple and obvious fact that we should sort everyone by IQ and decide their future with it”
No, the world is bottlenecked by idiots who treat everything as an optimization problem.
I don’t doubt you could effectively automate script kiddie attacks with Claude Code. That’s what their diagram seems to show.
The whole bit about “oh no, the user said weird things and bypassed our imaginary guardrails” is another admission that “AI safety” is a complete joke.
there it is.
Does this article imply that Anthropic is monitoring everyone’s Claude Code usage to see if they’re doing naughty things? Other agents and models exist, so whatever safety bullshit they have is pure theater.