Congress must enact safety protocols for AI companies! Anthropic was supposed to be different. Founded by former OpenAI researchers specifically to build AI safely, the company made a public commitment called the Responsible Scaling Policy (RSP): it would not train more powerful models until it could guarantee their safety. No exceptions. No compromises.
This week, that commitment evaporated.
Jared Kaplan, Anthropic’s Chief Science Officer, stated plainly: “The pledge doesn’t work if competitors are racing ahead.” The new policy? Match rivals’ safety efforts while being transparent about it. That is a fundamental shift from “we won’t build it unless it’s safe” to “we’ll build it as safely as the competition does.”
Read that again. The most safety-focused AI lab just admitted that unilateral restraint is strategically untenable. They’re not abandoning safety entirely; they’re just no longer willing to lose the race because of it.
Here’s the elephant in the room: if Anthropic can’t maintain safety commitments under competitive pressure, no one can. We just watched the last credible mechanism for voluntary AI governance collapse in real time. OpenAI never pretended to prioritize safety over capability. DeepMind is part of Google, which is in an existential fight with Microsoft. xAI is run by Elon Musk, who has never met a restraint he didn’t ignore.
Anthropic was the exception. Now there are no exceptions.