
Congress must enact safety protocols for AI companies!

To: Sen. Crapo, Sen. Risch, Rep. Simpson

From: A constituent in Boise, ID

March 6

Congress must enact safety protocols for AI companies! Anthropic was supposed to be different. Founded by former OpenAI researchers specifically to build AI safely, the company made a public commitment called the Responsible Scaling Policy (RSP): they would guarantee safety before training more powerful models. No exceptions. No compromises.

This week, that commitment evaporated. Jared Kaplan, Anthropic's Chief Science Officer, stated plainly: "The pledge doesn't work if competitors are racing ahead." The new policy? Match rivals' safety efforts while being transparent: a fundamental shift from "we won't build it unless it's safe" to "we'll build it as safely as the competition does."

Read that again. The most safety-focused AI lab just admitted that unilateral restraint is strategically untenable. They're not abandoning safety entirely; they're just no longer willing to lose the race because of it.

Here's the elephant in the room: if Anthropic can't maintain safety commitments under competitive pressure, no one can. We just watched the last credible mechanism for voluntary AI governance collapse in real time. OpenAI never pretended to prioritize safety over capability. DeepMind is part of Google, which is in an existential fight with Microsoft. xAI is Elon, who's never met a restraint he didn't ignore. Anthropic was the exception. Now there are no exceptions.

