Pentagon Pressures AI Firm to Drop Safeguards: Congress Must Intervene
Recent reporting from The Guardian, NPR, Reuters, and the Associated Press describes a troubling dispute between the Department of Defense and the AI company Anthropic.
According to those reports, the Pentagon demanded that Anthropic remove contractual safeguards restricting how its AI model could be used by the military — including limitations related to fully autonomous weapons and mass domestic surveillance. Anthropic CEO Dario Amodei publicly declined to remove those guardrails, stating the company could not “in good conscience” agree to unrestricted deployment of its systems in those contexts.
The administration subsequently directed federal agencies to halt use of Anthropic’s technology and reportedly designated the company a “supply-chain risk.”
This dispute is not about weakening national defense. It is about responsible governance and congressional oversight.
Artificial intelligence systems remain probabilistic tools. As AI industry leaders themselves have acknowledged, frontier models are not yet reliable enough to operate fully autonomous weapons without meaningful human oversight. These systems can misidentify targets, hallucinate outputs, and fail unpredictably outside their training distribution. Integrating them into lethal systems without enforceable guardrails introduces unacceptable risk.
The constitutional implications are equally serious. The Fourth Amendment protects Americans from unreasonable searches and seizures. AI systems capable of analyzing surveillance data at unprecedented scale dramatically expand government technical capacity. If the Department of Defense has no intention of using AI for unlawful domestic surveillance, then codifying clear contractual prohibitions should not be controversial. Refusal to provide such assurances warrants oversight.
Congress has both the authority and responsibility to ensure that federal procurement power is not used to pressure private companies into abandoning publicly stated safety standards.
I respectfully urge you to:
• Conduct oversight hearings into the administration’s handling of AI procurement and safety requirements.
• Require enforceable human-in-the-loop standards for any AI integrated into weapons systems.
• Prohibit federal contract terms that compel companies to remove AI safety guardrails.
• Establish bipartisan, transparent rules governing military AI deployment before such systems become normalized.
Decisions made now will shape the future of warfare, civil liberties, and democratic accountability. Human beings — not algorithms — must remain responsible for life-and-death decisions.