
Congress Must Establish Guardrails on Military AI Before It Is Too Late

To: Sen. Gillibrand, Sen. Schumer, Rep. Jeffries

From: A constituent in Brooklyn, NY

March 9

I am writing to express deep concern about recent reporting on the rapid expansion of artificial intelligence in military operations and the troubling push within the federal government to remove guardrails on how these systems may be used in warfare. We are entering a moment when rapidly evolving AI systems could be connected directly to the use of lethal force. That should alarm every member of Congress.

Recent disputes involving the AI company Anthropic and the U.S. government have raised disturbing questions about whether the Department of Defense is demanding fewer restrictions on how advanced AI systems can be deployed in military contexts. If the government is seeking to remove safety guardrails designed to prevent misuse of powerful AI models, that is a dangerous and irresponsible path.

Decisions involving the use of deadly force must never be delegated to machines. Human judgment, accountability, and ethical reasoning are essential when lives are at stake. Removing human decision-making from lethal military actions would cross a profound moral and legal boundary and set a dangerous precedent for autonomous warfare.

History shows that new weapons technologies often outpace the legal and ethical frameworks needed to govern them. Artificial intelligence in warfare is one of the most consequential examples of that pattern. Without clear rules, AI systems could be used to identify and select targets and to execute lethal operations with minimal human oversight. Such capabilities raise profound risks:

• Loss of meaningful human control over life-and-death decisions
• Increased likelihood of accidental or wrongful targeting
• Rapid escalation of automated conflict between nations
• An international arms race in autonomous weapons

Once these systems are normalized, they will be extraordinarily difficult to control. Other nations will follow suit, and the global threshold for violence could drop dramatically.
This is not a distant science-fiction scenario. The technology is developing quickly, and the policy framework governing it is dangerously incomplete. Congress must act now to establish clear rules before the technology outruns democratic oversight. I urge Congress to:

1. Prohibit the use of fully autonomous AI systems in decisions involving lethal force.
2. Require meaningful human oversight and final human authorization for any military action involving deadly force.
3. Establish binding guardrails on how AI systems may be integrated into military targeting and battlefield operations.
4. Mandate transparency and reporting requirements regarding the use of AI by the Department of Defense and intelligence agencies.
5. Work with international partners to develop global norms limiting autonomous weapons systems.

Artificial intelligence will shape the future of warfare whether we are prepared or not. The question is whether democratic institutions will set ethical boundaries before these technologies are deployed in ways that humanity cannot easily reverse. Congress has a responsibility to ensure that the most powerful technologies ever created are not used recklessly or without accountability. The time to establish safeguards is now, before the world crosses a line that cannot be undone.

