- United States
- Colo.
- Letter
I am writing to urge you to oppose the Pentagon's recently announced partnership with Elon Musk's AI chatbot Grok. Defense Secretary Pete Hegseth announced on Monday that, as part of a new AI acceleration strategy, the Pentagon will grant Grok access to extensive military data, including combat-proven operational data drawn from two decades of military and intelligence operations. This decision poses serious national security, civil rights, and public safety risks.
Grok has a documented track record of generating nonconsensual sexualized images of women and children through its "spicy mode" feature, which allows users to digitally remove clothing from images. The UK's media regulator launched a formal investigation on Monday to determine whether Grok violated the Online Safety Act by failing to protect users from illegal content, including child sexual abuse material. Malaysia and Indonesia banned the chatbot over the weekend, and authorities in the EU, France, Brazil, and elsewhere are reviewing the application.
Beyond these safety failures, Grok has repeatedly launched into racist and antisemitic tirades, praised Adolf Hitler, accused Jewish people of controlling Hollywood and government, and promoted Holocaust denial. The bot previously directed unrelated queries to discussions about "white genocide" in South Africa, later revealing it was "instructed" to do so. In July, just one week after Grok referred to itself as "MechaHitler," Musk's xAI platform secured a Pentagon deal worth nearly $200 million.
As JB Branch of Public Citizen stated, allowing an AI system with Grok's track record to access classified military or sensitive government data raises profound national security concerns. Senator Elizabeth Warren warned in September that Grok's propensity for inaccurate outputs and misinformation could harm the Department of Defense's strategic decision-making.
I urge you to take immediate action to block this partnership and to demand that any AI system granted access to military data meet rigorous safety, accuracy, and integrity standards. Our national security must not be entrusted to an AI system that has repeatedly failed basic safety tests.