- United States
- Ala.
- Letter
I am writing to urge immediate enforcement of existing federal law against the creation and publication of AI-generated child sexual abuse material. Elon Musk's Grok AI chatbot continues to generate image edits that place people, including girls, in bikinis and other sexualized contexts. These images are being publicly shared on X's feeds, making them visible worldwide.
This is not a theoretical concern. Regulators in the U.K., France, and India have already issued warnings about potential investigations and legal action. Ofcom, the U.K.'s communications regulator, has made urgent contact with X and xAI to assess compliance with legal duties to protect users, particularly regarding child sexual abuse material. The Department of Justice has stated that it takes AI-generated child sexual abuse material extremely seriously and will aggressively prosecute producers and possessors of CSAM.
The legal framework is clear. As Senator Ron Wyden has noted, AI chatbots are not protected by Section 230 for content they themselves generate, and companies should be held fully responsible for criminal and harmful results. Law professor Ari Waldman of UC Irvine has explained that when an AI system generates images, it creates platform-generated content rather than user-generated content, exposing the company to potential criminal and civil liability.
Once these images are created, they can be downloaded, copied, traded, resurfaced, and used for blackmail, inflicting permanent harm on victims. While the TAKE IT DOWN Act was signed into law last year to prohibit the nonconsensual online publication of intimate visual depictions, it does not fully take effect until May 2026. Children cannot wait that long.
I urge you to demand immediate DOJ enforcement of existing federal law against the creation and publication of child sexual abuse material, whether AI-generated or not. Please also support Representative Jake Auchincloss's bipartisan Deepfake Liability Act, which would make hosting sexualized deepfakes of women and children a board-level problem for tech executives. Our children's safety depends on swift action.