- United States
- Ind.
- Letter
I am writing to oppose any federal legislation that grants AI companies immunity from liability when their products contribute to the death of a human being.
The White House National Policy Framework released in March 2026 recommends that Congress restrict states from imposing liability on AI developers for unlawful conduct carried out by third parties using their systems.
OpenAI has gone further, actively lobbying in support of Illinois SB 3444 — legislation that would shield AI companies from lawsuits even in cases involving mass casualties.
This is the industry telling you, plainly, what it wants: the freedom to profit from products that can cost lives, without legal consequence.
That is not innovation. That is impunity.
Two scenarios demand clear accountability, with no carve-outs:
First, when a person acts on AI-generated advice and causes death — whether through a mental health crisis, a medical decision, or violence — the company whose system generated that advice cannot be absolved simply because a human was the final actor. Families have already filed lawsuits alleging ChatGPT contributed to suicides.
The Florida Attorney General opened an investigation after a mass shooting at Florida State University that allegedly involved AI use. The chain of causation is not theoretical.
Second, when AI is directly integrated into systems that make autonomous decisions affecting human life — medical devices, autonomous vehicles, critical infrastructure, clinical decision support — there is no third party to hide behind.
The product made the decision. The company that built and deployed it bears responsibility.
The argument that liability will stifle innovation is the same argument made by every industry that has ever sought to externalize the cost of its failures onto the public. We rejected it for pharmaceuticals, for aviation, for medical devices. We should reject it here.
I urge you to oppose any federal framework that includes liability shields for AI developers in cases involving death or serious bodily harm, and to ensure that existing product liability and negligence standards apply fully to AI systems deployed in high-stakes environments.
Thank you,
A concerned constituent and voter