
AI Fraud Deterrence Act: Up to 30 Years for AI-Assisted Financial Crimes
On November 25, 2025, Rep. Ted Lieu (D-CA) and Rep. Neal Dunn (R-FL) introduced the AI Fraud Deterrence Act. The bipartisan bill sharply increases penalties for fraud committed using artificial intelligence.
The Penalties
| Crime | Current Maximum | Under AI Fraud Deterrence Act |
|-------|-----------------|-------------------------------|
| AI-assisted bank fraud | Varies | $2M fine, 30 years prison |
| AI-assisted wire fraud | 20 years | $1M fine, 20 years prison |
| AI-assisted mail fraud | 20 years | $1M fine, 20 years prison |
| AI-assisted money laundering | 20 years | $1M fine, 20 years prison |
| AI impersonation of federal officials | N/A | $1M fine, 3 years prison |
The bill specifically targets crimes where AI tools—including deepfakes, voice cloning, and text generation—are used to deceive victims.
Why Now
Two incidents accelerated this legislation:
May 2025: Federal authorities investigated fraudulent calls and texts that impersonated White House Chief of Staff Susie Wiles' voice and phone number, targeting senators, governors, and business leaders.
July 2025: The State Department warned diplomats that someone was impersonating Secretary of State Marco Rubio via voicemails, texts, and Signal messages.
These weren't theoretical threats. They were active operations targeting high-value individuals using readily available AI tools.
What the Bill Covers
The legislation adopts the AI definition from the National AI Initiative Act of 2020. Covered systems include:
- Generative AI systems (text, image, audio, video)
- Voice cloning and synthesis
- Deepfake video generation
- Any AI system used to create deceptive content for fraud
The bill includes First Amendment protections for satire, parody, and clearly disclosed expressive content.
The Enforcement Gap
Current fraud statutes weren't written for AI-generated content. Prosecutors can charge wire fraud or identity theft, but sentences don't reflect the scale and sophistication that AI enables.
A single bad actor with consumer-grade AI tools can now:
- Clone anyone's voice from a few minutes of audio
- Generate convincing video of public figures
- Produce personalized phishing content at scale
- Automate social engineering attacks
The AI Fraud Deterrence Act attempts to make penalties match this new threat landscape.
Implications for Businesses
The bill targets criminal fraud, not legitimate business use of AI. However, it signals where regulatory attention is heading.
Organizations deploying AI should consider:
Authentication systems — As AI makes impersonation trivial, identity verification becomes critical infrastructure. Voice-based authentication is increasingly vulnerable.
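The point is to anchor trust in something other than how a caller sounds. Below is a minimal sketch of one such pattern, out-of-band challenge verification, using only the Python standard library; `send_via_registered_channel` is a hypothetical delivery hook standing in for your SMS, email, or app provider, not a real API.

```python
# Minimal sketch: instead of trusting a voice on a call, issue a
# one-time challenge through a separate, pre-registered channel and
# check the response.
import hmac
import secrets

def issue_challenge() -> str:
    """Generate a short, single-use code for the claimed caller."""
    return secrets.token_hex(4)  # e.g. '9f3a1c2e'

def verify_response(expected: str, provided: str) -> bool:
    """Constant-time comparison to avoid leaking the code via timing."""
    return hmac.compare_digest(expected, provided)

# Usage: a caller claims to be an executive authorizing a wire transfer.
challenge = issue_challenge()
# send_via_registered_channel(user_id, challenge)  # hypothetical delivery
assert verify_response(challenge, challenge)        # caller read it back
assert not verify_response(challenge, "00000000")   # impostor fails
```

The mechanism is deliberately boring; what matters is that a cloned voice alone can no longer authorize anything.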
Deepfake detection — Financial services, legal, and healthcare organizations handling sensitive transactions need detection capabilities for AI-generated content.
Employee training — Staff need to recognize AI-generated phishing attempts, which are often indistinguishable from human-written messages.
Audit trails — Document legitimate AI use clearly. As enforcement increases, being able to demonstrate responsible deployment matters.
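As one illustration, here is a minimal sketch of what such an audit record might look like, in Python using only the standard library; the field names, the model name, and the approver address are illustrative assumptions, not a regulatory schema.

```python
# Minimal sketch of an audit record for legitimate AI-generated content.
import hashlib
import json
from datetime import datetime, timezone

def audit_record(model: str, prompt: str, output: str,
                 purpose: str, approver: str) -> str:
    """Return a JSON line documenting one AI generation event.

    Prompt and output are stored as SHA-256 hashes so the log can
    prove what was generated without retaining sensitive text itself.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "purpose": purpose,
        "approver": approver,
    }
    return json.dumps(record)

# Example: logging a marketing draft produced with an internal model.
print(audit_record("internal-llm-v2",
                   "Draft a renewal reminder email...",
                   "Dear customer, your plan renews...",
                   "customer communications",
                   "jdoe@example.com"))
```

An append-only log of records like these is cheap to produce now and hard to reconstruct later, which is exactly the asymmetry enforcement tends to exploit.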
What Happens Next
The bill moves to committee review. Given bipartisan sponsorship and the high-profile incidents that prompted it, some version is likely to advance.
For businesses: the regulatory environment for AI is tightening. The question isn't whether AI fraud legislation passes—it's how comprehensive it will be.
The AI Fraud Deterrence Act focuses on criminal use. Future legislation may address negligent deployment, liability for AI-generated harms, and mandatory disclosure requirements.
Organizations building AI capabilities now should design systems with compliance infrastructure from the start. Retrofitting is always more expensive.