Deloitte's AI Citation Scandal: $1.9M in Government Reports With Fabricated Sources
ZAICORE
AI Engineering & Consulting
2025-11-26


AI · Consulting · Risk

Two Deloitte reports commissioned by government clients now contain confirmed AI-generated fabrications. The cost: $1.9 million in taxpayer money and a credibility crisis for AI-assisted consulting.

The Canadian Report: $1.6M in Fake Citations

Newfoundland and Labrador paid Deloitte $1,598,485 for a 526-page Health Human Resources Plan. The report was intended to guide policy on nurse and physician recruitment in a province facing persistent healthcare shortages.

An investigation by The Independent found the report cited:

  • Academic papers that don't exist
  • Real researchers listed as authors of papers they never wrote
  • An article attributed to the Canadian Journal of Respiratory Therapy that the Canadian Society of Respiratory Therapists confirmed was never published

Gail Tomblin Murphy, an adjunct professor at Dalhousie University's School of Nursing, was among those incorrectly cited. She told investigators the paper attributed to her "does not exist."

Deloitte's response: "AI was not used to write the report; it was selectively used to support a small number of research citations."

The distinction matters less than the outcome. Fabricated sources made it into a government policy document that will shape healthcare decisions.

The Australian Report: Partial Refund Required

Four months earlier, Deloitte produced a $290,000 report for the Australian government on welfare compliance. Dr. Chris Rudge, a Sydney University researcher, identified up to 20 errors in the first version—including fabricated academic references and an invented quote attributed to a federal court judge.

The firm later disclosed it had used Azure OpenAI GPT-4o to fill "traceability and documentation gaps."

Senator Barbara Pocock called for a full refund: "Deloitte misused AI and used it very inappropriately: misquoted a judge, used references that are non-existent."

Deloitte refunded only the final payment installment.

Why This Keeps Happening

The pattern is consistent: generative AI tools produce confident, well-formatted text with citations that look legitimate but don't exist. This is a known behavior called hallucination. LLMs generate statistically plausible outputs, not factually verified ones.

The failure isn't in the AI. It's in the process.

Using GPT-4 to draft citations without verification is like using autocomplete to write legal contracts. The tool does what it's designed to do. The error is treating its output as source material rather than a starting point for human review.
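
To make the failure mode concrete, here is a minimal sketch of the unsafe pattern, written against the OpenAI Python SDK. The model, prompt, and topic are illustrative assumptions, not a reconstruction of either report's workflow; the point is that the API returns confident, well-formatted references whether or not they exist.

```python
# Anti-pattern sketch (illustrative prompt and model, not Deloitte's
# workflow): asking an LLM to draft citations and treating the reply
# as source material.
from openai import OpenAI  # pip install openai; needs OPENAI_API_KEY

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": (
            "List five peer-reviewed papers on nurse retention in "
            "rural Canada, with authors, journals, and years."
        ),
    }],
)

# The reply is fluent and correctly formatted either way; nothing in
# the response signals whether these papers exist. Pasting it into a
# deliverable without verification is the process failure.
draft_citations = response.choices[0].message.content
print(draft_citations)
```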

The Cost of Skipping Verification

| Report | Cost | Errors Found | Outcome |
|--------|------|--------------|---------|
| Newfoundland Health Plan | $1.6M | Multiple fake citations, invented papers | Under review |
| Australian Welfare Report | $290K | 20+ errors, fabricated judge quote | Partial refund |

Combined: nearly $1.9 million in reports with fundamental credibility issues.

What Rigorous AI Implementation Looks Like

The problem isn't using AI in research. It's using AI without verification infrastructure.

A defensible process requires:

  1. Source verification — Every citation generated by AI must be independently verified against primary sources (see the sketch after this list)
  2. Human review gates — AI-generated content should be flagged for mandatory human review before inclusion
  3. Audit trails — Document which content was AI-assisted and what verification was performed
  4. Domain expertise — Reviewers must have subject matter expertise to catch errors that look plausible but aren't
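
As an illustration of what the first three controls can look like in code, here is a minimal Python sketch. It assumes each AI-drafted citation carries a DOI, checks it against Crossref's public REST API, prints an audit line for every check, and holds every AI-sourced citation for human review regardless of the lookup result. The Citation class, the 0.8 title-similarity threshold, and the logging format are illustrative assumptions, not an industry standard.

```python
"""Sketch of a citation verification gate; illustrative, not a
reconstruction of any firm's process. Assumes citations carry DOIs."""
from dataclasses import dataclass
import difflib

import requests  # third-party: pip install requests

CROSSREF_WORKS = "https://api.crossref.org/works/"


@dataclass
class Citation:
    title: str
    doi: str
    origin: str  # "ai" or "human" (tracked for the audit trail)


def verify_against_crossref(cite: Citation) -> tuple[bool, str]:
    """Check that the DOI resolves and the registered title matches."""
    resp = requests.get(CROSSREF_WORKS + cite.doi, timeout=10)
    if resp.status_code == 404:
        return False, "DOI not registered with Crossref"
    resp.raise_for_status()
    registered = resp.json()["message"]["title"][0]
    similarity = difflib.SequenceMatcher(
        None, cite.title.lower(), registered.lower()
    ).ratio()
    if similarity < 0.8:  # crude threshold; tune before real use
        return False, f"title mismatch: Crossref has {registered!r}"
    return True, "matched Crossref record"


def review_gate(citations: list[Citation]) -> list[Citation]:
    """Return the citations that must go to a human reviewer."""
    held = []
    for cite in citations:
        ok, reason = verify_against_crossref(cite)
        # Audit trail: record every check, its origin, and the outcome.
        print(f"[audit] doi={cite.doi} origin={cite.origin} ok={ok} ({reason})")
        # Review gate: anything unverified, and anything AI-drafted,
        # goes to a human before it can appear in the report.
        if not ok or cite.origin == "ai":
            held.append(cite)
    return held
```

A real pipeline would also need to resolve citations without DOIs, but even a gate this small would have flagged papers that were never published.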

This adds cost. It also prevents delivering fabricated research to government clients.

The Takeaway

AI can accelerate research, drafting, and analysis. It cannot replace verification. The Deloitte cases demonstrate what happens when organizations deploy AI to save time without investing in the infrastructure to catch its errors.

The $1.9 million spent on these reports is a small number compared to the policy decisions they were meant to inform. A fabricated citation in a healthcare workforce plan or welfare compliance framework doesn't just embarrass a consulting firm—it potentially shapes real policy based on sources that don't exist.

The question for any organization using AI in professional services: do you have a process that catches hallucinations before they reach the client?
