Deloitte report errors spark AI citation concerns in Canada

Scrutiny of public policy research is intensifying after Deloitte report errors were uncovered in a high-profile Canadian healthcare review.

How did AI-linked issues surface in the Newfoundland and Labrador healthcare review?

A government-commissioned healthcare analysis by Deloitte, costing nearly $1.6 million, contains apparent AI-generated inaccuracies, according to an investigation by The Independent. The 526-page document, released in May by the government of Newfoundland and Labrador, represents one of the most expensive recent consulting contracts in the province’s health sector.

The report was prepared for the then-Liberal-led Department of Health and Community Services. It examined virtual care, retention incentives, and the impact of the COVID-19 pandemic on healthcare workers, at a time when Newfoundland and Labrador faces serious nurse and doctor shortages. However, the subsequent media review has raised questions about the reliability of the evidence underpinning the report’s recommendations.

The Independent, a progressive outlet focused on Canada’s easternmost province, found multiple potential inaccuracies and anomalies. Moreover, its investigation suggests some of the underlying research citations may have been produced or distorted with the assistance of artificial intelligence tools, even though the main narrative was not machine-written.

What kinds of errors did investigators uncover in the Canadian healthcare report?

According to The Independent, the Deloitte report included false academic citations that referenced made-up scholarly papers. Those fictional sources were used to support cost-effectiveness analyses, a critical component in shaping healthcare spending decisions. The report also misattributed real researchers to studies they had never worked on, creating the appearance of robust evidence where none existed.

Some citations went further, describing papers allegedly co-authored by researchers who said they had never collaborated. That said, the review did not claim that every reference was flawed. Instead, the concern centers on a pattern of citation problems that could undermine confidence in the report’s conclusions about staffing, virtual care, and system reform.

The report also cited an article supposedly published in the Canadian Journal of Respiratory Therapy. However, investigators were unable to locate the paper in the journal’s database, deepening fears that generative tools may have invented plausible-sounding but nonexistent sources.

How has Deloitte responded to allegations of AI-generated citations?

In a statement to Fortune, a Deloitte Canada spokesperson defended the substance of the work. “Deloitte Canada firmly stands behind the recommendations put forward in our report,” the spokesperson said. “We are revising the report to make a small number of citation corrections, which do not impact the report findings.”

The spokesperson added that artificial intelligence did not produce the written report itself. Instead, they said, AI was “selectively used to support a small number of research citations.” However, given the scale of the healthcare study and the financial stakes, critics argue that even limited reliance on machine-generated references demands much stricter verification and transparency.

Moreover, the firm’s position that the citation fixes do not affect the report’s conclusions has drawn scrutiny. Some academics and policymakers question how fabricated or misattributed research can be corrected without re-evaluating any downstream cost-effectiveness models or workforce projections.

What do affected researchers say about the consulting firm’s fact-checking process?

Gail Tomblin Murphy, an adjunct professor in the School of Nursing at Dalhousie University in Nova Scotia, was among those incorrectly cited. She told The Independent that Deloitte had referenced her in an academic paper that “does not exist.” She noted she had only ever worked with three of the six other authors attributed in the false citation, not the full group described.

“It sounds like if you’re coming up with things like this, they may be pretty heavily using AI to generate work,” Tomblin Murphy said. Her comments highlight growing unease in the academic community about how generative tools can fabricate convincing but inaccurate bibliographies, especially when consultants do not rigorously verify each reference.

She further warned that reports guiding public policy must be supported by validated, high-quality evidence. Moreover, Tomblin Murphy stressed that governments and the public pay significant sums for such work, so it must be “accurate and evidence-informed and helpful to move things forward.” Her critique underscores a perceived breakdown in due diligence rather than a single technical error.

How much did the government-commissioned report cost, and what is the political fallout?

According to an access to information request published in a blog post last Wednesday, the provincial government paid just under $1.6 million for the Deloitte study, in eight installments. As of Monday, the report remained available on the Newfoundland and Labrador government website, despite the emerging controversy over its references and methodology.

Political leadership in the province has changed since the report was delivered. Tony Wakeham, leader of the Progressive Conservative Party in Newfoundland and Labrador, was sworn in as the province’s new premier in late October. However, neither the premier’s office nor the Department of Health and Community Services responded immediately to Fortune’s questions about the May report, and they have not publicly addressed the concerns to date.

That silence leaves open questions about whether the report’s recommendations will continue to guide health policy. It also raises the prospect of further scrutiny by provincial legislators or federal oversight bodies into how consulting firms’ research is vetted before influencing core public services.

How do these Deloitte report errors compare with the Australian welfare case?

The Canadian revelations follow similar issues in Australia. In July, Deloitte produced a $290,000 report to help the Australian government tighten welfare compliance. The 237-page study also relied on generative technology and was later found to contain “hallucinations,” including references to non-existent academic research and a fabricated quote from a federal court judgment.

After a researcher flagged the problems, Deloitte issued a revised version of the Australian study. The updated report, quietly uploaded to the government’s website last month, acknowledged that the firm had used Microsoft’s Azure OpenAI generative language service to help create the initial document. That admission came only after outside scrutiny exposed the flawed citations.

In the updated Australian report, Deloitte wrote that “the updates made in no way impact or affect the substantive content, findings and recommendations in the report.” However, critics argued that fabricated sources inherently cast doubt on the integrity of any evidence-based recommendations, not just cosmetic details. Moreover, this second episode has intensified the debate over Deloitte’s AI hallucinations and the robustness of the firm’s internal fact-checking.

What financial consequences has Deloitte faced, and what remains unresolved for Canada?

As part of the Australian case, Deloitte’s local member firm was required to provide the federal government with a partial refund for the flawed welfare report. That financial penalty signaled that officials considered the AI-related inaccuracies serious enough to warrant compensation.

In contrast, no information has yet been made public about any potential refund or contractual remedy related to Canada’s healthcare report. That said, pressure may grow as policymakers, healthcare workers, and taxpayers ask whether they received full value for a study that now needs post-publication corrections to its evidence base.

More broadly, both episodes highlight rising risks when governments rely on large consultancies using generative technology without stringent safeguards. They also suggest that public institutions will need stronger standards for verifying sources, especially in long, complex policy reports that can shape critical services for years.

In summary, the unfolding scrutiny of Deloitte’s Canadian healthcare review and Australia’s welfare study underscores the urgent need for reliable evidence, transparent methodology, and robust oversight whenever AI-assisted research informs public policy decisions.

Source: https://en.cryptonomist.ch/2025/11/25/deloitte-report-errors-ai/