AI Assistants Get The News Wrong Nearly Half The Time, Say Researchers

All the major AI assistants are routinely misrepresenting news content, right across languages and territories, according to the biggest-ever research study on the topic.

According to the Reuters Institute’s Digital News Report 2025, 7% of online news consumers use AI assistants to get their news, rising to 15% of under-25s.

But according to a new international study, coordinated by the European Broadcasting Union (EBU) and led by the BBC, all the major AI assistants are misleading their users on a regular basis.

The researchers evaluated more than 3,000 responses from ChatGPT, Copilot, Gemini, and Perplexity against key criteria, including accuracy, sourcing, distinguishing opinion from fact, and providing context.

And, they found, 45% of all AI answers had at least one significant issue. More than three in ten showed serious sourcing problems—missing, misleading, or incorrect attributions—while one in five contained major accuracy issues, including hallucinated details and outdated information. Fourteen percent failed to provide sufficient context.

Gemini was the worst performer, with significant issues in 76% of responses, more than double the rate of the other assistants. This was mostly down to poor sourcing, such as misattributing claims, a problem that is particularly worrying when the claim itself is incorrect.

“We’re excited about AI and how it can help us bring even more value to audiences. But people must be able to trust what they read, watch and see,” said Peter Archer, BBC programme director, generative AI.

“Despite some improvements, it’s clear that there are still significant issues with these assistants. We want these tools to succeed and are open to working with AI companies to deliver for audiences and wider society.”

The assistants, though, are perfectly happy to answer questions whether or not they can give a high-quality answer. Across the entire dataset of 3,113 core and custom questions, only 17, or 0.5%, were met with a refusal, down from 3% in a previous BBC study in February.

Worryingly, in a separate BBC report, researchers found that just over a third of UK adults say they completely trust AI to produce accurate summaries of information, rising to almost half of under-35s.

“These findings raise major concerns. Many people assume AI summaries of news content are accurate, when they are not; and when they see errors, they blame news providers as well as AI developers—even if those mistakes are a product of the AI assistant,” the researchers said.

The research team has released a News Integrity in AI Assistants Toolkit, aimed at helping to develop solutions to the issues uncovered in the report. Meanwhile, the EBU and its members are pressing EU and national regulators to enforce existing laws on information integrity, digital services, and media pluralism.

They stress that ongoing independent monitoring of AI assistants is essential, given the fast pace of AI development.

“This research conclusively shows that these failings are not isolated incidents. They are systemic, cross-border, and multilingual, and we believe this endangers public trust,” said EBU media director and deputy director general Jean Philip De Tender.

“When people don’t know what to trust, they end up trusting nothing at all, and that can deter democratic participation.”

Source: https://www.forbes.com/sites/emmawoollacott/2025/10/22/ai-assistants-get-the-news-wrong-nearly-half-the-time-say-researchers/