People turn to GenAI for news but trust and accuracy lag

By Massimo Barsotti

Eidosmedia

Milan, Italy

As we reported earlier this year, GenAI search engines are gaining popularity, with well over half of U.S. search queries now resulting in zero clicks. Market analysts expect GenAI search is here to stay: All About AI reported GenAI search engines are “projected to capture 62.2% of total search volume by 2030.”

However, an extensive, international study led by the BBC recently revealed a staggering 45% of GenAI search responses “had at least one significant issue.”

Seven percent of online news consumers use AI to get their news, yet 45% of GenAI responses have significant issues.

What does this mean for the growing number of users who rely on AI-generated responses to gain accurate information? Let’s take a closer look at the BBC’s findings and explore the nature of trust.

The current state of GenAI news

Drawn to the instantaneous results promised by GenAI summaries, an increasing number of people are getting their news from AI assistants like ChatGPT.

Citing Reuters’ 2025 Digital News Report, the BBC reported “7% of total online news consumers use AI assistants to get their news.”

Unsurprisingly, digital natives are leading the charge. A study conducted by Salesforce found 70% of Gen Z are using GenAI. However, of that 70%, only 52% reported they “trust the technology to help them make informed decisions,” suggesting younger users are aware GenAI search responses might be too good to be true.

Assessing the accuracy of GenAI search responses

In a coordinated effort with the European Broadcasting Union (EBU), the BBC worked with 22 public media organisations across 18 countries to determine the accuracy of more than 3,000 GenAI search responses from the top four engines: ChatGPT, Copilot, Gemini, and Perplexity.

Assessing the GenAI responses “against key criteria, including accuracy, sourcing, distinguishing opinion from fact, and providing context,” the journalists found:

  • 45% of all AI answers had at least one significant issue.
  • 31% “showed serious sourcing problems” — missing, misleading, or incorrect attributions.
  • 20% contained major accuracy issues, including hallucinated details and outdated information.
  • Gemini performed worst with significant issues in 76% of responses, more than double the other assistants, largely due to its poor sourcing performance.
  • A comparison between the BBC’s results from earlier this year and this study shows some improvement but still-high levels of error.

But do readers care?

With this mountain of evidence confirming suspicions that GenAI engines are indeed peddling misinformation, the question then shifted to the users. Do people still care enough about accuracy to give up the convenience and expediency of GenAI news summaries? The BBC partnered with global market research company Ipsos to find out.

In Audience Use and Perceptions of AI Assistants for News, the BBC reported 47% of U.K. adults consider AI-generated “news summaries helpful for understanding complex topics.” More than one-third “trust AI to produce accurate summaries of information.”

But they also discovered that, for the vast majority of users, trust is a tenuous and revocable thing that can be withdrawn at the first sign of unreliability: “84% said a factual error would have a major impact on their trust in an AI summary, with 76% saying the same about errors of sourcing and attribution. This was also high for errors where AI presented opinion as fact (81%) and introduced an opinion itself (73%).”

The BBC also affirmed the erosion of trust has an immediate and measurable impact on user behaviour.

“After being made aware that summaries may contain mistakes, those who instinctively disagreed with ‘I trust GenAI to summarise news for me’ rose by 11 percentage points. 45% said they’d be less likely to use GenAI to ask about the news in future, rising to 50% among those aged 35+,” according to the report.

Can AI-generated news content regain user trust?

In response to these findings, the BBC published the “News Integrity in AI Assistants Toolkit,” which includes a comprehensive breakdown of the major failures of GenAI search engines and identifies “four key components of a good AI assistant response.”

These are:

  1. Accuracy: Is the information provided by the AI assistant correct?
  2. Context: Is the AI assistant providing all relevant and necessary information?
  3. Distinguishing opinion from fact: Is the AI assistant clear whether the information it provides is fact or opinion?
  4. Sourcing: Is the AI assistant clear and accurate about where the information it provides comes from?

If GenAI engines can produce content that satisfactorily answers the questions above, then perhaps AI-generated news summaries can meet the lofty expectations fueled by the bullish AI market.

But if this state of misinformation persists, and current user attitudes prevail, GenAI-powered search will run the very real risk of reputational collapse. And as more institutions sound the alarm on GenAI’s systemic failures, the window for meaningful reform is closing.
