AI integration soars, but audiences distrust media disclosing usage
Content Strategies Blog | 16 October 2025
Two recent events spotlighted a serious dilemma facing established media and news brands: While the media and public are embracing AI in life and work, audiences express worsening distaste for the use of AI to create the news.
The expansive and educational INMA Media Innovation Week in Dublin showed the alacrity with which the industry is adopting the tech to its benefit. Stage talks and case studies evidenced how common it already is for AI to feature somewhere in the creation chain of content and products.
This is far beyond the once-feared newsroom use of large language models (LLMs) to churn out content on a topic, which has never really been a thing in serious media for obvious reasons around trust and law.
Journalists spoke about how they routinely use AI tools to remove tedium and legwork, and to help analyse data sets, identify patterns, and turn bland numbers into compelling stories at a pace yesterday’s researchers and assistant reporters would scarcely believe.
Product teams have AI tools that can quickly recompose existing content for new audiences and formats, while subscriptions and commercial teams use AI to better match existing content with audiences and unlock new ad strategies.
Forget the making of content by AI, and look instead at how it is making content viable.
But what do people think of it?
The public view: “Do as we say, not as we do!”
The general public mirrors the industry in embracing AI day to day.
Just after INMA Dublin concluded, a Reuters Institute report confirmed surging public use of generative AI tools. Those using the tech increased by a mammoth 52.5% in a year, and those who used it at least weekly increased even more, by 88%.
So, the public is sold on AI, right? Err … no.
The same Reuters Institute stats highlighted a huge problem media brands face: While they and audiences are au fait with the tech, public acceptability of its use in the news-making process is actually going down.

Only 12% believe it is acceptable for AI to fully create the news, down from 14% a year ago. The figure for those who accept news created mostly by AI, but with some human oversight, was flat at 21%. Given the ballooning usage of AI in the real world, those figures are a boat anchor and a problem.
The unsettling conclusion is that, the more the public uses AI, the less they trust it. If AI fakery in the relatively low-stakes environment of Facebook feeds and YouTube ads is annoying, the idea of it being used in high-stakes news is not an upsell.
Worse, those most cynical about AI already seem convinced it is being used to generate newsroom content more widely than it actually is.
At first glance, this suggests there is much to be gained by showing audiences exactly where and how AI is being used in the content lifecycle.
But there’s more bad news: Recent research by Trusting News into AI usage disclosures showed that the instant a site mentions AI usage next to a piece of content, people trust the content less. Even when the disclosure said no AI was used at all, trust dropped!
However you cut it, the simple mention of AI diminishes trust.
Even though 94% of people say they want journalists to disclose their use of AI, they don’t reward honesty with an increase in trust.
That’s a disturbing reality, not made more palatable by other Reuters Institute stats showing that just 19% of people actually see disclosures about AI usage in news content on a regular basis. If outlining AI usage were to become a legal requirement or an industry-driven mandate, the omens are not great.
Damned if you do, or don’t
So, what do we do? In this case, I struggle to believe this is a challenge the industry can solve by itself.
For a start, different national blocs will have different legal requirements for indicating AI usage — or none at all. There is no one-size-fits-all approach here.
Secondly, we should be realistic that, for every media brand wanting to go the extra yard in highlighting its use of AI, plenty more will say nothing at all. This is particularly true when there only seems to be a negative reaction to it. While some posit voluntary standards (a mooted outcome within the European Union’s AI Act), it seems unlikely most sites will flag AI usage until it is law.
To me, there seem to be two critical factors at play: The first is how good your relationship is with your audience, so you can micro-manage exactly how you tell them about any AI usage (should you choose to do that).
The second is beyond us all in any single industry and is arguably a broader societal need: If the AI companies are to avoid killing their golden goose before it’s started laying profitable eggs, they need to normalise AI declarations, stop courting controversy by promoting the outputs of stolen work, and stop pumping out AI slop to serve their own short-term goals of market share or ad revenue.
Until then, trustworthy media is in a tricky spot, no matter what it tries.