AI now depends on trusted journalism to separate truth from noise
Conference Blog | 28 October 2025
Throughout INMA’s Media Tech & AI Week — from the three-day Silicon Valley and Bay Area study tour to the two-day San Francisco conference — discussion centred on journalism’s evolving role in an AI ecosystem now learning from the very information it produces.
Speakers described a feedback loop between credible reporting and AI: The more accurate and transparent the training data, the more dependable the models. For news publishers, this means their archives and live content have become essential inputs for generative systems — often scraped automatically and sometimes licensed under negotiated terms.
Participants examined what constitutes fair value when journalism fuels model development, how attribution and authority can persist once content is mediated by algorithms, and what civic responsibilities accompany that influence.
The data behind the dialogue
At Cloudflare, executives explained how crawlers gathering training data for large language models regularly scrape news publisher sites without approval, collecting material to train systems that rarely cite or compensate the sources.
To counter this imbalance, Cloudflare created AI Audit (later renamed Crawl Control) and a “pay-per-crawl” beta that allows media companies to see, block, or monetise such activity. They also declared Content Independence Day, making AI-training blocks the default for new domains.
Their message to delegates was that models depend on verified human journalism and that safeguarding those inputs restores integrity to the open Web.
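As an illustration of the baseline controls under discussion, a publisher can signal crawler preferences in robots.txt. The user-agent names below are documented AI crawlers, but the policy shown is a hypothetical example, and robots.txt compliance is voluntary — which is precisely why server-side tools such as Crawl Control exist.

```text
# Disallow common AI-training crawlers while leaving search indexing intact.
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: ClaudeBot
Disallow: /

# Conventional crawlers remain allowed.
User-agent: *
Allow: /
```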
Speakers on the study tour also referred to MCP servers — controlled endpoints through which chatbots and AI agents can request information directly from news publishers. This system gives news organisations first-party data insight while ensuring audiences can trace where responses originate.
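To make the idea concrete, here is a minimal sketch of a first-party content endpoint — not the MCP specification itself, just the underlying pattern of serving content with attribution attached. The publisher name, URL, field names, and license term are illustrative assumptions.

```python
import json

# Hypothetical article store; a real endpoint would query the publisher's CMS.
ARCHIVE = {
    "a-101": {
        "headline": "Example headline",
        "body": "Example article body.",
    }
}

def serve_article(article_id: str) -> str:
    """Return an article as JSON with machine-readable attribution,
    so an AI agent's answer can show where the material originated."""
    article = ARCHIVE[article_id]
    payload = {
        "headline": article["headline"],
        "body": article["body"],
        # Attribution travels with the content, not as an afterthought.
        "attribution": {
            "publisher": "Example News",
            "canonical_url": f"https://news.example/articles/{article_id}",
            "license": "ai-training-prohibited",  # illustrative term
        },
    }
    return json.dumps(payload)
```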
Speakers also showcased provenance tools that embed authentication markers into text and visuals, helping AI systems identify genuine content.
For news executives, these examples illustrated that managing data access and provenance has become the new foundation of credibility.
From training to trust
At The Washington Post session, Chief Technology Officer Vineet Khosla described how the company’s long-term AI road map aims to unite engineering excellence with newsroom quality.
The Post has expanded from analytical and recommendation tools to generative systems that assist reporting, yet every deployment is reviewed through editorial standards to maintain accuracy and fairness.

Sara Trohanis, head of strategic partnerships at The Associated Press, reinforced the importance of principled governance, explaining that AP views its journalism as intellectual property requiring explicit consent and compensation when used for AI training.
This approach, built on lessons from both accepted and declined deals, treats licensing as a safeguard of public-interest reporting as well as a sustainable business model.
Perplexity’s Jessica Chan added a complementary perspective from the technology side, outlining how the company’s answer-engine design highlights source transparency so users can verify where each answer originates.
Together, these examples showed how trust and commercial logic are converging: When publishers retain control of their data, AI companies gain better training sets, and audiences gain more reliable systems.
Journalism as verification infrastructure
Multiple sessions reframed journalism as a verification layer for digital ecosystems. Microsoft executives discussed the Coalition for Content Provenance and Authenticity, a metadata standard that allows AI models to check the origin and integrity of material.
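Real C2PA manifests involve signed assertions and certificate chains, but the core integrity idea can be sketched as a digest comparison. Everything below — including the manifest field name — is an illustrative assumption, not the C2PA format itself.

```python
import hashlib

def content_digest(data: bytes) -> str:
    """Fingerprint the content as published."""
    return hashlib.sha256(data).hexdigest()

def verify_integrity(data: bytes, manifest: dict) -> bool:
    # The manifest records the digest the publisher asserted at creation
    # time; any post-publication tampering changes the recomputed digest.
    return content_digest(data) == manifest.get("asserted_sha256")

article = b"Verified reporting, exactly as published."
manifest = {"asserted_sha256": content_digest(article)}
```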
Later, the creators of RSL (Really Simple Licensing) presented a complementary machine-readable protocol that lets publishers specify usage terms for AI training. Supported by major media companies, RSL converts legal principles into code, simplifying enforcement and reducing negotiation friction.
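A declaration in this spirit might look like the fragment below. The element names are an illustrative approximation of the machine-readable-licensing idea, not a verbatim copy of the published RSL schema, and the URLs are placeholders.

```xml
<!-- Illustrative RSL-style declaration; element names are approximate. -->
<rsl xmlns="https://rslstandard.org/rsl">
  <content url="https://news.example/articles/">
    <license>
      <permits type="usage">search</permits>
      <prohibits type="usage">train-ai</prohibits>
      <payment type="purchase">
        <standard>https://news.example/licensing-terms</standard>
      </payment>
    </license>
  </content>
</rsl>
```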
In the week-ending podcast, INMA’s Product & Tech Initiative Lead Jodie Hopperton and Digital Platform Initiative Lead Robert Whitehead reflected that such technical frameworks essentially translate journalism’s long-standing principles — verification, attribution, and accountability — into signals that machines can interpret.
The human factor in a machine-first world
A recurring tension through the week was the balance between automation and human judgment. At Stanford’s Brown Institute, INMA members were warned that opaque algorithms can unintentionally distort neutrality, shaping which journalism is surfaced to the public.
Speakers urged media leaders to participate directly in system design and auditing, arguing that ethical AI depends on newsroom values such as transparency and accountability.

That concern re-emerged in the closing podcast, where Hopperton, Whitehead, and Nicki Purcell, who co-moderated the conference, agreed that audiences still rely on established news brands when information stakes are high. They stressed that the challenge ahead is preserving brand trust when discovery flows through AI intermediaries.
Journalism, they noted, must now express its credibility in machine-readable ways — for instance through verified identifiers and contextual labelling that digital agents can display alongside generated summaries.
Literacy as leadership
Speakers repeatedly underlined that AI literacy — both technical and ethical — is now an executive responsibility.
Katharina Neubert, senior vice president/sharing and investments at Holtzbrinck, observed that employees often struggle to use AI effectively because many lack foundational data fluency. She recommended starting with broad digital skills training before moving to advanced automation initiatives, emphasising that cultural readiness is as important as software deployment.

Sessions from Nota and Nexstar illustrated how this literacy translates into practice. Their leaders described workflow automation that enhances productivity yet still depends on editorial oversight. The organisations making the fastest progress were those where management promotes experimentation while maintaining clear boundaries for ethical use.
For INMA members, the message was direct: Develop structured learning programs, share use cases internally, and treat continuous education as a strategic asset.
The economics of quality
Several speakers reframed content economics for the AI era. Bloomberg Beta’s James Cham explained that publisher material should be regarded as high-value intellectual property rather than low-cost input. As open and inexpensive models multiply, he said, the differential between verified journalism and synthetic data will only widen, giving reliable publishers greater pricing power.
Start-ups such as Andi Search, along with other search- and answer-based products, added that their conversational interfaces rely heavily on credible, attributable information. For these systems, journalism provides the benchmark that keeps generative answers anchored in fact. Without it, model quality erodes — making trust itself a market differentiator.
What publishers should do next
By the close of Media Tech & AI Week, presenters distilled the discussion into practical priorities for media executives:
- Audit data exposure. Identify which bots and agents access your content, then decide what to block, allow, or monetise.
- Structure rights metadata. Adopt frameworks such as RSL so usage permissions and compensation terms are machine-readable.
- Partner on provenance. Collaborate with technology companies on shared authenticity standards that flag verified sources to AI systems.
- Invest in AI literacy. Train staff to understand both the capabilities and the ethical boundaries of generative tools.
- Lead public education. Use news brands’ authority to explain how AI works and why credible data is essential to trustworthy answers.
These steps reposition journalism from a passive data source to an active architect of the information ecosystem.
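The first of those priorities, auditing data exposure, can be sketched as a simple scan of server access logs. The user-agent substrings below are real crawler names, while the log format and helper function are illustrative.

```python
from collections import Counter

# Known AI-crawler user-agent substrings; extend with your own observations.
AI_AGENTS = ["GPTBot", "CCBot", "ClaudeBot", "PerplexityBot", "Bytespider"]

def audit_crawlers(log_lines):
    """Count requests per AI crawler from raw access-log lines."""
    hits = Counter()
    for line in log_lines:
        for agent in AI_AGENTS:
            if agent in line:
                hits[agent] += 1
    return hits

sample_log = [
    '203.0.113.7 - - "GET /article-1 HTTP/1.1" 200 "Mozilla/5.0 GPTBot/1.2"',
    '198.51.100.4 - - "GET /article-2 HTTP/1.1" 200 "CCBot/2.0"',
    '203.0.113.7 - - "GET /article-3 HTTP/1.1" 200 "Mozilla/5.0 GPTBot/1.2"',
]
```

A report like this is the evidence base for the block/allow/monetise decision each publisher must make.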
Reclaiming the narrative
The atmosphere throughout INMA’s week in San Francisco was a mix of caution and optimism. In the city that launched many of the technologies now transforming news, publishers made a clear statement of relevance: Without professional journalism, AI cannot separate truth from misinformation.
This marks a new definition of leadership. Journalists are no longer just observers of disruption; they are contributors to the training and evaluation of the systems that will shape global information flows. The public will rely on them not only for facts but also for guidance in understanding how those facts are processed and delivered by machines.
If the first digital revolution required adapting to external platforms, the next demands teaching those platforms to recognise and respect quality. The week’s speakers concluded that journalism’s mission is not resistance to AI but stewardship — ensuring these technologies learn from the most reliable teachers and remember where their lessons began.