News companies need to talk about AI adoption and scaling
Generative AI Initiative Newsletter Blog | 09 June 2025
As we head into summer in the northern hemisphere, what is on media executives’ minds when it comes to AI?
We explore a couple of key themes in this week’s newsletter — AI scaling and adoption, as well as using AI in a way that creates pride rather than embarrassment.
Sonali
Why we’re talking about AI scaling and adoption
Can we take a moment to talk about an AI-related problem that keeps popping up in conversations with media executives?
It is the problem of scaling and adoption. We are now at the stage where most media companies have run dozens of experiments with AI. But getting employees excited about using the tools is proving to be remarkably challenging.
Indeed, an INMA survey of media executives identified this as one of the top challenges the industry faces over the next 12 months.

Different news organisations are trying different approaches.
For example, Axel Springer’s Business Insider tracks employee AI use and even maintains a leaderboard that names the employees who use it the most. The news brand, which recently announced that it is dismissing about one-fifth of its staff, says about 70% of its employees use ChatGPT and that it is aiming for 100% adoption.
Another example is Thomson Reuters, where AI is used to create news alerts from press releases, package stories, and create company profiles. The newsroom is encouraged to experiment with AI, according to Richard Baum, global general manager of Reuters newsroom operations.
“This is more of a cultural challenge than a tech challenge,” he added. “The tech is doable. Getting people to buy into it is much harder.”
Or as Lars Jensen, who oversees audience insights at Berlingske Media in Denmark, said: “As the technology-driven ‘AI-for-the-sake-of-AI’ hype is ending, factors such as organisational culture and actual implementation become ever more important. The ‘shiny new technology’ part leaves the stage as old giants of ‘change management’ and ‘culture work’ take over.”
Why is this so hard? Fear and insecurity are two big reasons.
According to a survey of more than 7,200 office workers and IT professionals, 46% say the AI tools they use are not employer-provided, and nearly one-third say they use AI secretly rather than reveal it to their employers.
Some also worry that revealing they get help from AI will simply compound the heavy workload they hoped AI would ease, agreeing with the statement: “When I work more efficiently, my employer gives me more work,” according to the survey.
How does one overcome this, then?
Jessica Bulthé, who heads data and insights at Mediahuis in Belgium, has some advice on what works:
- Set up an internal hub where “information, inspiration, and guidelines” are posted, along with updates on projects.
- Send out a regular newsletter with answers to staff questions, announcements of workshops, and similar content. Mediahuis’ newsletter, sent to nearly 7,000 employees, has an impressive open rate of 63%.
- Run training sessions as well as hackathons.
- Create tools that adjust their output to a brand-specific way of writing and tone of voice. “This helps with the adoption rate in the newsrooms,” she said. (A sketch of how such a tone wrapper might look follows below.)
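
Mediahuis has not published how these tools work internally, but a common pattern is to wrap the model call in a system prompt that carries the brand's style guide. Here is a minimal sketch in Python, assuming an OpenAI-compatible API; the model name and style-guide text are placeholders:

```python
# Hypothetical sketch only: Mediahuis has not published how its tools work.
# A common pattern is to carry the brand's style guide in a system prompt.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

BRAND_STYLE = """You rewrite text in the voice of an (imaginary) brand:
short sentences, active voice, no jargon, British spelling."""  # placeholder guide

def rewrite_in_brand_voice(draft: str) -> str:
    """Return the draft rewritten to match the brand style guide."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": BRAND_STYLE},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content
```

Because the style guide lives in a single prompt, editors rather than engineers can tune the voice the tool produces.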

“The biggest barriers we face aren’t hallucinating LLMs or prompt-engineering challenges,” Bulthé said. “The real barriers are fear, fatigue, and identity. Journalists fear that AI will undermine their craft. They’re tired of yet another wave of ‘the next big thing.’ And they wonder: If a machine can write, what’s my value? And you know what? We need to meet that fear with respect, not arrogance.”
As part of that effort, her team works with journalists to co-create tools that can enhance their work. “Journalists remain in control of every decision,” she said.
“AI can create content but not journalism”
Even as her team works to acclimatise journalists to AI, Bulthé was very clear on one point: “We respect the fact that AI can create content, but not journalism. So, all our tools are amplifying the journalistic content that is written by humans. We do not create tools that can write articles by themselves and, therefore, replace journalists. We go for effectiveness rather than efficiency.”
The pitfalls of letting AI do the writing have been demonstrated all too clearly in recent weeks.
The Chicago Sun-Times and The Philadelphia Inquirer published a syndicated summer reading list of 15 novels, 10 of which did not exist. A freelancer used AI to write the list and provided it to King Features, a division of Hearst Newspapers.
“Even though it wasn’t our actual work, the Sun-Times became the poster child of ‘What could go wrong with AI?’” the Sun-Times’ CEO wrote.
“Did AI play a part in our national embarrassment? Of course. But AI didn’t submit the stories, or send them out to partners, or put them in print. People did. At every step in the process, people made choices to allow this to happen.”
And indeed, it is also people who judiciously use AI tools to create fine journalism, as the winners and finalists of the Pulitzer Prize this year demonstrate.
The Associated Press undertook a three-year investigation involving dozens of reporters and the creation of a database to document more than 1,000 deaths in which police officers subdued victims with methods intended to be non-lethal.
“In hundreds of cases, officers weren’t taught or didn’t follow best safety practices for physical force and weapons, creating a recipe for death,” the AP wrote. The victims were disproportionately Black Americans.
Reporters filed nearly 7,000 requests for death certificates, autopsy reports, and body-camera footage, receiving more than 200,000 pages of documents. No more than one-third of the cases the AP identified are listed in federal mortality data as involving law enforcement at all.
The news agency used optical character recognition to extract text from images of documents so it could index the causes of death, and AI transcription to process the audio from hundreds of hours of police body-camera footage.
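
The AP has not published its exact pipeline, but the general shape of such a workflow is straightforward. A minimal sketch, assuming the open-source pytesseract and Whisper libraries; file names are placeholders:

```python
# Hypothetical sketch only: the AP has not published its exact pipeline.
# OCR with pytesseract to index document text; Whisper for body-cam audio.
import pytesseract
import whisper
from PIL import Image

def extract_text(image_path: str) -> str:
    """Pull machine-readable text from a scanned document image."""
    return pytesseract.image_to_string(Image.open(image_path))

def transcribe_audio(audio_path: str) -> str:
    """Transcribe body-camera audio into searchable text."""
    model = whisper.load_model("base")
    return model.transcribe(audio_path)["text"]

# Placeholder file names; the output feeds a searchable index of the records.
report_text = extract_text("autopsy_page_001.png")
bodycam_text = transcribe_audio("bodycam_clip.mp3")
```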
Similarly, journalists from the Center for Public Integrity, Reveal, and Mother Jones used a custom image-recognition algorithm to go through 1.8 million handwritten land records to identify more than 1,000 formerly enslaved people who were given land after the U.S. Civil War — and then stripped of it months later after President Abraham Lincoln was assassinated.
The team also undertook genealogical research and created family trees for more than 100 of these freedmen and tracked down their descendants.
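
The investigation's custom algorithm is not public, but the underlying pattern is common: score each scanned page with a trained image classifier and send high-confidence hits to humans for review. A hypothetical version with PyTorch, assuming a fine-tuned model whose weights file is a placeholder:

```python
# Hypothetical sketch only: the investigation's custom algorithm is not public.
# Pattern: score each scanned page with a trained classifier, then send
# high-confidence hits to human reviewers.
import torch
from PIL import Image
from torchvision import models, transforms

# Placeholder: a ResNet assumed to be fine-tuned on labelled record pages.
model = models.resnet18(num_classes=2)
model.load_state_dict(torch.load("land_record_classifier.pt"))  # placeholder weights
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def land_grant_score(page_path: str) -> float:
    """Return the model's probability that a scanned page is a land-grant record."""
    x = preprocess(Image.open(page_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1)
    return probs[0, 1].item()  # class 1 = land-grant record
```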
The Washington Post used object-detection models to identify military vehicles in satellite imagery as its visual forensics team examined the Israeli military’s explanation for killing two Al Jazeera journalists in Gaza.
The news brand obtained and reviewed drone footage. “No Israeli soldiers, aircraft, or other military equipment are visible in the footage taken that day — which the Post is publishing in its entirety — raising critical questions about why the journalists were targeted,” the Post said.
In addition, Preligens, a geospatial artificial intelligence firm, ran satellite imagery provided by the Post through its AI vehicle detector and found no armoured vehicles within 9.7 square miles.
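
Neither the Post's models nor Preligens' detector are public. A generic version of the technique would run an object detector over satellite image tiles; a minimal sketch using the ultralytics YOLO library, where the weights and file name are placeholders and a detector fine-tuned on military vehicles would be needed in practice:

```python
# Hypothetical sketch only: the Post's and Preligens' detectors are proprietary.
# Generic approach: run an object detector over satellite image tiles.
from ultralytics import YOLO

# Placeholder weights; a model fine-tuned on military vehicles would be needed.
model = YOLO("yolov8n.pt")

def detect_vehicles(tile_path: str):
    """Return bounding boxes for objects detected in one satellite image tile."""
    results = model(tile_path)
    return results[0].boxes  # coordinates, classes, and confidence scores

boxes = detect_vehicles("satellite_tile_001.png")  # placeholder file name
print(f"{len(boxes)} detections in this tile")
```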
The Wall Street Journal used AI to map how Elon Musk’s rhetoric has shifted over time to become increasingly political, particularly after his acquisition of Twitter.

The news brand used AI to analyse more than 41,000 tweets by Musk, going back as far as 2019.
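
The Journal has not described its method in detail. One plausible approach is to label each tweet with an LLM and then chart the political share over time; a minimal sketch, with placeholder data and model name:

```python
# Hypothetical sketch only: the Journal has not detailed its method.
# Label each tweet with an LLM, then chart the political share over time.
import pandas as pd
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def is_political(tweet: str) -> bool:
    """Ask the model for a yes/no political label on one tweet."""
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": f"Is this tweet political? Answer yes or no.\n\n{tweet}",
        }],
    )
    return reply.choices[0].message.content.strip().lower().startswith("yes")

# Placeholder data; the real analysis covered more than 41,000 tweets.
tweets = pd.DataFrame({
    "date": pd.to_datetime(["2019-03-01", "2023-06-15"]),
    "text": ["Excited about the new rocket.", "Vote in the election!"],
})
tweets["political"] = tweets["text"].map(is_political)
share_by_quarter = tweets.set_index("date").resample("QE")["political"].mean()  # pandas >= 2.2
print(share_by_quarter)
```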
As Bulthé said: “If I can leave you with one thing today, it’s this: GenAI won’t kill journalism … . The journalists who adapt — with integrity, with curiosity, and with fire — they won’t just survive. They will lead.”
Worthwhile links
- GenAI and zero-click search: Dotdash Meredith CEO Neil Vogel on his strategy.
- GenAI and cartoons: How two newspaper cartoonists are experimenting with this technology.
- GenAI and advertising: Meta will fully automate ad production.
- GenAI and search optimisation: Edelman launches generative engine optimisation designed to help brands manage their presence in AI-generated search results.
- GenAI and employment: No need to panic (yet).
- GenAI and employment II: Duolingo replaces people with AI, faces backlash.
- GenAI and employment III: AI could wipe out half of all entry-level white-collar jobs — and spike unemployment to 10% to 20% in the next one to five years, says Anthropic CEO Dario Amodei.
- GenAI and deals: Meta signs a military contract for its extended reality headsets.
- GenAI and deals II: The UAE buys OpenAI premium access for its entire population.
- GenAI and the future: An interview with Google CEO Sundar Pichai.
About this newsletter
Today’s newsletter is written by Sonali Verma, based in Toronto, and lead for the INMA Generative AI Initiative. Sonali will share research, case studies, and thought leadership on the topic of generative AI and how it relates to all areas of news media.
This newsletter is a public face of the Generative AI Initiative by INMA, outlined here. E-mail Sonali at sonali.verma@inma.org or connect with her on INMA’s Slack channel with thoughts, suggestions, and questions.