How to measure, communicate the impact of AI
Generative AI Initiative Newsletter Blog | 28 August 2025
Hello, everyone! After a recent edition of this newsletter in which I answered a couple of frequently asked questions, I received a few requests from readers asking that I surface more questions like that and best practices to address them. So, you’ll find more of that in this edition.
Speaking of best practices: I’m also highlighting two media industry use cases that exemplify smart approaches to GenAI audio. One is from a giant broadcaster that aims to customise content for an entire nation; the other is from a niche publisher that caters to a small, specialised audience. Both hold valuable lessons for how to implement GenAI audio experiments.
In the meantime, if you have a burning question for a future edition, please do send it my way: sonali.verma@inma.org.
Thanks for reading,
Sonali
How to measure – and communicate – the impact of AI
“How can we effectively measure AI maturity and track progress across business, people, and tech/product dimensions as we continue building our AI capabilities?” one AI leader at a media company asked.
Further, this impact often needs to be communicated to top management and the board. How does one manage their expectations and do this well?
Here are some suggestions for addressing this challenge from other AI innovators in the industry:
If you are introducing AI tools, measure the adoption of each tool rather than looking at an aggregate. The sum total will hide variations and nuances that are important to measure, an AI leader at a European media company pointed out.
If you frame using an AI tool or application as an experiment, then you go in with a hypothesis and a clear idea of the metrics that measure its success. If it is a new tool, start with a pilot and a pilot metric, so you can run a clean test that produces clear results.
Consider running an employee satisfaction survey. If using a tool makes employees feel less overwhelmed or more productive, that is a win.
Keep a running list of successes. One AI leader classifies all experiments as either “filler” or “killer.” Newsroom staff often come forward with “filler” ideas — an automated Facebook post nobody engages with, for example. She keeps a list of these to signal to staff that they are not worth the effort. More importantly, she highlights “killer” experiments that deliver significant value and demonstrate AI’s impact. “Develop a list of killer experiments,” she said, and communicate that upwards.
“Have a single key metric that is consistent” and present it regularly, said another AI leader. “Higher ups are happy to glom onto it. Find a repeatable format that is easy for them to wrap their heads around.” No need to make it fancy. It might be something as straightforward as usage statistics for a particular tool, month to month.
It is more important to be interesting than complete when communicating, said Tina Nunno, managing vice president of AI business value at Gartner. Don’t inundate your leaders with the details (particularly if you have a technical background and know all of them). What do they really want to know? They are concerned about value for shareholders and stakeholders. What is the value you are pursuing? Focus on that.
Set expectations by quantifying risks and readiness, Nunno said. What are the obstacles to overcome and what will it take for the company to overcome them? For example: Will your organisation be able to hire scarce AI talent or is outsourcing it a better option?
Always present AI through the lens of shareholder and stakeholder value. Key questions to answer: How will AI impact them and employees and the brand? Will it create more volatility for you or help manage it? Where and when will it be visible on the financials?
Remember: Measuring maturity is different for traditional AI and for GenAI. We are used to old-fashioned AI, where maturity is mapped with benchmarks like model accuracy, recall, precision, cost savings, and operational lift over time. But GenAI maturity is harder to measure, in part because it is democratised and cross-functional. It is not only skilled data scientists and engineers interacting with the tools any more. Anyone can prompt and experiment, making the journey to enterprise-wide maturity more complex and nonlinear. Metrics here can revolve around user satisfaction, employee retention, creative output quality, and time savings.
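The per-tool adoption idea from the suggestions above can be made concrete with a small sketch. This is purely illustrative — the event format, names, and headcount figure are assumptions, not any publisher’s actual reporting setup — but it shows why a single aggregate hides the variation a monthly, per-tool breakdown reveals:

```python
from collections import defaultdict

def adoption_by_tool(usage_events, headcount):
    """Per-tool monthly adoption rate: distinct users / total headcount.

    usage_events: iterable of (user, tool, month) tuples.
    Reporting one aggregate number would hide per-tool variation.
    """
    users = defaultdict(set)  # (tool, month) -> set of distinct users
    for user, tool, month in usage_events:
        users[(tool, month)].add(user)
    return {key: len(u) / headcount for key, u in users.items()}

# Hypothetical usage log for a newsroom of four people:
events = [
    ("ana", "summariser", "2025-07"), ("ben", "summariser", "2025-07"),
    ("ana", "summariser", "2025-08"), ("ana", "transcriber", "2025-08"),
]
rates = adoption_by_tool(events, headcount=4)
# summariser adoption drops from 50% in July to 25% in August --
# a trend an all-tools aggregate would obscure.
```

A table of these rates, presented month over month, is exactly the kind of simple, repeatable format that leadership can “wrap their heads around.”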
Dates for the calendar: October 20-24
Do you have Google Zero, bot scraping, and the new news ecosystem all figured out? Hmm, me neither. Please join us for a dynamite week of learning at our INMA Media and Tech Week. Come for the speakers, stay for the conversations with your peers who are trying to solve the same problems.
Best practices for audio GenAI applications
Many publishers are doing interesting work using audio applications of GenAI. A couple of them caught my eye because in each case, the newsroom is easily extending its ability to reach new audiences by using GenAI — and ensuring human judgment is applied carefully as well.
The first is at the British Broadcasting Corporation (BBC), which is experimenting with a “valuable, tailored service”: a daily audio summary of football news for fans of three clubs, plus a twice-weekly summary for two more clubs.

The broadcaster will run GenAI on existing BBC articles about the clubs to produce a draft audio script and then generate an audio recording of the script using a synthetic voice.
Its editors will check each script and recording for accuracy before publication, “and we will clearly highlight our use of AI to listeners in line with the BBC’s AI transparency commitments,” said the BBC’s executive sponsor of GenAI, Rhodri Talfan Davies.
The pilot will run for four weeks, with new episodes published at 5 p.m. each day.
Why is doing it this way a good idea? The BBC is:
Creating a habit for its listeners.
Using segmentation to ensure the right content reaches the right audience.
Using GenAI to create a new format for already existing stories written by reporters (rather than using AI to generate the stories).
Reaching new audiences, e.g., football fans who may not have the ability or the time to read about their team.
Running a limited-time, limited-scope pilot to “explore whether the use of synthetic voice can be deployed to create new, more personalised content experiences, and to test how users respond to them.”
Building something that can scale easily to cover major U.K. football clubs in a country where the sport is akin to religion.
Another innovative audio application that caught my eye was a niche product with a very clear aim: AI Talks With Bone & Joint, a four-minute podcast in which GenAI summarises a recent study from one of Bone & Joint’s four journals and synthetic voices narrate the result.
The publisher is happy because the podcast raises awareness in the orthopaedic medicine community of Bone & Joint’s journals as a destination for research papers (submitting authors pay a fee) — and because the podcast has even been nominated for an award. The authors of the summarised papers are happy, too, because their work gets extra promotion.

Bone & Joint is, admittedly, not my normal media consumption, but I did listen to a couple of audio summaries and found them fascinating (ask me about thigh bone fractures!) — and worth highlighting here because the publisher is doing many things that all of us ought to be doing:
Listening to their audience: The podcast is deliberately “bite-sized” because the journals’ busy audience members have “these small nuggets of time,” according to Bone & Joint’s director of publishing and innovation, Emma Vodden.
Drawing people down the sales funnel: The AI podcast is based on Bone & Joint’s open-access journals, but the publisher also has two paywalled publications and a “very successful” human-hosted podcast.
Easily reaching a new audience through a new product: GenAI lets the publisher, which has a staff of 16, stretch easily without disrupting its workflow or adding significant cost.
Ensuring newsworthiness by having editors pick which papers are summarised in the podcast.
Ensuring accuracy by having authors of the summarised papers check the draft scripts.
Maintaining trust with listeners: The title transparently makes it clear to the listener that they are listening to machines, not humans.
Both publishers used ChatGPT for their audio scripts and ElevenLabs for their synthetic voices.
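Neither publisher has shared code, but the workflow they both describe — GenAI drafts a script from already-published material, a synthetic voice renders it, and a human signs off before anything is released — can be sketched roughly as below. Every name here is illustrative: this is a minimal sketch of the editorial gate, not either publisher’s implementation, and the model/voice calls are stubbed out as comments.

```python
from dataclasses import dataclass

@dataclass
class Episode:
    script: str
    approved: bool = False  # a human editor must flip this before release

def draft_script(articles, topic):
    # Stand-in for the GenAI step: in production this prompt would be
    # sent to a model (both publishers used ChatGPT), and the returned
    # draft routed to an editor for the accuracy check.
    return (
        f"Summarise today's {topic} news as a short spoken script, "
        "using only the articles below.\n\n" + "\n---\n".join(articles)
    )

def publish(episode):
    # Mirrors the BBC's rule: every script and recording is checked by
    # an editor before release, and the use of AI is disclosed to
    # listeners. The synthetic-voice rendering (both publishers used
    # ElevenLabs) would happen between approval and release.
    if not episode.approved:
        raise ValueError("Episode needs editorial sign-off before publishing")
    return {"status": "published", "disclosure": "Made with AI assistance"}
```

The point of structuring it this way is that the human check is not optional: an unapproved episode cannot reach the publish step at all, which is how both publishers keep accuracy and transparency in the loop.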
Worthwhile links
- GenAI and scraping: Here’s a publisher that has blocked Google and made its content available only to logged-in users.
- GenAI and browsers: An interesting glimpse inside the brain of Perplexity CEO Aravind Srinivas.
- GenAI and money: You heard it here first: Google is preparing to scale up ads in AI mode.
- GenAI and money II: Is AI making you rich yet? Hmm, me neither, but there are now almost 500 privately held AI companies worth more than US$1 billion.
- GenAI and engagement: The Financial Times managed to coax more readers into their comments section by using GenAI. The quality of comments improved as well.
- GenAI and unions: Axel Springer’s Politico says AI-generated reports should not be held to the newsroom’s editorial standards.
- GenAI and learning: Have LLMs plateaued?
- An AI diversion: NASA’s multimodal medical AI assistant will help astronauts on missions diagnose and treat their ailments.
About this newsletter
Today’s newsletter is written by Sonali Verma, based in Toronto, and lead for the INMA Generative AI Initiative. Sonali will share research, case studies, and thought leadership on the topic of generative AI and how it relates to all areas of news media.
This newsletter is a public face of the Generative AI Initiative by INMA, outlined here. E-mail Sonali at sonali.verma@inma.org or connect with her on INMA’s Slack channel with thoughts, suggestions, and questions.