INMA Webinar explores how newsrooms are harnessing generative AI
Generative AI Initiative Blog | 04 November 2025
In the recent Webinar “How to supercharge reporting and streamline workflows with GenAI,” INMA members were offered new perspectives on how AI is being ethically and effectively integrated into newsroom operations.
Presented by INMA and OpenAI and hosted by Sonali Verma, lead of the INMA Generative AI Initiative, the 90-minute event explored how GenAI is reshaping journalism, from streamlining workflows to unlocking new forms of storytelling.
The session featured three presentations, each speaker offering a distinct take on how newsrooms are putting the technology to work.
Building AI with purpose at The New York Times
Zach Seward, editorial director of AI Initiatives at The New York Times, opened his presentation with a reminder to newsrooms: “Start with why — not AI.” He underscored the importance of identifying real newsroom challenges and exploring whether AI can help solve them rather than chasing technology for its own sake.
The Times’ AI Initiatives team, composed of journalists with backgrounds in machine learning, design, editorial, and product development, focuses on two core missions: uncovering stories hidden in messy datasets and making the Times more accessible to diverse audiences.
One example of accessibility is the Times’ increased output of Spanish-language stories, made possible by AI-assisted translation workflows paired with rigorous human editing. Another is the use of text-to-speech automation, allowing readers to listen to articles.
But AI must always serve the newsroom’s mission, be used with human guidance and review, and operate transparently and ethically.

Seward emphasised that there are certain areas in which AI use is off limits: “To be clear, we do not use AI to write articles at the Times; we have no interest in that,” he said.
He predicted “human-written, verified journalism” will become more valuable as AI-generated copy continues to flood the Web. “The value we provide there is already great, and I think it is likely to go up, so we don’t want to mess with that.”
Instead, the technology is used to support production tasks downstream of publication. A key tool in this effort is Echo, an internal platform that allows journalists to experiment with AI-generated summaries, headlines, and social media copy.
Echo is designed to be simple: journalists enter URLs and prompts, and the system generates drafts that editors review and refine. Teams like SEO and the morning e-mail desk have already incorporated Echo into their workflows, with all AI usage disclosed to readers.
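Echo is an internal Times tool and its implementation is not public, but the URL-plus-prompt workflow described above can be sketched in a few lines. The function and field names below are illustrative assumptions, not Echo's actual interface; the sketch simply assembles a chat-completion-style request that pairs an editor's prompt with an article URL.

```python
import json

def build_echo_request(article_url: str, task_prompt: str, model: str = "gpt-4o") -> dict:
    """Assemble a chat-completion payload in the spirit of Echo's
    URL-plus-prompt workflow (hypothetical sketch, not Echo itself)."""
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "You draft summaries, headlines, and social copy "
                        "for editors to review. Never publish directly."},
            {"role": "user",
             "content": f"{task_prompt}\n\nArticle: {article_url}"},
        ],
    }

payload = build_echo_request(
    "https://example.com/story", "Write a one-sentence SEO summary.")
print(json.dumps(payload, indent=2))
```

The key design point the article describes is that the model's output is a draft: the payload goes to a model, and the result lands in front of an editor rather than in the CMS.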
Beyond production, Seward’s team devotes significant resources to AI-assisted reporting. He shared several case studies illustrating how large language models (LLMs) can help journalists sift through massive datasets to find meaningful stories.
In one project, reporters investigating President Trump’s cabinet nominees used semantic search to analyse thousands of hours of television transcripts. AI helped surface quotes that revealed patterns of behaviour — such as Pete Hegseth’s references to drinking — without relying on keyword matches.
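What distinguishes semantic search from keyword matching is that transcript passages and the query are compared as embedding vectors, so a passage about "having a few drinks" can rank highly for "references to drinking" even with no shared keyword. A minimal sketch of the ranking step, using toy hand-written vectors in place of embeddings from a real model:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

# Toy vectors standing in for embeddings produced by an embedding model.
chunks = {
    "He joked about having a few drinks before the show": [0.9, 0.1, 0.2],
    "The weather segment ran long on Tuesday":            [0.1, 0.8, 0.3],
}
query_vec = [0.85, 0.15, 0.25]  # toy embedding of "references to drinking"

ranked = sorted(chunks, key=lambda c: cosine(chunks[c], query_vec), reverse=True)
print(ranked[0])
```

In a real pipeline the vectors would come from an embedding model and the ranking would run over a vector index rather than a Python dict, but the similarity-based retrieval is the same idea.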
Another example involved The Times’ coverage of the Israel-Gaza conflict. Reporters wanted to re-interview Gaza residents quoted in past stories. Echo was used to extract names, quotes, and contextual details from hundreds of articles, producing a spreadsheet of 700 individuals. This led to a powerful follow-up piece on how lives had changed over two years.
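The Gaza project is an instance of structured extraction: prompting a model to return each person's name, quote, and context as machine-readable records, then collecting those records into a spreadsheet. A sketch of the collection half, with a hand-written stand-in for the model's JSON output (the field names and example values are hypothetical, not from the Times' actual pipeline):

```python
import csv, io, json

# A structured-extraction response of the kind a model could be prompted
# to return for each article (hypothetical example values).
llm_response = json.dumps([
    {"name": "Example Resident", "quote": "We lost everything.",
     "article_url": "https://example.com/gaza-story", "date": "2023-11-02"},
])

rows = json.loads(llm_response)
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["name", "quote", "article_url", "date"])
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

Running this over hundreds of articles and concatenating the rows is what turns an archive into the kind of 700-person call sheet the reporters worked from.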
Seward also shared the success of the “Manosphere Report,” an internal newsletter summarising content from right-wing podcasters and YouTube hosts. AI transcribes and summarises new episodes daily, giving reporters a thorough but digestible overview of that digital subculture.
“It’s way too much for anyone to sit and listen to all at once, but it’s very, very important that we keep tabs on what’s going on in that subculture and … this has been quite effective or helpful,” Seward said.
Throughout his talk, Seward stressed the importance of human oversight: “Never trust output from an LLM,” he warned. “Simply assume it is an unreliable source.”
AI hallucinations — false or misleading outputs — are a known risk, and The Times mitigates them through double-checking, citation tracing, and tools like NotebookLM, which link generated content back to source material.
“I don’t even like [the term] ‘hallucinations’ as a euphemism for lying and for the inaccuracies that absolutely come out of all of these models,” Seward said.
Newsbuddy and the power of practical automation
Carlos Martinez-Rivera, director of data strategy and subscription at GFR Media in Puerto Rico, offered a compelling case study in how a small newsroom can achieve big results with AI.
GFR Media operates two major news brands — El Nuevo Día and Primera Hora — and reaches over 3 million users monthly. With a newsroom of about 100 journalists, the organisation faced a common challenge: Too much time was spent on repetitive tasks, especially when publishing wire stories from agencies like the Associated Press.
That led to the creation of Newsbuddy, an AI system developed in partnership with Axmos and supported by the Google News Initiative. Launched in 2024, Newsbuddy automates the publication of wire stories, reducing processing time from 50 minutes to just five. In September alone, it processed over 580 stories — a 67% increase in productivity.

The system’s workflow is a hybrid of automation and editorial oversight. For wire content, Newsbuddy uses Gemini to generate structure, metadata, and headlines. For GFR’s original Spanish-language stories, it uses DeepL to translate the content into English for its diaspora audience in the United States. Editors select stories, and Newsbuddy handles translation, formatting, tagging, image uploads, and keyword generation.
Crucially, humans stay in the loop: “Every piece still goes through the eyes of a human editor before publication,” he emphasised. “AI helps us prepare and optimise the content, but our judgement and verification remain at the centre of the process.”
Martinez-Rivera demonstrated the platform’s interface, showing how editors can select a story, trigger translation, and send it to the CMS (Arc) with all necessary metadata and formatting in place. One of the system’s most impressive features is its ability to learn. Through custom glossaries and iterative use, Newsbuddy improves its handling of gendered language, tone, and local terminology.
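The glossary mechanism is the simplest part of that learning loop to illustrate: preferred local renderings are applied on top of the raw machine translation so terminology stays consistent across stories. The entries and the stubbed translation call below are illustrative, not GFR Media's actual glossary or DeepL integration:

```python
# Glossary entries keep local terminology consistent across translations
# (hypothetical entries, not GFR Media's actual glossary).
GLOSSARY = {"boricua": "Puerto Rican"}

def translate_stub(text: str) -> str:
    """Stand-in for a call to a machine-translation API such as DeepL."""
    return text  # a real pipeline would return the English translation here

def apply_glossary(text: str) -> str:
    """Post-edit the translation with the newsroom's preferred terms."""
    for term, preferred in GLOSSARY.items():
        text = text.replace(term, preferred)
    return text

draft = apply_glossary(translate_stub("A proud boricua chef opens a new restaurant."))
print(draft)
```

DeepL's API also supports server-side glossaries, so in practice the substitution can happen inside the translation call rather than as a post-edit; either way, iteratively growing the glossary is how the system "learns" local terminology.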
However, mistakes still happen; Martinez-Rivera shared an example of an AI hallucination in which Gemini mistakenly identified Kamala Harris as the vice president of Israel.
“This is not something that happens every day, every time, but it is possible. And that’s why every story goes through human review before publication — and that is very, very, very important,” he said.
Looking ahead, GFR Media is expanding Newsbuddy’s capabilities. Upcoming features include automated photo gallery uploads, an AI-powered analytics chatbot integrated with Microsoft Teams, and a transcription tool that generates draft stories from interviews. These innovations aim to free journalists from tedious tasks and allow them to focus on storytelling.
“We think the future of journalism isn’t machines telling stories,” Martinez-Rivera said. “It’s humans telling better stories, powered by AI.”
Unlocking workflows with MCP servers and connectors
The final presentation came from Yiren Lu, a solutions architect at OpenAI, who introduced attendees to MCP servers and connectors — tools that allow organisations to integrate proprietary systems and data into ChatGPT.
Lu described this as a “second ChatGPT moment,” where AI evolves from answering questions to completing entire workflows.
MCP stands for Model Context Protocol, a standardised language that enables LLMs and servers to communicate. Unlike traditional REST APIs, MCP servers are self-documenting and support standardised input and output formats. This allows ChatGPT to discover available endpoints and interact with them dynamically, reducing the need for custom client code.
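The "self-documenting" property comes from MCP's discovery call: a client asks the server to list its tools, and the server answers with each tool's name, description, and input schema, so no hand-written integration code is needed. A simplified sketch of such a response (the `search_archive` tool is a hypothetical newsroom example, and the shape is abridged from the full spec):

```python
import json

# Simplified MCP "tools/list" response: the server describes its own
# endpoints, including the JSON schema each one accepts.
tools_list_response = {
    "tools": [
        {
            "name": "search_archive",  # hypothetical newsroom tool
            "description": "Full-text search over published articles.",
            "inputSchema": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        }
    ]
}
print(json.dumps(tools_list_response, indent=2))
```

Because the model reads these schemas at runtime, adding a new capability to the server makes it available to ChatGPT without any change on the client side.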
Lu outlined three tiers of integration. The first involves native connectors to popular tools like Google Drive, Outlook, and SharePoint. These require minimal setup and allow users to query internal documents, e-mails, and calendars directly within ChatGPT: “If you turn these on … then you’ll just have immediate access to them.”

The second tier includes custom connectors built on existing MCP servers. Companies like Databricks, Snowflake, and Bloomberg have already created MCP servers that users can connect to via developer mode in ChatGPT. Lu demonstrated how to browse and activate these connectors through the ChatGPT interface.
The third tier is building your own MCP server. While most users won’t need to do this, Lu explained that organisations with proprietary data or systems can use Python or TypeScript SDKs to expose endpoints. Authentication can be added to control access, and once connected, ChatGPT can perform both read and write actions — such as querying databases or inserting new records.
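The official Python and TypeScript SDKs handle the transport, schemas, and authentication, but the core pattern a custom server exposes is discover-then-call: functions are registered under names, a client lists them, then invokes one with structured arguments. A dependency-free stand-in for that pattern (the registry and the `add_subscriber` tool are illustrative, not the SDK's actual API):

```python
# A minimal stand-in for the discover-then-call pattern an MCP server
# exposes; the real SDKs add transport, JSON schemas, and auth.
TOOLS = {}

def tool(fn):
    """Register a function so a client can discover and invoke it by name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def add_subscriber(email: str) -> str:
    # A write action: a real server would insert a database row here.
    return f"subscriber {email} added"

# Discovery: the client lists available tools...
assert "add_subscriber" in TOOLS
# ...then invokes one with structured arguments.
print(TOOLS["add_subscriber"]("reader@example.com"))
```

This is what makes both the read actions (querying a database) and write actions (inserting records) Lu described possible from inside a chat session.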
Lu showcased two examples:
- In one, ChatGPT used an MCP server to visualise refugee statistics from Afghanistan using UN data. The model parsed the query, retrieved data across multiple years, and generated a chart — all within the chat interface.
- In another, ChatGPT connected to a Supabase database to query subscriber information and even insert a new subscriber record using natural language.
We are at a moment, Lu said, where we can “think about AI less as a search engine and more like a coworker that can take charge of whole workflows.”