Ippen Digital shares the why behind its AI-driven newsroom
Generative AI Initiative Blog | 18 February 2024
It’s not often that one meets an editor-in-chief who publicly declares that his aim is an AI-driven newsroom. These are not words that usually fit in the same sentence together. Data-driven, sure. But AI-driven?
Meet Markus Knall, editor-in-chief at Munich-based Ippen Digital. He likes the idea of having different versions of the same article for different audiences.
“I think versioning is one of the big strategic fields in AI for newsrooms,” Knall told me. “We have 50 different brands. In the perfect world, we would write the same story in 50 different tones — in brand-specific styles.
“Versioning is huge for us because our network is so huge. That’s what LLMs are good for.”
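Ippen has not published how its versioning pipeline is built, but the idea maps naturally onto a simple prompt template. Below is a minimal sketch, assuming a generic LLM call behind a placeholder `complete()` function and illustrative brand style descriptions; none of this is Ippen Digital's actual tooling.

```python
# Hypothetical sketch of brand-specific "versioning" with an LLM.
# The style guides and the complete() helper are illustrative placeholders.

BRAND_STYLES = {
    "brand-a.example": "sober, regional, fact-first tone",
    "brand-b.example": "punchy tabloid tone, short sentences",
    # ... one entry per brand, up to 50 in Ippen's network
}

def complete(prompt: str) -> str:
    """Placeholder for a call to whichever LLM the newsroom uses."""
    raise NotImplementedError

def version_for_brand(article: str, brand: str) -> str:
    """Rewrite one article in a brand-specific style."""
    style = BRAND_STYLES[brand]
    prompt = (
        f"Rewrite the following article in a {style}. "
        "Keep every fact, name, and figure unchanged; adapt only tone and structure.\n\n"
        f"{article}"
    )
    return complete(prompt)

def version_for_all_brands(article: str) -> dict[str, str]:
    """Produce one version per brand: the '50 tones' idea Knall describes."""
    return {brand: version_for_brand(article, brand) for brand in BRAND_STYLES}
```

In practice, the style descriptions would presumably come from each brand's editorial guidelines rather than a hard-coded dictionary.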
Knall also wants to try different versions of an article for Facebook, Twitter, Instagram, Google, a newsletter, and the app. He estimates they have up to 10 versions of an article right now, and this applies to both wire copy and enterprise journalism.
At the moment, Ippen offers readers the option to have articles summarised in bullet points or as a longer précis. In the app, a little magic wand appears that lets you choose your summarisation option.
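The article does not say how the summariser is built; conceptually, though, the two reader-facing options amount to two prompt variants over the same article text. A minimal sketch, again with an assumed placeholder `complete()` standing in for the actual model call:

```python
# Hypothetical sketch of the two summary options readers can pick from:
# bullet points or a longer precis. complete() is a placeholder, not a real API.

def complete(prompt: str) -> str:
    """Placeholder LLM call."""
    raise NotImplementedError

def summarise(article: str, mode: str = "bullets") -> str:
    """Summarise an article in the format the reader selected."""
    if mode == "bullets":
        instruction = "Summarise the article below as 3-5 short bullet points."
    elif mode == "precis":
        instruction = "Summarise the article below as a single concise paragraph."
    else:
        raise ValueError(f"unknown mode: {mode}")
    return complete(f"{instruction}\n\n{article}")
```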
“We want readers to choose the best option,” said Alessandro Alviani, who heads Ippen Digital’s 10-person, cross-functional AI team building these tools. “We want to make sure we can reach everyone.”
Knall wants to experiment with reporters using AI to write: “We try to imitate what an author would sound like even if he just uses AI to complete his text.”
Ippen is also working on English versions of its articles, has created a German transcription and summarisation tool to help its newsroom, and is building a chatbot that will pop up on articles and allow the reader to seek further information on the topics mentioned.
For images, AI is used in two different ways, Alviani said: “We are using AI to extract the most important information in an article and make sure that we have the opportunity to ping the picture databases from AFP or DPA, etc., to suggest to editors the pictures that fit the article best with a limited number of suggestions. Manual searching is very time consuming.”
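Neither the extraction step nor the agency integrations are described in detail, so the sketch below is purely illustrative: `extract_keywords()`, `search_afp()`, and `search_dpa()` are assumed placeholders for the real components.

```python
# Hypothetical sketch of the image-suggestion workflow Alviani describes:
# pull the key terms from an article, query wire-agency picture databases,
# and surface a small shortlist so editors avoid time-consuming manual searches.

def extract_keywords(article: str) -> list[str]:
    """Placeholder: an LLM or entity-extraction step that pulls the article's key terms."""
    raise NotImplementedError

def search_afp(keywords: list[str]) -> list[dict]:
    """Placeholder for a query against AFP's picture database."""
    raise NotImplementedError

def search_dpa(keywords: list[str]) -> list[dict]:
    """Placeholder for a query against dpa's picture database."""
    raise NotImplementedError

def suggest_images(article: str, limit: int = 5) -> list[dict]:
    """Return a short, ranked list of candidate pictures for the editor."""
    keywords = extract_keywords(article)
    candidates = search_afp(keywords) + search_dpa(keywords)
    candidates.sort(key=lambda pic: pic.get("relevance", 0), reverse=True)
    return candidates[:limit]
```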
Editors are also able to generate pictures with a couple of keywords. Ippen has clear editorial guidelines around this, Alviani said.
For video, ChatGPT writes scripts, which are then combined with a synthetic voice. Ippen now produces four times more video content than it did a year earlier with the same number of people.
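The plumbing behind this is not described, but the pipeline reduces to two steps: a script-writing call and a text-to-speech call. A hedged sketch with assumed placeholder helpers:

```python
# Hypothetical sketch of the video pipeline: an LLM drafts the script,
# a text-to-speech step produces the voiceover. Both helpers are placeholders
# for whatever services Ippen actually uses.

def write_script(article: str) -> str:
    """Placeholder: ask the LLM (ChatGPT, per the article) for a short video script."""
    raise NotImplementedError

def synthesise_voice(script: str) -> bytes:
    """Placeholder: render the script with a synthetic voice, returning audio."""
    raise NotImplementedError

def article_to_video_assets(article: str) -> tuple[str, bytes]:
    """Produce the script and voiceover an editor then pairs with footage."""
    script = write_script(article)
    audio = synthesise_voice(script)
    return script, audio
```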
There is another important project under way, Alviani said: “Fine-tuning our own LLMs using open source models. The idea is to become less and less dependent on OpenAI. It is really important to us. We see there is huge potential impact.”
The team is also creating additional tools to support editors and reduce the risk of hallucinations. “We have a project to determine thresholds for a tool to evaluate accuracy — the LLM will reprompt itself if results are below the threshold to reduce hallucination risks,” Alviani said.
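Alviani does not spell out how the evaluation works, but the threshold-and-reprompt loop he describes might look roughly like this, with `generate()` and `score_accuracy()` as assumed placeholders:

```python
# Hypothetical sketch of the threshold idea: score an LLM output for accuracy
# against the source text and reprompt when the score falls below a threshold.
# The real evaluation logic is not public.

def generate(prompt: str) -> str:
    """Placeholder LLM call."""
    raise NotImplementedError

def score_accuracy(source: str, output: str) -> float:
    """Placeholder: returns a 0-1 score for how well the output sticks to the source."""
    raise NotImplementedError

def generate_with_check(source: str, prompt: str,
                        threshold: float = 0.8, max_retries: int = 3) -> str:
    """Reprompt until the accuracy score clears the threshold (or retries run out)."""
    output = generate(prompt)
    for _ in range(max_retries):
        if score_accuracy(source, output) >= threshold:
            return output
        retry_prompt = (
            f"{prompt}\n\nYour previous answer contained claims not supported by "
            "the source. Rewrite it using only information from the source text."
        )
        output = generate(retry_prompt)
    return output  # best effort after max_retries; would be flagged for an editor
```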
INMA members who would like to subscribe to my bi-weekly newsletter can do so here.