
Ringier Axel Springer, DPG Media share insights into leveraging Generative AI

By Paula Felps

INMA

Nashville, Tennessee, USA


Generative AI is praised for its ability to provide answers, but for news publishers, it has often raised more questions. As the news media industry looks at how to harness its power and avoid its pitfalls, most companies are still working to understand it fully.

As lead of the new Generative AI Initiative, Sonali Verma is walking INMA members through uncharted territory: “Everyone is on the same journey, just at different points. For the most part, people are trying out low-risk experiments, low-hanging fruit, [and] they’re moving pretty fast because everyone wants to learn quickly,” she said during Wednesday’s webinar, “10 smart ways to use GenAI in the news business.”

“GenAI is a bit of a double-edged sword: It opens the door for us to reach new audiences because we can quickly convert to different formats and different languages in a way we haven’t been able to before. But [companies are] worried about the impact of Search Generative Experience.”

Lars Anderson, DPG Media; Sonali Verma, INMA; and Inga Apiecionek, Ringier Axel Springer provided an overview of how news companies can begin to use Generative AI.

Verma welcomed two presenters who are leveraging GenAI in their companies: Inga Apiecionek, AI solution manager at Ringier Axel Springer in Poland, and Lars Anderson, head of innovation at DPG Media in The Netherlands. Both said that their companies began working with AI in the newsroom but now are looking at how it applies to all departments.

Ringier Axel Springer’s AI principles

One key step for the company was to create its own principles for using AI, which included taking responsibility for the content it publishes, understanding the limitations of the technology, and respecting copyright and personal rights, Apiecionek said. The principles were supported by top management and shared with the entire company before it began engaging with AI.

Engagement followed a three-step process of informing, educating, and finally engaging. Education was provided via “workshops and playing together with a huge hackathon.” The company-wide hackathon helped teams across the organisation better understand the technology.

“We create each team as an interdisciplinary [one] to connect people from tech, product, and editorial in one place to share the knowledge from different angles,” Apiecionek said.

The company also introduced an AI implementation funnel to “build trust and engagement based on knowledge.”

Ringier Axel Springer introduced an AI Implementation Funnel.

 “We know that people are always afraid about something they don’t understand,” she explained. “So we created that framework and made a lot of places to share the knowledge.”

As employees became more comfortable with the technology, Apiecionek said, it was time to choose the best place to use it. The factors determining this included what would provide the highest value with the lowest risk and effort required.

For Ringier Axel Springer, a logical starting point was with quizzes.

It built a plugin for its CMS that allowed the story author to provide a topic for a quiz and choose the number of questions (and provide answers).  
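Conceptually, such a plugin is a thin wrapper around a single prompt to a ChatGPT-style model: the author supplies a topic and a question count, and the model drafts a quiz for review. The sketch below is a hypothetical illustration using the OpenAI Python SDK; Ringier Axel Springer’s actual plugin, prompts, and model are not public.

```python
# Hypothetical sketch of a quiz-generation helper; the actual CMS plugin
# described above is not public, and the model and prompt here are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def generate_quiz(topic: str, num_questions: int = 5) -> str:
    """Ask a ChatGPT-style model to draft a multiple-choice quiz on a topic."""
    prompt = (
        f"Write a {num_questions}-question multiple-choice quiz about: {topic}. "
        "For each question, give four options and mark the correct answer."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    # The draft is only a proposal; an editor reviews it before publication.
    return response.choices[0].message.content

print(generate_quiz("the history of the Olympic Games", 3))
```

In a real CMS integration, the draft would be returned as structured data for the author to edit rather than printed to the console.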

“The tool generated quizzes, and after the first test, the numbers were very high. It sped up working on quizzes and we were quite happy,” she said.

However, reporters were reluctant to use it, and Apiecionek said many feared this was the first step in replacing them. It was important to assure them that this was a tool to help them, not eliminate their jobs.

“The best idea [we had was] to build a bridge between editorial teams and people from tech and product,” she said.

One person was designated as the leader to connect tech and editorial teams. Apiecionek said this was an important step in editorial becoming more comfortable with AI, and it helped the tech teams understand the fears of the editorial side. Now, they work together.

Saving time with AI assistants

The first thing Ringier Axel Springer focused on with article content was reducing the time spent creating it. It created ChatGPT-based assistants for tasks such as generating story titles, SEO headlines, summaries, and tags. However, she reminded attendees, humans always have the final say.

“All of it is only a help; it’s a kind of proposal, and everything should be chosen by the editor. So the real responsibility is still on the man who is publishing this story.”
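For a rough sense of what such an assistant could look like under the hood, here is a hypothetical sketch that asks a ChatGPT-style model for title, SEO-headline, summary, and tag proposals as structured JSON. The function, prompt, schema, and model name are assumptions for illustration, not the company’s implementation, and the output is only a set of suggestions for the editor.

```python
# Hypothetical sketch of an editorial assistant that proposes headlines,
# a summary, and tags; the model, prompt, and schema are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def propose_metadata(article_text: str) -> dict:
    """Return draft titles, SEO headlines, a summary, and tags for an editor to review."""
    system = (
        "You draft newsroom metadata. Respond only with JSON of the form "
        '{"titles": [...], "seo_headlines": [...], "summary": "...", "tags": [...]}.'
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": article_text},
        ],
        response_format={"type": "json_object"},  # ask for machine-readable output
    )
    return json.loads(response.choices[0].message.content)

# The editor reviews the proposals and chooses (or rewrites) what is actually published.
suggestions = propose_metadata("Full article text goes here...")
print(suggestions["titles"])
```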

Leveraging AI means identifying where it can provide monetary value, and Apiecionek said that at Ringier Axel Springer, that value lies in content personalisation: “We found that personalisation of our Web page and providing the best content, the best article, for a given user is the best approach.”

By combining AI Web page segmentation with GenAI title recommendations, the company has seen its click-through rate (CTR) rise by 7% and total time spent per impression grow by 8%.

“Personalisation gives us real advantages,” Apiecionek noted, adding that the company has seen better targeting, up to a 50% increase in CTR, and boosted loyalty and time spent on the site.

“We are still testing, but we see that the new technology gives us a lot of advantages.”  

DPG’s GPT guidelines

One year ago, the brands within DPG Media — much like everyone else around the world — were experimenting with ChatGPT. However, no guidelines or guardrails were in place, so DPG created a small community to share information and ideas.

“We had people from advertising, marketing, sales, and obviously from the newsrooms,” Anderson said. But the group was small, and he said it was important to build a bigger community to bring in more insights and expertise.

“So we [created] this big AI community and … we had somebody who was very good at guardrails, somebody who was very good at legal privacy. Everybody who wanted to join this community could join.”

It also created a Slack channel that now has around 300 people “who are very enthusiastic about Generative AI and want to know everything about it and want to be inspired.”

DPG Media created an AI Community to share ideas and create guidelines for using AI.

Building on that, it created an AI Hub to validate hypotheses and experiment with ideas; if the experiment is successful, it goes back into the AI community for more development.

“These were the early ideas; from there, we created a core AI team. It now also has a lead [called] the head of generative AI transformation [who] focuses on strategy, technology, legal, and on knowledge sharing and training.”

Today, any person from any department can approach the GenAI team with an experiment or project idea. Not surprisingly, most of the projects are for the newsrooms, and Anderson said one of the first things they did was set guidelines for using GenAI.

These include checking the information provided by AI, being transparent about its use in text, images, and videos, and turning off chat history in ChatGPT so prompts are not used for training the model. From there, it created five more specific guidelines dealing with issues like trust, intellectual property, safety, and inclusivity and diversity.

As part of its exploration into AI, DPG is working with five PhD candidates conducting research on using AI in the media.

“This is one of the bigger projects that we are working on right now,” Anderson said. “These PhDs will focus on legal aspects of using AI in the media, explaining recommendations, synergies between platforms, network perspectives on content, and generating content. Everything that we learn in this project will go back to the knowledge, and we can use it within the company.”

Anderson also addressed some of the pitfalls of ChatGPT and GenAI in journalism, such as the possibility of reporters relying on false information for their stories and increased distrust in journalism because of the large error margin in AI models.

“We should be really careful as a news organisation [about] when we use AI, how we use AI — and be transparent about it,” Anderson said. “So we have major pitfalls, but we also have a lot of opportunities. I’m very optimistic about AI as a helping hand.”

