Schibsted is making audience gains with AI-driven personalised audio playlists
Generative AI Initiative Blog | 18 August 2024
Are you thinking about using GenAI for audio?
According to an impromptu straw poll during a recent INMA GenAI Webinar, almost everyone is either already trying it or contemplating it.
The reasons are many. Podcasts can be monetised well, and they typically help news brands reach younger, more affluent audiences, as Svenska Dagbladet’s head of product and UX, Ebba Linde, pointed out during the Webinar.
The Schibsted-owned brand recently started experimenting with GenAI audio after its Norwegian sister company, Aftenposten, found that articles read in its synthetic voice were effective at engaging readers.
Svenska Dagbladet tried using a cloned voice, but it did not work well in Swedish. Undeterred, it turned to manually recording a few articles a day to see if its readers would appreciate them.
“The results blew us away,” Linde said. Older and younger consumers alike provided positive feedback. The only negative comment was that they could not find where to access more audio. In addition, some users are not keen to keep pressing the “play” button, since their hands may be busy with another activity (such as driving or cooking).
So, Svenska Dagbladet built an AI-driven playlist that automatically queues up another audio article when one finishes. The technology also taps into user-intent data and is built on Schibsted’s in-house personalised recommendation engine for text articles. (For comparison, here is how Amazon Music and Spotify do it.)
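Schibsted has not shared implementation details, but the behaviour described here — when one audio article ends, ask the personalisation engine for a ranked list and play the top item the listener has not yet heard — can be sketched roughly as below. Every name in the sketch (AudioArticle, recommend_for_user, and so on) is a hypothetical stand-in, not Schibsted’s actual code.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an autoplay queue driven by a personalisation engine.
# None of these names come from Schibsted; they only illustrate the flow
# described in the article.

@dataclass
class AudioArticle:
    article_id: str
    title: str
    duration_sec: int

@dataclass
class Listener:
    user_id: str
    heard: set = field(default_factory=set)  # article_ids already played

def recommend_for_user(user_id: str) -> list[AudioArticle]:
    """Stand-in for the in-house recommendation engine: returns articles
    already ranked by predicted interest (user intent) that have an audio
    rendition available."""
    return [
        AudioArticle("a1", "Morning briefing", 180),
        AudioArticle("a2", "Long read: energy policy", 900),
        AudioArticle("a3", "Analysis: housing market", 600),
    ]

def next_in_playlist(listener: Listener) -> AudioArticle | None:
    """Pick the highest-ranked recommended article the listener has not
    heard yet, so playback continues without pressing 'play' again."""
    for article in recommend_for_user(listener.user_id):
        if article.article_id not in listener.heard:
            return article
    return None

if __name__ == "__main__":
    listener = Listener("u42", heard={"a1"})  # already heard the briefing
    up_next = next_in_playlist(listener)
    if up_next is not None:
        print(f"Up next: {up_next.title} ({up_next.duration_sec}s)")
```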
Svenska Dagbladet now uses the synthetic voice for some shorter pieces, such as the morning briefing, and human voices for longer reads, because it has found users still tend to prefer a human voice to the AI clone for longer listens, “and listeners are super happy.”
Here’s a look at another experiment: Couleur 3, a radio station in Switzerland that is part of the Swiss Public Broadcasting Corporation, let AI run its programming for 13 hours. The voices, the scripts, and even the music were created by AI.
“We wanted to have this as the starting point of discussion with our audience, to understand what it feels like to listen to radio that is made by a computer,” said station head Antoine Multone.
“We now use AI for a lot of tasks. It helps us with live transcription and with tagging content. We want to use it to distribute our content in a better way. … For any of these tasks, a human will check everything that was made by an AI, so we never leave AI by itself. We use it to go faster, and we use it to do things that we don’t have time to do.”
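Couleur 3 has not published code for this, but the human-in-the-loop pattern Multone describes — AI drafts a transcript and suggests tags, and an editor checks everything before it is used — has roughly this shape. All names below (AIDraft, human_review, etc.) are hypothetical illustrations, not the broadcaster’s tooling.

```python
from dataclasses import dataclass

# Hypothetical human-in-the-loop sketch: AI output is never used directly;
# an editor must review (and may correct) the transcript and tags first.

@dataclass
class AIDraft:
    transcript: str
    suggested_tags: list[str]

@dataclass
class ReviewedSegment:
    transcript: str
    tags: list[str]
    approved_by: str

def human_review(draft: AIDraft, editor: str) -> ReviewedSegment:
    """Stand-in for the editorial step: in a real tool the editor would
    correct the text and tag list in a UI before approving."""
    return ReviewedSegment(draft.transcript, draft.suggested_tags, approved_by=editor)

draft = AIDraft("...live transcription of the programme...", ["radio", "ai", "music"])
segment = human_review(draft, editor="duty editor")
print(segment.tags, "approved by", segment.approved_by)
```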
These applications come on top of previous audio initiatives we have written about at INMA. Based on AI companies’ investment in voice technology, we should expect these tools to only get better, according to Wharton Professor Ethan Mollick.
“I think that voice capabilities like GPT-4o’s are going to change how most people interact with AI systems. Voice and visual interactions are more natural than text and will have broader appeal to a wider audience,” Mollick said.
“The future will involve talking to AI.”
INMA members who would like to subscribe to my bi-weekly newsletter can do so here.