European media companies are leading the way with AI exploration
Smart Data Initiative Newsletter Blog | 16 November 2023
Hi everyone.
This week, I am raising my head a bit from looking at individual news organisations and their efforts building with brave new AI technology. There are certain trend lines — who is moving quickly and how our users may feel about this. And like all stories of innovation, it’s sometimes difficult to adjust our depth of focus to consider the minutiae of what we improve while also putting it in the more global context of our businesses and products.
So, this one is just one point of view — and there are many to be had on this, to be sure. Let me know what your own perspective is on this by hitting reply!
All my best, Ariane
In exploring AI applications, Europe is leading the pack
INMA’s data master class for the year just wrapped up, with four learning-packed modules that took us all around the world of data in the month of October. We made a longer stop at generative AI, as this year’s belle of the ball, and I want to share some of my thoughts around this.
One question I get very reliably is: “Who or where are the most innovative publishers on XYZ topic?” And the answer is often, well, “it depends.” But I am also struck by how often the answer is not who you would expect. It’s not necessarily the biggest publisher on the block or the one with the biggest tech organisation.
And let me tell you, with AI, this has come to be an especially common outcome. The biggest innovators, the ones embracing this newer technology and experimenting with it, stand out for being nimble and proactive, not for having the largest balance sheets.
The first thing I’ll say is: Look to the north — and specifically look to the north of Europe. There is a fantastic group of publishers in the Scandinavian countries, in Benelux, in Germany, in Switzerland, who are leading the pack: mounting experiments small and large, like Ekstra Bladet in Denmark; building exploration teams, like Aftonbladet in Sweden; relentlessly throwing things at GPT-4 to scale up automation, like Ippen Media in Germany; and building new products entirely from nothing, like Bonnier News in Sweden or Mediahuis in the Netherlands.
Now, before you accuse me of just crowing for the euros as a French person, I’ll remind you that I also carry a U.S. passport, and that I have been known to diss European publishers when it comes to attitudes around competition or regulation. I’m really an equal-opportunity giver of praise and criticism. Also, I looked long and hard when putting together the programme for the master class (months in the making!), searching for exciting things from LATAM, from the U.S., from Southern Europe or Asia.
And I’m not saying nothing is happening there, because, of course, there are interesting projects outside of European news organisations. I loved AP’s practical approaches to solving problems for smaller news organisations. But the volume and the ambition are really in the north of Europe.
In September, I attended the American edition of Newsgeist, the unconference Google convenes for journalists and technologists. A week prior, I had attended the European edition on Lake Maggiore. And one thing was striking: At the Euro Newsgeist, AI was *the* topic. For every time slot in the schedule, there was at least one, if not several, AI-related discussion sessions. Meanwhile, at the U.S. Newsgeist, there were not only fewer such sessions, but they were also more theoretical.
Perhaps I should clarify here that Newsgeist, as an unconference, has no set agenda until attendees arrive and co-create a schedule together. So, while the method here would hardly meet any criteria for a statistically significant sample, the agenda does in fact tell you something about what is on the minds of the participants.
And I don’t think it was that AI wasn’t of interest among North American publishers, but rather that the topics of trust and fake news — which have all but receded as concerns among European publishers — still got a significant amount of attention in the U.S.
There is some academic work on innovation and turning points. One avenue is incumbents competing more aggressively (see this 2017 paper from McKinsey). I would hardly call mega-groups like Schibsted or Axel Springer incumbents, of course, but in general, European publishers don’t have super heavyweights like a New York Times or a Bezos-backed Washington Post.
And it also reminded me of what happened with the growth of the Internet back in the 1990s and early 2000s: The U.S. got the Internet into the homes of regular Americans first, but Europe got DSL and other affordable fast Internet into more European homes much quicker.
Slower to start, but faster at rolling out the second and third generations (still true with fiber Internet). I’m not directly comparing this to the practices of news organisations when it comes to data and Artificial Intelligence, because there are very different forces in play. But this is all to say that there are many other instances where a slower start ends up fueling a more efficient wave of innovation in a later phase.
One thing that we know, too, is that Europeans have historically looked at automation and Artificial Intelligence more favourably than Americans:
Back in late 2020, the Pew Research Center found that 41% of Americans looked favourably at automation and 47% at the development of Artificial Intelligence, both weaker than in Northern European countries: The same figures were 66% and 60% in Sweden, and 48% and 47% in Germany.
Polled along similar lines in April 2023 (but not country-by-country), Americans were still more tentative than Europeans, albeit by a smaller margin: 66% of Europeans said they felt AI required careful management, compared with 71% of Americans.
Your users are watching you — proceed carefully
But there is another side to this coin, which is, of course, our users: readers, viewers, listeners, whatever they are to you. For this, my colleague Greg Piechota, who leads the Readers First initiative at INMA, happened to send me a very interesting academic study from the University of Zurich on how Swiss news consumers approach AI in the news.
The big takeaway is that in the same territories where publishers are warmer toward AI, the users are significantly less warm: “Just under a third (29%) of respondents say they would read news items written entirely by AI. In comparison, 84% would read texts written by journalists without the use of AI,” the study notes.
Readers respond differently depending on how “serious” the topic feels: Celebrity gossip fares better than politics.
And, the study notes, “There is broad consensus that AI-generated (87%) or AI-assisted (83%) content should be transparently declared as such by the media.”
There is likely — and the study notes this, too — something here that reflects the many headlines underscoring the issues of generative AI and hallucinations. Publishers are well aware of these issues themselves, but our audiences may be anticipating problems on the assumption that publishers are not being sufficiently careful. And pretty much every publisher I have spoken with in the past few months has no plans to let ChatGPT freelance without an editor to oversee the work.
But this is why, particularly as publishers continue experimenting with these technologies, we probably have to spare no expense in reminding our users just how much oversight, and how many humans-in-the-loop, are involved wherever we leverage AI tools.
Users don’t feel less safe because we’ve automated much of the production chains that make their cars. In fact, automation adds reliability to the production of these complicated devices. But in large part, this is also because there is a host of certifications and regulations reassuring customers that automation did in fact raise the bar on quality, even if fewer humans are involved.
There is an analogy there for publishers as they bring additional automation into the shop: We need to remember that these efforts have to come with reassurance for our audiences that automation improves the product, and we need to be ready to prove it.
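To make that concrete in a small way, here is a purely illustrative Python sketch of the kind of guardrail a newsroom system could enforce. The `Draft` class, the `publish` function and the label wording are all invented for this example and do not describe any publisher’s actual setup; the point is simply that AI-assisted copy cannot go out without a named human reviewer, and that it carries the kind of transparent disclosure the Zurich study’s respondents ask for.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    """A hypothetical article draft moving through an editorial pipeline."""
    headline: str
    body: str
    ai_assisted: bool = False           # any generative-AI involvement at all
    reviewed_by: Optional[str] = None   # the human editor who signed off, if any

def publish(draft: Draft) -> dict:
    """Refuse to publish AI-assisted copy without a recorded human sign-off,
    and attach a disclosure label so readers can see how the piece was made."""
    if draft.ai_assisted and draft.reviewed_by is None:
        raise ValueError("AI-assisted drafts need a human editor's sign-off before publication.")
    disclosure = (
        f"This article was produced with AI assistance and reviewed by {draft.reviewed_by}."
        if draft.ai_assisted
        else "This article was written without the use of AI."
    )
    return {"headline": draft.headline, "body": draft.body, "disclosure": disclosure}

# Example: this draft would be rejected until a reviewer is recorded.
draft = Draft(headline="Election explainer", body="(copy goes here)", ai_assisted=True)
draft.reviewed_by = "Jane Doe"  # invented name; this is the human-in-the-loop step
print(publish(draft)["disclosure"])
```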
Further afield on the wide, wide Web
Some good reads from the wider world of data:
- Bless INMA’s editor for allowing this fresh-from-the-Internet article way past my deadline: Ed Newton-Rex is (was) the VP of audio at Stability AI and wrote an op-ed about why he just quit his job there: He disagrees with his company’s position that training AI on copyrighted material without a license falls within Fair Use. Filed under “Relevant to our interests” (Musicbusinessworldwide.com)
- We know that our AIs are biased — just like humans are biased — but sometimes it’s great to have a sober moment when our biases are staring right back at us: This is how AI sees the world (The Washington Post).
- Fake News, but make it the compounded errors of AI and bad captions: Adobe is selling AI-generated pix of the fighting in the Middle East, which end up being used around the Internet without this context. This isn’t quite as insidious as “regular” fake news, but it raises questions about how to make sure such images come with unambiguous markings (Crikey.com.au).
- … Also from the book of “a series of unfortunate events”: a cautionary tale from when Microsoft’s AI embedded a “crass” poll next to a Guardian article. No one was happy (NYT).
- Do you remember the mad cow disease crisis? Mad cow disease leads to a variant of the fatal Creutzfeldt-Jakob brain disease in humans, and it traumatised Europe in the late 1990s. One of the causes of mad cow disease was the kind of feed given to cows, which included bovine products. If forcing animals to self-cannibalise feels recursive and against nature, well, consider that potentially the same is at play with the way LLMs can learn. Now that I’ve connected two totally different things in the same idea, you can read a post from Sesh Iyer at BCG X, who writes about MAD (model autophagy disorder), the issue of models feeding on their own content creations; a toy sketch of this self-consuming loop follows this list (Sesh Iyer via LinkedIn).
- A delightful Insta reel for you, where the actress Julia Louis-Dreyfus delivers the speech that she asked ChatGPT to write for her acceptance of The Wall Street Journal Innovator award. I’d hate to spoil it, but let’s just say that ChatGPT really leaned into the words “Wall Street” here. There is also only one actress named Julia in all of Hollywood. In related news: Journalists, your job is safe (Instagram).
- And now, I hope you enjoyed that easy Insta clip because I saved the meatiest one for last. And I will say, I struggled a bit with it because I’m not a politics hound, and this article is full of names of Washington insiders and government groups, which was a lot to keep track of. I almost asked ChatGPT to summarise, but then I remembered that ChatGPT is a bit of a liar, so I just had to focus. But if you can swing it, it’s great: a profile of Bruce Reed, President Joe Biden’s advisor on AI. While it’s a profile of Reed, it’s actually a history of Washington’s dealings with Silicon Valley on privacy, online platforms and, now, AI. It is particularly interesting that while much comes out of, say, the EU on these topics, much less comes out of the U.S. But, as the article tells the story, we’re really looking at the ways the U.S. has attempted at times to take on these issues, but then ended up walking away from them (Politico).
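Since the self-consuming loop in the mad-cow item above can sound abstract, here is a tiny, hedged Python sketch of the general idea under my own assumptions (it is not the experiment from the LinkedIn post): a toy “model,” just a Gaussian fitted to data, is repeatedly re-trained on samples drawn from its own previous output. Run over many generations, the fitted statistics wander away from the original data, and in the long run the variance tends to collapse, which is the flavour of degradation the model-autophagy literature describes.

```python
import numpy as np

rng = np.random.default_rng(42)

# Generation 0: the "real" data the first toy model is trained on.
data = rng.normal(loc=0.0, scale=1.0, size=200)

print("gen   mean     std")
for generation in range(31):
    # "Train" the toy model: fit a Gaussian to whatever data we currently have.
    mu, sigma = data.mean(), data.std()
    if generation % 5 == 0:
        print(f"{generation:>3}  {mu:+.3f}   {sigma:.3f}")
    # The next generation never sees the real data: it trains only on
    # samples generated by the previous model (the self-cannibalising step).
    data = rng.normal(loc=mu, scale=sigma, size=200)
```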
About this newsletter
Today’s newsletter is written by Ariane Bernard, a Paris- and New York-based consultant who focuses on publishing utilities and data products, and is the CEO of a young incubated company, Helio.cloud.
This newsletter is part of the INMA Smart Data Initiative. You can e-mail me at Ariane.Bernard@inma.org with thoughts, suggestions, and questions. Also, sign up to our Slack channel.