Measuring generative AI tools gives media companies a sense of their ROI

By Ariane Bernard

INMA

New York City, Paris


Hi everyone.

“But does it make money?” 

At some point, we have to be able to answer this question if we want to continue investing in our data department and applications. And for generative AI, still young and fledgling, this question is coming up a lot.

So, this week, we will use an example from Aftonbladet in Sweden to think about “does it make money,” even if we don’t have a P&L item to show for our investment.

I’ll be out of pocket for our next newsletter, but I’ll look forward to seeing you in four weeks’ time. In the meantime, check out the programme of the data master class series in October. Lots of excellent things coming our way, and something to look forward to as well.

All my best, Ariane

How are you going to measure the impact of generative AI on your business?

Just last week, I was chatting with a non-publisher who was asking me whether I knew of cases where generative AI was making money for publishers. Now, tools are really of two kinds: tools that scale humans and tools that extend humans. Those that scale us let us do more of the things we could already do by hand. Those that extend us let us reach things we cannot directly obtain on our own.

Generative AI as a broad field is mainly a scaling tool — at present. And scaling tools tend to have an economic impact that we can more easily see in efficiencies than in creating new items in our P&Ls.

To sidebar this for a second, because this is a bit abstract: I am a longtime builder of tools, so I have thought about this a lot over the years. It’s really difficult to measure the impact of certain tools, but measure we must to know whether we are using our investment wisely. So if you are on the strategic side of the publishing business — thinking about how you’ll measure the impact and results of any generative AI investment — you have to know whether you’ll be looking at scaling humans or extending them, because you won’t see your dollars in the same way.

A great example to identify the difference is robots that cut bars of steel. Humans could do this by hand but, of course, it takes a long time, lots of people, etc. This is a scaling technology. Meanwhile, lasers are a tool that extends humans: Humans cannot, by hand, do what lasers can do. Lasers opened entirely new ways to do things, extending humans.

There are two types of generative AI tools that make money for news publishers: those that scale humans and those that extend humans.

By the way, at present, generative AI is going to be mainly a scaling tool, but as a technology matures, it often adds tools of the other type. You can think of the bike → car → airplane → spaceship continuum. Bikes are, arguably, still scaling tools because a bike only multiplies our physical ability by a small factor. But you certainly wouldn’t run that logic all the way to spaceships.

So, for now, look at how to measure your efficiencies to understand the impact of your generative AI investment. When our non-publisher asked me where generative AI was making money for publishers, I certainly didn’t know of many examples. But in fact, scaling tools do make money — they just don’t have a well-constructed line item in a P&L.

Let’s look at an example of generative AI having a clear positive final impact for a publisher, but not in a manner that would create that well-constructed P&L line item. 

One of the areas of tension between newsrooms and their users is often the length of articles: It’s too long or maybe there’s an anecdotal lede and the main point is stated several paragraphs under the scroll. Meanwhile, one consistent complaint you’ll hear from journalists doing original reporting is how much they despair over their articles being cribbed by other publications and rewritten with, perhaps, a link back to the original source.

“When you do an audience survey, [users tell you] they want an overview,” said Martin Schori, the deputy managing editor of the Swedish daily Aftonbladet, in a call with the Nordic AI alliance a few weeks ago. “They want to be able to read maybe a short summary or the whole story. So they wanted alternatives. And we, journalists, we tend to write long articles about everything.”

At Aftonbladet, the team looked at using OpenAI’s API to produce summaries presented as three bullet points and to see how this might move the needle. The bulleted summary is present everywhere, but users have to decide whether to expand it and read it.
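For readers who want to picture what this looks like in practice, here is a minimal sketch of producing a three-bullet summary with OpenAI’s API. The model name, prompt wording, and the summarise_article() helper are my own illustrative assumptions, not Aftonbladet’s actual implementation.

```python
# Minimal sketch: generating a three-bullet summary of an article with OpenAI's API.
# Model choice, prompt wording, and the summarise_article() helper are illustrative
# assumptions, not Aftonbladet's actual pipeline.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarise_article(article_text: str) -> str:
    """Return a three-bullet summary of one article."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any capable chat model works
        messages=[
            {
                "role": "system",
                "content": (
                    "You summarise news articles in exactly three short bullet points, "
                    "in the article's own language, without adding information "
                    "that is not in the text."
                ),
            },
            {"role": "user", "content": article_text},
        ],
        temperature=0.2,  # keep the summary close to the source text
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(summarise_article("<full article text goes here>"))
```

The summary would then be stored alongside the article and rendered as a collapsed “quick version” element that the reader chooses to expand.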

“The data says that this has been actually very positive,” Martin said. “Around 30% are clicking to see the quick version to expand it. And among young people, it’s especially high. It’s around 40%.”

But, Martin noted, one of the perennial fears of summarisation from reporters is that it takes away from their full reporting. 

“What surprised us was that it actually leads to more people reading the whole article,” Martin said. “And maybe it’s because they will get a better understanding of what this article is about, but it’s kind of peculiar because first, obviously, we have the headline and then we have a lead and then we have the quick version just below the lead.”

Yet, he notes, this does lead more people to read, overall. 

While I’m here: You should join us for our next webinar on September 6. I will discuss this in more detail with Vebjørn Nevland, a data scientist at VG of Norway (a sister publication in the Schibsted group), which originated this work.

But I loved hearing this example because it is a great illustration of a generative AI application where everyone benefits: Users are clearly finding the bulleted summaries engaging, and reporters (and therefore their publishers) are getting higher engagement all around, including for the full-length content that was the newsroom’s original vision.

It also highlights one dimension of making derivative use of the content we produce: In the end, it multiplies our audiences rather than taking away from the original.

If you’ve been around long enough, you probably remember some of the earlier anxieties newsrooms had around live blogging when the newsroom also planned a polished story for print. Would anyone care to engage with this polished story if a live blog had preceded it? And it turns out, of course — hindsight being 20/20 — that a live blog does not cannibalise a polished story’s audience. The multiplicity of approaches (or formats) is, within reason, a pathway both to reinforcing an existing audience and to finding a new one.

This takes us back to other experiences I am hearing about with generative AI technologies: 

There’s text-to-voice, which allows an audience to consume content in spaces where text may not be an option (to say nothing of users who may prefer audio for accessibility reasons or simply out of personal preference). This is a scaling tool since, of course, we don’t need AI to read articles aloud (but reading 300 articles a day with no humans involved is certainly more efficient).

There are also translation applications for your content. Again, humans could do this too, but you’ll meet new audiences for a very cost-efficient investment. And this can be done at scale, including for content that would simply go untranslated if you had to do it by human power alone.
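To make the scaling point concrete, here is a small sketch of what bulk translation could look like with the same kind of API. The model name, target language, and the translate() helper are again illustrative assumptions rather than any publisher’s actual pipeline.

```python
# Minimal sketch: batch-translating a backlog of articles with an LLM API.
# Model name, target language, and the translate() helper are assumptions for
# illustration, not a specific publisher's pipeline.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def translate(article_text: str, target_language: str = "English") -> str:
    """Translate a single article into the target language."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        messages=[
            {
                "role": "system",
                "content": f"Translate this news article into {target_language}. "
                           "Preserve names, quotes, and paragraph breaks.",
            },
            {"role": "user", "content": article_text},
        ],
        temperature=0.0,  # favour faithful, repeatable output
    )
    return response.choices[0].message.content

# The "scale" part: content that would otherwise go untranslated can be
# processed in bulk, e.g. translated = [translate(a) for a in articles]
```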

No new P&L item. But certainly new coins in your house in the end. 

Further afield on the wide, wide Web 

Recently, I shared with you some great reads for your summer, most of which were of the long-read, magazine type. Well, I have another one of these, and to use the proper term to describe it, it is the bomb — a very long explainer of how LLMs work. It’s worth reading in several sittings if that’s what it takes to really absorb it.

About this newsletter

Today’s newsletter is written by Ariane Bernard, a Paris- and New York-based consultant who focuses on publishing utilities and data products, and is the CEO of a young incubated company, Helio.cloud

This newsletter is part of the INMA Smart Data Initiative. You can e-mail me at Ariane.Bernard@inma.org with thoughts, suggestions, and questions. Also, sign up to our Slack channel.

