Media should go cautiously but excitedly toward generative AI

By Ariane Bernard


New York City, Paris


Hi everyone.

I come to you with the satisfaction of the person who turned in their work on time (mostly? INMA’s editor may scoff) and a report on generative AI now finished. You’ll see it very soon! Not so great news on the front of my taxes but “April is the cruellest month” …

So for this week, I am sharing the interview I did with Charlie Beckett, the director of Polis at the London School of Economics, where the Journalism AI project is based. I interviewed Charlie for this report, but instead of keeping this to myself, I’m sharing the whole thing with you. 

Our next post will probably have bits from the report.

Til later, Ariane

My interview with Charlie Beckett

I spent 40 minutes recently talking with Charlie Beckett, director of Polis at the London School of Economics. The beginning of our interview follows. And if you’d like a deep dive, here’s my mostly unedited interview with him.

Charlie Beckett on generative AI: "At some level ... it's just a form of autocomplete. All it does is predict the next word. It doesn't know what it's talking about."

Ariane: You have a unique perspective because you’re a media specialist, but you’re not trying to sell technology and you’re not a publisher. So from your perspective, what is urgent for a publisher to get on when it comes to generative AI?

Charlie: I can talk about this without a personal stake. Everything I say is repeating what other people have said to me. So I’ve been listening … and I think there's a lot of really interesting speculation at the moment. Much of it is kind of fictional. A little bit is kind of ideologically or imaginatively driven. And it’s kind of racing ahead of the technology. And that’s not a bad thing. 

But I would say the first thing that people should do, drawing upon the last five years of us looking at this, is just get across the basics. Before you start thinking about ChatGPT or Midjourney or anything like that, just wind right back and make sure that you’ve got some sense of what we’re really talking about. 

I don’t mean that in a patronising way because I don’t know. I’m not a technologist. I spent the last five years trying to get across and understand the concepts and so on. But leaping off the top diving board is not the way to proceed. And it’s not just about you personally — it doesn’t matter whether you’re the CEO or whether you’re a foot soldier journalist. Everyone just needs to get across some basics about it because Artificial Intelligence does not exist. It’s a terrible term. 

And I’m cursed, I think, to have a project called JournalismAI because there is no such thing as intelligent, artificial computer programming. It’s all much more mundane and much more various as well. It’s everything from, you know, kind of machine learning to automation to personalisation. We use it every day — when we do search, for example. We’ve been doing that for about a decade. 

So this isn’t something completely new. And at some level, even with the new generative AI, it’s just a form of autocomplete. All it does is predict the next word. It doesn’t know what it’s talking about. It has no … intelligence. It doesn’t know any kind of concept, let alone emotions or feelings or insights at all … [it] is purely programmed by humans. And then, as you know very well because you know more about the technology, it’s become exponentially good at doing this, learning from the data and being able to give the appearance of being able to do quite extraordinary things — and that’s what’s different. 

I think about generative AI. I am very excited about it. I think that its creative potential for augmenting journalists is amazing. But first of all, before we get there, make sure that you and ideally your colleagues [know] the basics … been linked to the basic training courses that we do. So you’ve got to walk before you can run.  

And the second thing I would say about this sort of generative AI … is don’t do anything with it. Or at least don’t do anything that is direct to publishing with it. Don’t do anything that risks the quality of your editorial processes. And that could be from news gathering right through to publication. At the moment, [it] is much too volatile. It’s much too risky to do anything that goes to print. 

Of course, I would absolutely recommend that you experiment with it. That you start playing with it. I would suggest you do not write checks to people to buy tools and programmes yet — not until you’ve been through a process of thinking through: What problem does it solve? Does it do it any better than what you had before? And do you understand how it works? If you can’t say yes to all those things, then why are you doing it? 

So I mean, we said this about AI generally. And to be honest, I think it applies to many technologies and tools and processes. Frankly, you know, you wouldn’t hire a fantastically clever intern, age 19, and make them editor of your publication just because of their potential and because they sound brilliant. You’d want to know what they could really achieve and how they fit into your processes, and also how your processes might have to change. You know, I think the implications of AI generally — and of generative AI especially — are probably even bigger.

The last bit I’d say is that journalism is different. I think this set of technologies has got risks and opportunities … . You know that old adage about technological change — we always exaggerate the short term and underestimate the long term — I think is really, really true of this one in terms of journalism. Journalists have gone around saying, “Wow, this thing is inaccurate. I gave it a really clever, weird prompt and it got it wrong.”

And: this is going to take over all our jobs, it’s going to end the world — literally, you know, they’re talking about the end of the world. No, certainly not overnight. But longer term, I think it may well reshape the nature of news information and the industry and the way that we work, just as it’s going to have interesting implications for everything from health, you know, to retail to security, etc.

Further afield on the wide, wide Web

Some good reads from the wider world of data. This week: 

• My worlds colliding in the best of ways: Taylor Swift + generative AI (fair warning, the Swiftie content around here is going to become unbearable as I get nearer to my concert date). Anyway, with that disclaimer out of the way: The Atlantic reports on the use of synthetic voice generation to create voice clips from folks like Taylor, and on the response from the AI companies that enable it. (Paywall link)

• Sam Altman, the CEO of OpenAI, chatted with Kara Swisher on her podcast hosted by New York magazine. Of note, he discusses the peculiar kind of corporate structure OpenAI operates under (it caps investor earnings, which is interesting). Meanwhile, in a super interesting profile from The New York Times (gift link), I learned that Sam Altman holds no shares in OpenAI and makes $65K a year in salary. Things look different when you have a limited financial stake in the game. (I’ll be honest, this last link is the most interesting this week — yes, more than the Taylor Swift AI story.)

• The Guardian interviewed the technologist Jaron Lanier for his all-encompassing perspective on AI. Because I was lazy, I asked ChatGPT for her summary: 

“The article features an interview with technology expert Jaron Lanier, who argues that the real danger of artificial intelligence is not that it will destroy humanity, but rather that it will drive us insane. He suggests that AI, if left unchecked, could exacerbate existing social and economic inequalities and lead to widespread mental health problems, including addiction and depression. Lanier also stresses the need for greater regulation and accountability in the development and deployment of AI technologies, as well as the importance of prioritizing human dignity and well-being in these endeavors.”

And this is a correct summary, but what it doesn’t capture is all the commentary on the Internet today in general, and on being a human living with computers all around us (awesome analysis of Twitter feat. Trump, Kanye, Musk). 

So, read for the AI perspective, stay for the general commentary.


About this newsletter

Today’s newsletter is written by Ariane Bernard, a Paris- and New York-based consultant who focuses on publishing utilities and data products, and is the CEO of a young incubated company.

This newsletter is part of the INMA Smart Data Initiative. You can e-mail me with thoughts, suggestions, and questions. Also, sign up for our Slack channel.
