Generative AI in journalism: Publishers are in the driver’s seat

By Cecilia Campbell

United Robots

Malmö, Sweden


ChatGPT was released in beta back in November 2022. Since then, there’s been a phenomenal amount written about generative AI and how it will affect journalism and the media industry.

The discussion tends to be pretty black and white: Is AI saviour or enemy? Will it make or break the news industry?

I suggest this is the wrong question. I think we should instead take a step back and ask: What can generative AI do for journalism, and what can’t it do? And — most importantly — what role should people play in this process?

AI can improve work processes in the media industry, but it cannot produce journalism.

After seven years of providing automated articles to newsrooms (using a different type of AI), we at United Robots have heard it all before: the fears of robots stealing jobs, of factually incorrect, untrustworthy content written in robotic language.

So, given United Robots’ unique experience in this field, here is our take on generative AI in the context of journalism.

Let’s use ChatGPT as the starting point. After all, the free access to its beta version is what set off the recent furore. I can’t stress enough that ChatGPT is just a tool: a brand new, powerful tool with huge scope, but a tool nonetheless. It does not change the guiding principles of journalism, which is a fundamentally human activity.

Of course, this type of AI can be used for nefarious ends, but so could the printing press. We are in the business of journalism, and we should work out how the new tools can help us do that even better — as well as identify what risks may be involved.

In mid-January, Futurism broke a story that perfectly illustrates the latter: publisher CNET had been using AI to write short financial articles without being open about it. Several aspects of this story shine a bright light on the choices publishers have, irrespective of what type of AI they use:

  • Transparency: We always recommend that AI-written articles carry a byline making it unequivocally clear that they were written by a robot, not a reporter. Transparency is critical internally as well as externally, and it is key to trust.
  • Accuracy: It goes without saying that any content published on a journalistic platform needs to be correct and reliable, whether it’s a ground-breaking investigative piece by a seasoned journalist or a small article about a local football match or financial news. AI tools always need to be controlled by journalists. And if you’re going to auto-publish AI-generated texts, you cannot use generative AI tools like GPT-3/ChatGPT (see the fact box for an explanation).
  • Trust: The issue of trust really encompasses both of the above. Trust is the currency of journalism, and any deployment of new tech tools must leave no room for people to question the integrity of a publication.

Having said that, we’ve found that readers are generally happy to embrace robot-written content, as long as the information is valuable to them and clearly labelled.

If a publisher asked, “What does generative AI mean for our business?” I’d ask back: “What do you want it to mean? The AI is not in control; you are.”

I would advise publishers to keep focussing on delivering solid, valuable journalism and use generative AI tools where they are helpful in this mission.

Charlie Beckett, director of the JournalismAI project at LSE, put it perfectly in a recent podcast: these tools cannot ask critical questions or work out the next step in investigating a story, but they can support journalists in doing that work.

“But I think it’s even more interesting how it puts a kind of demand on those journalists, saying OK, you’ve got to be better than the machine. You can’t just do routine, formulaic journalism anymore because the software can do that.”

We’re only at the beginning of exploring how generative AI can support the business of journalism. Trying out ChatGPT is easy. Working large language models into robust and useful processes within a publishing business will be considerably harder.

It will be crucial to keep a razor-sharp focus on the use cases you want the tech to serve, and not get side-tracked by its inherent capabilities.

At United Robots, we’re testing a number of possible uses for large language models, including prompting them to turn text into structured data (our “raw material”), an approach also being explored elsewhere. It’s early days and there are lots of opportunities; the measurable use and value we can derive from this tech will ultimately determine how we deploy it.
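
To make that concrete, here is a minimal sketch of what such a text-to-data prompt might look like, assuming the OpenAI Python client. The match report, field names and model choice are illustrative assumptions for this article, not our actual pipeline.

    import json
    import openai

    openai.api_key = "YOUR_API_KEY"  # assumption: supplied via environment or config in practice

    # A hypothetical local sports report: the kind of text we might want as data.
    match_report = (
        "Malmö FF beat Hammarby 2-1 on Saturday in front of 19,500 fans "
        "at Eleda Stadion."
    )

    prompt = (
        "Extract the following fields from the football report below and return "
        "them as JSON with the keys home_team, away_team, home_goals, away_goals, "
        "venue and attendance.\n\n" + match_report
    )

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output suits data extraction
    )

    # A production pipeline would validate the output before trusting it;
    # language models do not guarantee well-formed JSON.
    data = json.loads(response["choices"][0]["message"]["content"])
    print(data)  # e.g. {"home_team": "Malmö FF", "away_team": "Hammarby", ...}

The low temperature and the explicit list of keys are there to push the model towards predictable, machine-readable output rather than free-flowing prose, which is exactly the control a journalistic workflow requires.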

Good journalism is about people: those who produce it and those who consume it. It’s about the unique work and voices of great reporters, which can’t be replaced by ChatGPT. It’s about meeting the needs and expectations of readers in a way that differentiates your publication from others. Large language models are not able to work out what your unique product should be.

AI can help improve our work processes, but it cannot produce journalism. Publishers are in the driver’s seat.

