
Newsrooms should consider 3 insights as they experiment with generative AI

By Ariane Bernard

INMA

New York, Paris


These days, I have the good fortune to hear about so many interesting projects kicking off using generative AI, at publishers of all sizes and levels of experience with this type of technology.

And this is not just exciting because allowing a thousand flowers to bloom will help this industry identify where this type of technology can be most helpful at its current stage of advancement, but also because the spirit of experimentation itself is an excellent muscle to build. <Loudspeaker announcement> Oh, while I am here, please do write to me to share your current experiments so I know to keep watch. </Loudspeaker announcement>

Whenever we experiment, though — and this point isn’t just about data, of course — we also lean on another muscle: change management. And the chat I recently had with Ernest Kung, a product manager at The Associated Press who has been focusing on AI projects, gave me some really useful perspective on what this may look like for projects that involve generative AI.

Ernest and his colleagues at AP are working on five different projects using AI to support small, local newsrooms, with support from the John S. and James L. Knight Foundation. Except for the project in Puerto Rico, which uses NLP and structured data, the four others are likely to use generative AI technology.

These are the five projects, listed verbatim from The AP:

  • Automated writing of public safety incidents into the content management system of Minnesota newspaper Brainerd Dispatch.

  • Publication of Spanish-language news alerts using National Weather Service data in English by the newspaper El Vocero de Puerto Rico.

  • Automated transcription of recorded videos and summarising the transcripts to create an article’s initial framework at San Antonio, Texas, television station KSAT-TV.

  • Sorting of news tips and coverage pitches from the public and automatically populating them into the coverage planner of Allentown, Pennsylvania, television station WFMZ-TV.

  • Expanding the Minutes application, which creates transcripts of city council meetings, to include summarisation, keyword identification and reporter alerts, for staff at Michigan Radio’s WUOM-FM at the University of Michigan.

The projects are currently being worked on, so this newsletter isn’t a case study of them (yet — come back in a few months!), but they present an excellent range of ideas for where generative AI can support journalism by improving the productivity of daily, tedious tasks. They are also ambitious enough to have a shot at making visible improvements for humans, but not so ambitious as to be unachievable.

When we spoke and took stock of how the projects were doing, Ernest shared three insights I thought were absolutely worth your time as you consider your own generative AI projects.

1. Your generative AI project may be challenging for reasons that are entirely unrelated to generative AI

There are already a good number of applications of Natural Language Processing (NLP) that transform speech into text, and this has long been used for transcription. Naturally, this is central to the project AP is running with Michigan’s WUOM-FM radio to transcribe and further process city council meetings.

But the environmental factors of the city council meetings are a large variable in the project — not so much because transcription technology may not be up to the task, but because the underlying audio gathered in this kind of setting has proven very challenging. Painting a picture for me of what the room, the mic setup, and the participants may look like, Ernest noted that there was a hard-edged limitation at play when it came to the audio recording.

Now, there are also various AI-powered tools that will clean up audio, and I’m sure spy agencies have some even better tools we know nothing about to reprocess weak audio. But somehow, they don’t license their tech. The reality is, in this day and age, we have a certain bar of audio quality to clear for speech-to-text to be an option.

2. Humans give automation a much higher bar of quality to clear than they give human labour

Ernest was telling me about the project with WFMZ-TV in Pennsylvania where an AI-powered app aims to classify incoming e-mails from the public to create a coverage calendar. 

There is a training component to this project because, as Ernest noted, what is newsworthy is highly dependent on the specific news organisation using the feature. You can think of the approach as somewhat similar to training a spam filter.
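To make the spam-filter analogy concrete, here is a minimal sketch of that style of text classification — a tiny Naive Bayes model trained on hand-labelled tips. The example tips, labels, and word-counting approach are all illustrative; this is not AP’s actual implementation, just the general technique the analogy points at:

```python
from collections import Counter
import math

def train(labelled_tips):
    """Count word frequencies per label from (text, label) pairs."""
    word_counts = {"newsworthy": Counter(), "not_newsworthy": Counter()}
    label_counts = Counter()
    for text, label in labelled_tips:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Naive Bayes: pick the label with the highest log-probability."""
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        # Prior: how common this label is in the training data.
        score = math.log(label_counts[label] / total)
        vocab = sum(word_counts[label].values())
        for word in text.lower().split():
            # Add-one smoothing so unseen words don't zero out the score.
            score += math.log((word_counts[label][word] + 1) / (vocab + 1))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Hypothetical hand-labelled tips from a newsroom inbox.
tips = [
    ("house fire on main street last night", "newsworthy"),
    ("city council votes on new budget", "newsworthy"),
    ("buy cheap watches online now", "not_newsworthy"),
    ("congratulations you won a prize", "not_newsworthy"),
]
wc, lc = train(tips)
print(classify("fire breaks out near main street", wc, lc))  # → newsworthy
```

As with a spam filter, the model is only as good as the labels each newsroom feeds it — which is exactly why what counts as “newsworthy” has to be trained per organisation.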

Ernest noted there was a high bar of quality to clear for the newsroom team to be able to rely on the new system. If there’s as much labour in fixing or verifying the material, does it help anyone? And this is true in general for any automated system replacing human labour: We’re less tolerant of approximation from machines (to wit: humans get into car accidents, but self-driving cars won’t take over our streets until their accident record is far closer to zero than ours would ever be).

This reminded me of a few projects I have worked on that involved algorithmic sorting — and how humans react when the sorting is incorrect. One example of such a system is Perspective, the Google API that scores comments for toxicity. News organisations that use this system decide above what score they want to automatically block a comment and below what score they want to automatically publish it. Comments whose score falls in between are left for human moderators to handle.

In such an approach, the automation has diminished the size of the original problem. Where humans used to have to read 100 comments, they may now only be reading 20. But this only works if the 80 comments that were automatically sorted into the “never publish” or “auto-publish” piles were sorted correctly. Where the algorithm makes mistakes, humans are far less forgiving than they would be of a fellow human blundering.
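The triage logic itself is simple; the judgement lives in where you set the thresholds. Here is a minimal sketch of that two-threshold routing, assuming a toxicity score between 0 and 1 has already been obtained from a scoring API. The function name and threshold values are illustrative, not taken from Perspective’s documentation:

```python
def triage_comment(toxicity_score, block_above=0.9, publish_below=0.2):
    """Route a comment based on its toxicity score.

    Scores at or above `block_above` are rejected automatically,
    scores at or below `publish_below` are published automatically,
    and everything in between goes to a human moderator.
    """
    if toxicity_score >= block_above:
        return "block"
    if toxicity_score <= publish_below:
        return "publish"
    return "human_review"

# A batch of scored comments: most are handled automatically,
# and only the ambiguous middle reaches the moderation queue.
scores = [0.05, 0.95, 0.5, 0.1, 0.85, 0.02]
queue = [s for s in scores if triage_comment(s) == "human_review"]
print(queue)  # → [0.5, 0.85]
```

Widening the gap between the two thresholds sends more comments to humans but makes fewer automated mistakes; narrowing it does the reverse — which is precisely the trade-off moderation teams are tuning.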

3. Making an automation project succeed in a smaller organisation depends on workflow questions as much as on the automation itself

One area Ernest and his colleagues are paying attention to is how new automated tools may be able to fit in existing workflows — or if the teams that will receive these new tools are willing to make changes to their workflow so they can best use these tools.

Although AI is intended to help workflow, team leaders must understand new tools will still be a disruption.

This is a bit of a paradox for many a news organisation — whether the smaller ones with limited resources, like those Ernest is working with, or the large ones with plenty of resources: Workflow is usually honed with the precision of a factory. While having strong processes usually creates clarity, we have to recognise that automation is almost always a disruption to existing processes.

Even the process of providing algorithmic training to an AI-powered system that’s currently in its learning phase can be a disruption. And, of course, a data scientist would say, “but that’s really an investment so you can eventually eliminate certain tasks in the future.” But the resistance of humans to process changes can be very high. 

Furthermore, as Ernest noted, we’re talking about folks who often have very long days and who, in the past few years, have had to absorb extra work as downsizing in their organisation consolidated several job functions into one role. Do these folks really have the brain space to cheerfully take in a period of disruption all in the name of some future, still-hypothetical improvement?

Ernest’s understanding of this predicament is helping him approach these projects with patience and compassion, but it is a lesson to consider for all organisations that are approaching AI-powered projects where training is going to come from internal staff as an add-on to their already full plates. 

*

A common thread in these observations is that building internal tools — especially tools built on still-fledgling technology — involves a large amount of people management, in addition to good product management and data science. The right project isn’t necessarily the one where the technology is the best fit, but rather one where humans and technology can live in harmony.

If you’d like to subscribe to my bi-weekly newsletter, INMA members can do so here.

