Opportunities, challenges of AI are top of mind throughout news media companies

By Nevin Kallepalli

New York, United States

By Sarah Schmidt

INMA

Brooklyn, New York, United States

By Dawn McMullan

INMA

Dallas, Texas, United States

The single most-discussed topic — in a thrilled or an anxious way, depending on your perspective — among the panels, study tours, and workshops of this year’s INMA World Congress of News Media was, not surprisingly, AI.

Sara Fischer, senior media reporter at Axios, pointed out during one of her daily summaries of the Congress that it took news media publishers a decade to get on board with the full force of social media: “The fact that ChatGPT rolled out less than six months ago and that is the entire topic of this Congress shows you how much we have evolved as an industry.”

During the Congress’ Smart Data Workshop, Jessica Davis, senior director/news automation and AI product at Gannett, asked attendees to stand up if they were excited about AI, then to stand up if they were nervous about AI. Some did both, but most landed on excited.

Jessica Davis is senior director for news automation and AI product at Gannett.

As with the seismic technology shifts that came before — the advent of the 24-hour news cycle, the Internet, and, most recently, social media — enthusiasts are finding creative ways of utilising AI toward more efficient workflows, new means of advertising, and increased readership. Critics fear that, without proper regulation, AI will spread misinformation and eliminate the human touch essential to quality control of journalism’s most precious ideal: the truth.

Most at the weeklong conference fell somewhere in the middle. Experts in the field demonstrated healthy skepticism while pondering the possibilities of harnessing the power of this emerging technology.

“The quest to protect provenance has entered a fresh phase ... with the rapid evolution of generative AI, which certainly has the potential to be degenerative AI,” said Robert Thomson, CEO at News Corp. “The task for all here is to ensure that we are AI alchemists and that it becomes regenerative AI.”

Praveen Someshwar, managing director and CEO of HT Media Group, said AI brings “massive opportunity,” but the job of news publishers is still to reach audiences and amplify factual content.

“Generative AI is going to change the way content gets amplified and it’s going to change the way audiences consume it,” he said. “So it’s both an opportunity and a threat. The threat is that it is getting trained from the content we and others create, and we creators may not be rewarded in a balanced manner. If publishers and Big Tech can work together, there is a massive opportunity for both. But Big Tech has to share the spoils.”

Anoushka Healy, chief strategy officer at News Corp, urged caution to World Congress study tour members: “A lot of money could be spent at great speed with not necessarily the results we need because there is a sense of a rush.”

There is value, Healy said, in knowing AI is here for the long haul.

“That’s not a way to say don’t do anything, but I think it is a moment to recognise the strength of your own business, content, the value of that content, the value and nature of the relationship with the reader. We know how important that is for our business. We know nobody outside of media fully understands the depth of trust from that audience.

“This is a moment to double down on what you are as a business and then ... decide where to jump off. Think about where the jumping-off point is for you. You don’t necessarily need to leap to the biggest thing first. There is value in cool, calm, collected ways with your management teams. We’re just going to listen, learn, engage, remind ourselves what we are absolutely incredible at doing. What we care about is bringing in tech that will be absolutely transparent with our journalists and their relationship with the readers. Doing it the right way with the right speed is the best way [to build trust].”

Most important benefit: more efficient editorial, marketing, advertising teams

“There are so many potential positives — particularly in efficiency, in terms of automation and enhancing the work of the newsroom,” said Sinead Boucher, CEO of Stuff in New Zealand. “But it’s going to take a lot of focus and effort to ensure things go right. Right now we’ve got the kind of implementations many others have. For instance, using AI to start layouts and do summaries.”

AI use cases in journalism from Cantellus Group.

At The New York Times, algorithmic programming supports curation and editorial judgment.

“We always start with the editorial consideration,” said Derrick Ho, deputy editor for homescreen personalisation. One example: the Times’ “In Case You Missed It” homepage module. Editors choose important, underexposed stories to add to a pool the module can pull from, while an algorithm identifies readers who have not yet read them.
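
A minimal sketch of how such a module might work under the hood: editors maintain a curated pool, and a per-reader filter drops stories the reader has already seen. The data structures and names below are hypothetical, not the Times’ actual system.

```python
# Hypothetical sketch of an "In Case You Missed It"-style module: editors
# curate a pool of stories; a simple filter personalises it per reader.
from dataclasses import dataclass, field

@dataclass
class Story:
    story_id: str
    headline: str

@dataclass
class Reader:
    reader_id: str
    read_story_ids: set = field(default_factory=set)

def icymi_picks(editor_pool: list, reader: Reader, limit: int = 3) -> list:
    """Return editor-curated stories this reader has not yet read."""
    unread = [s for s in editor_pool if s.story_id not in reader.read_story_ids]
    return unread[:limit]  # pool order is assumed to reflect editors' ranking

# Usage: the module pulls from the curated pool, skipping already-read pieces.
pool = [Story("a1", "Underexposed investigation"), Story("b2", "Overlooked feature")]
reader = Reader("r9", read_story_ids={"a1"})
print([s.headline for s in icymi_picks(pool, reader)])  # ['Overlooked feature']
```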

Many newsrooms see AI as a potential time saver with the ability to do simple press release rewrites or summaries. 

Hearst Newspapers has developed an audience bot called Producer-P that its local newsrooms can use to save time on repetitive optimisation tasks. It suggests Web headlines, SEO keywords, URLs, and summaries.

Tim O’Rourke, HNP’s vice president of content strategy, told World Congress attendees during a study tour that the tool saves editors about five minutes per story, which adds up to major savings considering more than 200 journalists use the tool to post hundreds of stories a week.
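
Hearst has not published Producer-P’s internals, so the sketch below is only an assumption about how a tool of this shape could work: a few LLM prompts plus a slug generator. Both call_llm and suggest_packaging are hypothetical stand-ins, not Hearst code.

```python
# Hypothetical Producer-P-style assistant; call_llm() is a stand-in for
# whatever model provider the newsroom actually uses.
import re

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call (e.g. a hosted chat-completion API)."""
    raise NotImplementedError("wire up a model provider here")

def suggest_packaging(article_text: str) -> dict:
    """Draft the repetitive optimisation outputs an editor would otherwise write."""
    headline = call_llm(f"Write a concise Web headline for:\n{article_text}")
    keywords = call_llm(f"List five SEO keywords, comma-separated, for:\n{article_text}")
    summary = call_llm(f"Summarise in two sentences:\n{article_text}")
    slug = re.sub(r"[^a-z0-9]+", "-", headline.lower()).strip("-")  # URL slug
    return {"headline": headline, "keywords": keywords,
            "summary": summary, "url_slug": slug}
```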

And many journalists are making use of AI’s ability to support innovative storytelling platforms.

“I like AI tools when they’re augmenting my creativity as a human as opposed to pretending to be human,” explained Joe Posner, head of video for the news start-up Semafor. He cited Semafor’s Witness series, in which footage of eyewitness accounts of events, like the Russian invasion of Ukraine, is combined with artists’ illustrations using AI tools.

Gannett takes data from the National Weather Service and creates templated stories around severe weather events, putting each event in the context of climate change with a deeper look at the issue, Davis explained. The AI experiment started with one title, The Arizona Republic.

“Each of these alert stories we see as an opportunity to help people understand context and invite them into our climate journalism,” Davis said. “We’re building a habit, mapping out the user journey.”
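
As a rough illustration of that kind of templating, alert stories can be drafted from the National Weather Service’s public alerts API. The endpoint is real, but the template wording, the climate_link explainer URL, and the pipeline itself are assumptions, not Gannett’s published system.

```python
# Sketch: turn active NWS alerts into templated story stubs that link out to
# deeper climate journalism. Response fields are simplified.
import requests

TEMPLATE = ("{event} issued for {area}. {headline}. "
            "Read how a warming climate is changing extreme weather: {link}")

def alert_stories(state="AZ", climate_link="https://example.com/climate-explainer"):
    resp = requests.get(
        "https://api.weather.gov/alerts/active",
        params={"area": state},
        headers={"User-Agent": "newsroom-demo"},  # NWS asks clients to identify themselves
        timeout=10,
    )
    resp.raise_for_status()
    stories = []
    for feature in resp.json().get("features", []):
        props = feature["properties"]
        stories.append(TEMPLATE.format(
            event=props.get("event", "Weather alert"),
            area=props.get("areaDesc", state),
            headline=props.get("headline", ""),
            link=climate_link,  # hypothetical evergreen explainer URL
        ))
    return stories
```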

The Globe and Mail uses algorithms to mine themes among its top-performing headlines. According to the findings, readers are concerned with real estate, advice, and content about “rewards,” said Tracy Day, managing director for ad products and innovation.
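
A toy version of that kind of theme mining counts hand-picked theme keywords across the best-performing headlines, as in the sketch below. The keyword lists are illustrative; The Globe and Mail’s actual models are not public.

```python
# Illustrative theme mining: tally theme keywords in the top headlines by CTR.
from collections import Counter

THEMES = {
    "real estate": ["housing", "mortgage", "home prices", "real estate"],
    "advice": ["how to", "should you", "tips"],
    "rewards": ["points", "rewards", "cash back"],
}

def theme_counts(headlines_with_ctr, top_n=100):
    """headlines_with_ctr: list of (headline, click-through rate) tuples."""
    top = sorted(headlines_with_ctr, key=lambda pair: pair[1], reverse=True)[:top_n]
    counts = Counter()
    for headline, _ctr in top:
        text = headline.lower()
        for theme, keywords in THEMES.items():
            if any(k in text for k in keywords):
                counts[theme] += 1
    return counts

print(theme_counts([("How to max out your rewards points", 0.08),
                    ("Home prices cool in Toronto", 0.05)]))
```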

Responding to the desires and anxieties of subscribers through advertising is critical to Day’s overall strategy of creating quality, custom-tailored ads instead of focusing on sheer quantity. Algorithmic tools are a great place to start, she said, then explained the pros and cons of using AI, as shown below:

The Globe and Mail identified the pros and cons of using bots.
The Globe and Mail identified the pros and cons of using bots.

Day’s team performed an interesting experiment to pit a “good bot” against old-fashioned human-written copy: It enlisted AI to generate an Instagram ad about retirement and tested it against one created by a person.

The Globe and Mail tested human skills against AI for creating an Instagram post. The human-written post (left) performed better than the post written by the bot.

The human-made content (on the left) had a 33% higher CTR and twice the engagement. The direct address of the reader was not something AI could create, suggesting a person’s touch can never truly be replaced by machine learning. 
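
For anyone running a similar head-to-head test, the lift and its statistical significance are simple to sanity-check. The counts below are hypothetical placeholders chosen to produce roughly a 33% lift; they are not The Globe and Mail’s data.

```python
# Two-proportion z-test for an ad A/B test: relative CTR lift plus p-value.
from math import erf, sqrt

def ctr_lift(clicks_a, views_a, clicks_b, views_b):
    ctr_a, ctr_b = clicks_a / views_a, clicks_b / views_b
    lift = (ctr_a - ctr_b) / ctr_b            # relative improvement of A over B
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    z = (ctr_a - ctr_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return lift, p_value

# Hypothetical: human ad 400 clicks/10,000 views vs. bot ad 300/10,000 (~33% lift)
print(ctr_lift(400, 10_000, 300, 10_000))
```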

What can and can’t AI do? Marcelo Benez, chief commercial officer of Folha de São Paulo, said bots can assist with:

  • Audience segmentation.
  • Optimisation of budget and auction offers.
  • Personalised ads and images.
  • Data analysis.
  • Virtual assistants and chatbots.

But he offered a caveat: “The human role is essential, at least in the beginning and end of the process.” Folha de São Paulo has quickly adopted radically new forms of technology in its advertising strategies, such as orchestrating live broadcasts in the metaverse and Web3.

Ippen Digital, the largest platform for regional news in Germany, has created five editorial principles for the responsible integration of AI into its CMS, Alessandro Alviani, product lead/natural language processing, explained during the Congress’ Smart Data Workshop. They range from transparency toward editors and readers to compliance with editorial values and standards.

Ippen has a new generative AI team starting this month, including members from editorial and tech: “This will speed up the processes and product development, improving all prompts in terms of large language models, experimenting, and evaluating,” Alviani said.

An unexpected benefit: The value of trust is on the rise

Smart Data Initiative Lead Ariane Bernard reminded workshop attendees that keeping humans in the loop is vital to media’s success with AI, which hinges on the trust of readers.

“Our main KPI should always be trust,” she said. “That is ultimately why people care about what you have to say rather than some dude on the Internet. It’s very easy to lose the trust people have in us and very hard to gain it back.”

Among all the awe and fear in response to the last few months of AI’s rapid development, New York Times Publisher A.G. Sulzberger and News Corp’s Thomson both echoed an unexpectedly optimistic sentiment: As accurate content becomes increasingly difficult to verify, trusted news sources will be more vital than ever.

“AI is almost certainly going to usher in an unprecedented torrent of crap, to use the scientific word,” Sulzberger joked. “The information ecosystem is about to get much, much worse ... I suspect we’re going to need to use [news] brands as proxies for trust.”

The fake puffy-coat Pope illustrates readers can't always believe what they see.

Gert Ysebaert, CEO of Mediahuis Group, agreed: “The real opportunity for us is that we’re not in the content business. We’re in the journalism business. In a world where there will be so much content made by AI, people will look for what the human view is, for the human touch, and I think they will continue to value that.

“We can differentiate ourselves with trust. People will be willing to pay if they are engaged and if they trust us. But if people don’t understand journalism well enough, we have to explain better. If we want to make a difference, we have to explain what we are doing.”

“These AIs — the machines themselves — are very capable of radicalisation of individuals,” said Henry Timms, co-author of New Power. “If true, that collective experience actually collapses and everybody has an individual experience, which gets you to a world with some very poor results.

“What does that mean for the defenders of truth? They become hugely powerful institutions in this world — the collective power in the room.”

The cons: Misinformation, lack of guidelines, errors and biases, transparency

Thomson sees generative AI as a real threat to journalism in three main ways:

  • Content is being harvested and scraped to train AI engines.
  • Individual stories are being surfaced in specific searches.
  • Original journalism can be synthesised and then presented as distinct in the form of “super snippets.”

“These contain all the effort and insight of great journalism, but they’re designed so the reader will never visit a journalism Web site, thus fatally undermining that journalism,” he said.

Other concerns about AI discussed at World Congress include:

Lack of guidelines: There are no clearly defined principles surrounding AI yet, and Thomson doesn’t believe government regulation will come any time soon. That’s why it’s crucial for journalists and media companies to stay vigilant now in advocating for their own interests — so they are paid fairly for original content.

“One of my concerns is that AI could become the preserve of techies rather than the domain of us all,” Thomson said.

Most of the industry is well-versed in the knowledge that ChatGPT was trained on content published before 2021, much of it from news media companies. This brings up two considerations:

  1. Are news media companies going to be compensated for that? The content started with them, yet nothing about ChatGPT directs users to the original source of the content (i.e., a media Web site), taking monetisation off the table. “I think we ought to get paid,” said Martin d’Halluin, deputy general counsel at News Corp.
  2. News media companies and users deserve transparency about the original sourcing of the information ChatGPT produces.

“Without our content, with generative AI, there is no content,” d’Halluin said. “Innovation is key. We have to look at protecting the content and understanding why it was copyrighted in the first place.”

Spread of misinformation: It will be even more difficult with AI to sift through the barrage of misinformation, exacerbating the ongoing war on truth. 

Addressing the audience at his discussion on technology and journalism, The Atlantic CEO Nick Thompson offered a humorous — and illustrative — example of what AI tools are capable of creating.

A video appeared on screen of Thompson talking to the camera. Suddenly, different celebrities’ faces were superimposed onto his own as he said the following words: “Do you think it’s getting easy to use AI to superimpose faces on each other? AI was trained on human images. It’s been designed and built to replicate the way humans think. And it turns out this is a pretty simple proposition.”

The quality of the video was crude, but the sophistication of these tools is rapidly progressing, and they are increasingly available to the public. In the future, the power of the deep fake will be a force to be reckoned with.

Karen Silverman, CEO and founder of the Cantellus Group, is a technology governance specialist with expertise in AI and a member of the World Economic Forum’s Global Future Council on the Future of Artificial Intelligence.

The concerns she hears from journalists echo some of those she hears from leaders in banking and finance about security and automation, but media is in a special position with particular responsibilities: “The advanced technologies that we are producing today put extreme pressure on distinguishing between data, information, and knowledge.

“There is tremendous worry about the dangers generative AI poses to the work of journalists. Not only is AI capable of generating content that can compete with the work of actual journalists, but it can also be used to disseminate misinformation and create deep fakes that are difficult to recognise. We’re not equipped for a world in which we can’t rely on what we see and hear.”

Deep fakes and misinformation pose particular challenges, and it’s incumbent on journalists to work hard to discern what the truth really is. Strong newsroom leadership will be crucial as this issue continues to shift. 

Errors and biases: Small errors in a data set can have big repercussions — depending on how the outputs are used — and it will largely be up to journalists to root out the source, Silverman said.

Biases are also inherent in the information that trains generative AI. Most models were trained overwhelmingly on Western data, which is a huge part of the problem. AI is also drawing on years and years of information that is filled with bias. 


Transparency: “The key is to augment journalism — not replace it — and to be transparent to readers when using it for things like summaries,” Ysebaert said. “It’s about trust and keeping the human in the loop. The editor-in-chief is still the one who is responsible for what is published. 

“One thing we did at Mediahuis is to make a framework for our newsroom for using AI in an ethical way. We developed a document with seven simple principles to act as a manual. Our editors-in-chief like it a lot. They made it themselves and they really needed it. So much is coming at us so quickly. We need to make priorities and we need to do it in a controlled way.” 
