Where do ethics fit in an AI-driven newsroom?

By Rianette Cluley

Briefly News

South Africa

Deepfakes, digital hallucinations, and misinformation directly threaten the journalism industry, which relies heavily on its readers’ trust. Unfortunately, ethical frameworks for the use of AI in newsrooms are still in their infancy globally.

But one thing is certain: News organisations should develop their own policies and best practices rather than wait for global guidelines.

Fanie van Rooyen, a research fellow in the Department of Journalism at Stellenbosch University, wrote his PhD thesis on how journalists and scientists can best communicate about potentially disruptive emerging technologies, such as AI, in the public sphere.

“The ethical use of AI in newsrooms is becoming increasingly important since it is becoming harder and harder to sift fact from fabricated fiction on the Internet,” he said. “As such, all newsrooms should make it a priority to decide on policies and best practices for journalists when it comes to the use of AI technologies in order to safeguard the media’s sacred role of upholding truth.”

There is no denying AI will play a role in newsrooms. To be proactive about how these tools should be used, media companies need to develop clear policies.

In the newsroom where I work, we use AI to streamline tedious tasks, which allows writers to focus more on the creative aspects of their reporting.

However, many journalists see AI only as a threat to their jobs rather than as a tool that can lighten their workload. Many also express concern about becoming too reliant on AI.

For my part, I believe the opportunity lies in focusing on the boundless possibilities AI offers journalists, who are always chasing the next big story or trend, rather than fixating on the potential pitfalls.

Ethical use of AI and setting clear policies in your newsroom

Google takes a clear stance on the use of AI for the sole purpose of ranking higher on its platform: it wants to give its users people-first content.

Google, in its spam policies, states: “Using automation — including AI — to generate content with the primary purpose of manipulating ranking in search results is a violation of our spam policies. The site can be downgraded or banned.”

At my organisation, we have adopted the same stance. In our AI policy document, we clearly set out how our contractors can and cannot use AI, as well as the ramifications should they violate the policy. Our policy is that no content may be created, even partially, using AI. At the end of the day, news written by real people resonates more with readers.

At Briefly News and Legit, we use AI to assist with finding credible sources of information, verifying facts, identifying experts to contact for comment, handling follow-ups, and generating ideas, among other purposes. Using AI tools for these tasks can free up a ton of time for writers to focus on what they love doing: writing.

However, under the internal policies we have drafted, such use is permitted only when a human remains in the loop.

Potential risks of using AI in the newsroom

There are many pitfalls to consider when using AI technologies in the newsroom, van Rooyen said: “The most obvious is, of course, ‘AI plagiarism.’ Chatbots like ChatGPT and Gemini have made it incredibly easy for anyone, including journalists, to instantly whip up a news article on any topic and make it sound truthful, even if not based on real facts or evidence. Fake news, therefore, has become much easier to produce.”

Unethical journalists can easily create questionable or wholly false news reports, or lazy journalists could unwittingly use false information an AI programme provides if they don’t double-check the facts, van Rooyen added: “Taking it a step further, with text-to-image AI generators like Dall-E, Stable Diffusion, or Midjourney, text-to-video generators like OpenAI’s Sora, and Deepfake technology, it will soon become nearly impossible to distinguish real images and video from AI-generated content.”

Global outlook on AI and journalism

A number of news organisations worldwide are researching the impact generative AI will have on the industry. Last year, the International Consortium of Investigative Journalists (ICIJ) revealed that it had joined an international committee to develop guidelines on the use of artificial intelligence in media.

Professor Rasmus Kleis Nielsen, director of the Reuters Institute, said in an article titled “How the news ecosystem might look like in the age of generative AI” that, in the near future, generative AI won’t be as game-changing as everyone is making it out to be. “It looks a lot like a bubble, and bubbles eventually burst,” he said.

But Nielsen said that even when the hype eventually dies down, something will be left behind, “especially for publishers who double down on what makes them different.” AI can help journalists exercise editorial judgment, fact-check, and verify claims more quickly, among other things.

“I think that’s where the long-term opportunity lies — at the intersection between the timeless journalistic aspiration to seek truth and report it, the constantly evolving set of tools and technologies that can help journalists do that and can help people engage with journalism, and the enduring public desire to make sense of the world and what happens in it,” Nielsen said.

Increasingly, AI tools will be created to simplify journalists’ work, but journalists should keep in mind that they are accountable to their readers and must adhere to journalistic ethics when shaping the final product.
