BBC, New York Times, Austrian Press Agency share thoughts on AI ethics
Conference Blog | 07 November 2023
As generative AI becomes part of the news media landscape, many news publishers are uneasy about how to use it responsibly.
During the recent INMA Smart Data Master Class, media leaders from the BBC, the Austrian Press Agency, and The New York Times shared their thoughts on ethics and best practices around AI experimentation that can help preserve readers’ trust.
BBC chooses transparency with AI
Laura Ellis, head of technology forecasting at the BBC, has studied what AI can do for news media companies now as well as what it means for the future of journalism. A foundational system of ethics, she said, is needed to ensure companies can maintain readers’ trust as they experiment with AI.
For the BBC, this means talking with audiences openly about the use of AI.
“For all of us in this industry who really have trust as a major issue, talking to our audiences about this is really key,” Ellis said. “Not only asking them what they think about generative AI and trying to get them to understand it through our broadcasting and our publishing — it’s really important to disclose to them how we’re doing this.”
Ellis pointed to the 2022 Reuters Institute Digital News Report that showed how significantly trust in media has declined. In the U.S., for example, only 26% of people trust the news most of the time.

“This is a very worrying trend and we don’t want to do anything to make this worse,” she said. “It’s worth thinking about where we sit in the context of trust when we look at generative media because we really can’t afford to take any further hits.”
One challenge for news media companies is that while AI allows them to work at scale, the volume of material it produces is so great that not everything can be reviewed and verified by humans. And that could compromise a company’s integrity: “How do we do that and make sure that we are not seeing things going out that are not trustworthy?”
One solution has been for companies to use AI as a tool to, say, create multiple headlines and let human editors choose which one fits best. Another practice is for human editors to review outputs before they are published, but Ellis said she has “only heard of one or two organisations that are literally feeding in lots of data and having stuff going out to audiences unmediated. It’ll be very interesting to see where that practice goes.”
Austrian Press Agency starts AI Task Force
At the Austrian Press Agency (APA), deputy editor-in-chief Katharina Schell represents the newsroom in the company’s AI Task Force, established earlier this year.
Schell shared the responsibilities of the Task Force:
- Refining the global strategy on generative AI.
- Overseeing and categorising AI advancements relevant to the information industry.
- Outlining use cases and analysing feasibility for AI’s integration into APA’s products.
- Consolidating AI activities within APA for uniform internal and external communication.
- Building a shared knowledge base, language rules, and AI usage guidelines.
- Running the APAcademy, offering continual AI training and productivity assessments.
Schell said the final two tasks, which further APA’s mission of “Trusted AI,” are essential for reliable journalism and content processes. APA’s values emphasise independence, fact-based reporting, innovation, and a profound understanding of its tools’ capabilities and limitations.

“These values are timeless in one way and extremely contemporary at the same time,” Schell said. “Because if you look at these concepts, they can be applied quite excellently to the current AI discourse.”
The New York Times chief data scientist looks at AI’s broader issues
Beyond addressing concerns around transparency and accuracy to maintain reader trust, publishers should be aware of the challenges posed by AI that are inherent to the technology itself.
Chris Wiggins, chief data scientist at The New York Times, recently delved into the evolving landscape of AI and its historical context. His insights, drawn from his book How Data Happened: A History From the Age of Reason to the Age of Algorithms, co-authored with Matthew Jones, explore the multifaceted nature of AI, encompassing its functional, critical, and rhetorical capabilities.
The book delves into the broader ethical challenges posed by AI, moving beyond privacy to issues like algorithmic bias. Wiggins and Jones discussed the efforts in auditing algorithms for bias, a topic gaining traction within the tech community, as seen in platforms like GitHub.

“GitHub is full of people doing their own independent analyses and sort of auditing algorithms to find out how we understand and how we quantify the bias in algorithms,” Wiggins said.
Wiggins and Jones also explore the principles of ethics in AI and their historical context, acknowledging the battle for data ethics is far from resolved. They emphasise the need to understand the intertwining of power, ethics, and technology in shaping our interaction with AI.








