News organisations embrace codes of conduct, use of AI

By Peter Bale

Based in New Zealand and the U.K.


Journalists are clearly embracing generative AI, and newsroom leaders are in many cases keen to explore ways to increase productivity with it. But it is equally clear that editors and business leaders need to communicate guidelines on its use to both staff and readers.


Mediahuis Chief Executive Officer Gert Ysebaert told a Congress session his company had published guidelines on how to work with generative AI and how to communicate that transparently to readers with a strong focus on using it to support quality journalism.

Mediahuis Chief Executive Officer Gert Ysebaert (right) at INMA's World Congress of News Media in May.

“It brings huge change,” Ysebaert said. “The way I look at it now, it will make or break the newsroom. We have to embrace it. How can we implement this very fast and do this right … mitigate risk? In the short term, (we) make a framework for our newsrooms. How can we use AI in the newsrooms in an ethical and responsible way.”

The Mediahuis guidelines were built on seven principles:

  1. Augment not replace.
  2. Transparency above all.
  3. Humans in the loop.
  4. Be fair, without biases.
  5. Prioritise privacy + security.
  6. Trust is key.
  7. AI skills training.

Financial Times

Almost as he was speaking, Financial Times Editor Roula Khalaf published her own open letter on how the FT intends to approach and embrace AI while stressing that “FT journalism in the new AI age will continue to be reported and written by humans who are the best in their fields and who are dedicated to reporting on and analysing the world as it is, accurately and fairly.”

Khalaf noted AI “has the potential to increase productivity and liberate reporters’ and editors’ time to focus on generating and reporting original content.”

However, she also talked of its propensity to “hallucinate” and the risks of the technology being used to spread misinformation and to erode rather than increase journalistic trust.

The FT would evaluate and adopt AI that supported its business objectives and commitments to editorial innovation but would do so transparently and carefully, she said.

“It is important and necessary for the FT to have a team in the newsroom that can experiment responsibly with AI tools to assist journalists in tasks such as mining data, analysing text, and images and translation,” she said. However, she pledged the FT wouldn’t use photorealistic imagery from AI and would alert readers when it incorporated AI into other forms of storytelling.

“The team will also consider, always with human oversight, generative AI’s summarising abilities,” she wrote.

KSTA Media 

Perhaps the most striking use case at the Congress for me was how thoroughly 400-year-old German media company KSTA Media has embraced generative AI in the newsroom and across its published products, emphasising adoption, experimentation, and transparency.

“We want to become an AI company,” KSTA Chief Executive Thomas Schultz-Homberg told the Congress. “This business is really under fire (and) one reason is AI.”

Thomas Schultz-Homberg, CEO of KSTA, speaking at the INMA World Congress of News Media.

Media companies, he said, had missed earlier “trains” in technology, and he was determined to meet the challenge of AI by embracing it rather than resisting it, saying: “Now as an old white man I don’t want to miss the last one, and I am very determined to get this train.”

KSTA was making AI a regular part of jobs in the newsroom and across the business. New topic pages were generated and assembled by natural language processing and had already improved search engine discovery. And content recommendation modules on KSTA sites had been switched to AI-driven selections after tests showed they performed better than those curated by editors.

At the business end, KSTA was adopting AI-influenced dynamic paywalls and pricing models to ensure that it responded to opportunities to capture new subscribers.

Schultz-Homberg said his newsroom had been “anxious” but that, after respectful talks and much testing, journalists had been ready to experiment in what he described as a balance in which AI acts as a helper rather than a creator. But, he said, KSTA would be “bold” in adopting AI and was prepared to “learn that while practicing.”

For more on generative AI, see the INMA report News Media at the Dawn of Generative AI by my INMA Smart Data Initiative colleague Ariane Bernard. There’s also an INMA Knows section on AI, which will constantly update with what we all write about this subject. 

If you’d like to subscribe to my bi-weekly newsletter, INMA members can do so here.

