Ringier leverages AI benefits while minimising its risks with new guidelines
Ideas Blog | 20 September 2023
“Since the end of 2022, it has been clear: Generative AI has set a powerful wave in motion. Once again, the media industry is among the first to be hit by the wave. This time, however, we don’t want to be caught up in it. This time, we want to ride the wave.”
This quote by Ladina Heimgartner, Head of Global Media at Ringier AG and CEO of Blick Group, points to three things:
- At Ringier, we believe that generative AI will have a significant impact on the media industry.
- We see a big positive potential of generative AI for our journalism and our business.
- We are also very conscious of the responsibility that comes along with this new technology, especially when it comes to transparency, journalistic quality, and the trust our readers and users put in us.
A map for unknown territory
As an international media and tech company based in Switzerland — with more than 100 companies in media, sports media, and digital marketplaces in 19 countries — Ringier wants to make sure that our core values are met when exploring unknown territory. Those values include independence, freedom of expression, and a pioneering spirit.
In spring 2023, we began work on our group-wide AI Guidelines, a collaboration between our business, legal, and data protection teams. In May 2023, we introduced the guidelines to all our employees at the presentation of our annual report.
For Ringier, explaining the company’s relationship and engagement with AI is important internally and externally.
We want to act responsibly by transparently disclosing the use of AI. And we want to minimise the legal, data privacy, and reputational risks that come with using AI.
Key guidelines
The Ringier guidelines for dealing with generative AI include the following points:
Responsibility (“human in the loop”): We are fully responsible for all content that we produce, with or without generative AI. Therefore, everything must be verified, checked, and supplemented using the company’s own judgement and expertise, in line with our journalistic standards.
Labeling: As a general rule, content generated by AI tools shall be labeled. Labeling is not required in cases where an AI tool is used only as an aid.
Confidentiality: Our employees are not permitted to enter confidential information, trade secrets, or personal data of journalistic sources, employees, customers, business partners, or other natural persons into an AI tool.
Code: Development code will only be entered into an AI tool if the code neither constitutes a trade secret nor belongs to third parties, and if no copyrights are violated, including open source licence terms.
Bias: The AI tools and technologies developed, integrated, or used by Ringier shall always be fair, impartial, and non-discriminatory. For this reason, Ringier’s own AI tools, technologies, and integrations are subject to regular review.
Just the beginning
It is already clear that this is just the first version of the guidelines. As generative AI technology develops rapidly, the guidelines will evolve with it: they will be continuously reviewed in the coming months and adjusted where necessary.
To sum up: at Ringier, we are convinced that responsible interaction between humans and machines, grounded in clear AI guidelines, can further improve our products, processes, and content.