Maybe AI transparency shouldn’t be so consequential in media
Ideas Blog | 21 April 2025
If you ask people whether they want the media to use AI, they’ll probably say no — or at least hesitate.
That applies to both journalists and our audience. But if you give the audience a really good AI-powered product, like Aftonbladet’s Election Buddies, they will use it. They might not love it, but they will use it (a lot, in fact).
Similarly, if you provide journalists with a tool that saves hours of monotonous work — like transcribing interviews — they will use it. (And maybe even love it?)
So, it’s not the technology that determines acceptance but the value it creates. Yet, much of the AI debate revolves around principles and policies rather than practical use. This is especially evident in discussions about AI transparency.
Does transparency really matter?
Over the past year, I’ve attended several training sessions, conferences, and lectures on AI, and almost every time the message was the same: it is crucial for news organisations to inform the audience when generative AI has been used.
But why is that important?
From my perspective, the audience doesn’t really care that much.
Take this example: If a local newspaper launched an amazing interactive restaurant guide in the form of a chatbot tomorrow, or introduced a hyper-relevant personalised newsletter generated with AI, many of its readers would likely appreciate it.
The questions the audience might have about these products are:
- Is the information accurate?
- Is it interesting and relevant?
- Is it presented in a simple and user-friendly way?
The question that few would ask: How did these numbers and letters end up on my phone?
At least in Sweden, where Aftonbladet operates, media outlets are legally responsible for everything they publish on their platforms. That should be enough. The point of distinguishing between different types of content has become, at least to me, increasingly unclear. Perhaps it’s even harmful.
Many people are sceptical of AI technology. Maybe they’ve tried searching for their name in ChatGPT and received bizarre responses. Some media AI experiments have been poorly executed and widely criticised.
Aftonbladet has invested significant resources in experimenting with and implementing AI solutions. We’ve built internal tools as well as a range of consumer-facing products such as chatbots, text-to-speech, and article summaries.
Last summer, we tested AI-generated summaries of a highly popular Swedish radio format, where well-known figures share their life stories. The idea was solid — we wanted to capture the SEO traffic that always spikes when these programmes air — but the results were harshly criticised.
The summaries were awkwardly written, and sometimes ChatGPT (which we used) misunderstood key points. Over time, the summaries improved, but that doesn’t matter.
The point is that, of course, it was our fault — Aftonbladet’s fault — that we launched a product that didn’t meet our standards.
It doesn’t matter whether it was written by a human, an AI, or a monkey with a typewriter; it’s still the newsroom’s responsibility. Not some vague entity called “AI.”
The problem is not how the words got onto the screen. The problem is what they say.
Taking responsibility
As my colleague Agnes Stenbom recently pointed out in a discussion on this topic, AI labelling has become a way for media organisations to deflect responsibility: “It’s not our fault that this piece of content sucks; it’s AI’s.”
This is problematic for at least two reasons. First, it undermines audience trust in the content. Second, it risks creating a two-tiered system in which AI-labelled content is implicitly held to a lower standard.
There may be exceptions — for instance, with images and video (our guidelines prevent us from altering news images, whether done with AI or any other technique). But for written content, the reality remains the same:
Media organisations must produce truly great journalism if our products are to survive — and we must take responsibility for what we publish. That is true whether the information was generated by a language model, sourced from Google, or written using the language skills we learned in fourth grade.
If we fail and give the audience subpar material, it’s not AI’s fault. It’s ours.