BBC, Schibsted share early tea leaves about GenAI in the industry
Generative AI Initiative Blog | 03 March 2025
At first glance, it might look like we are starting to see a bit of a backlash against GenAI.
BBC research
Consider this: The British Broadcasting Corporation put out a 24-page statement on research it has undertaken showing that GenAI assistants such as ChatGPT, Copilot, and Google Gemini often get facts wrong or make things up. In its experiment, 51% of all AI answers to questions about the news were judged to have significant problems.
I realise the possibility of “hallucination” by GenAI assistants is not breaking news for anyone who reads this newsletter. Indeed, some members of the INMA community have been in touch to point out to me that some inaccuracies could well have been prevented because the prompts the BBC used for its experiment were very basic.
I have also had great conversations in the past with innovative colleagues at the BBC who have reminded me the Beeb is a great believer in using GenAI responsibly — and indeed that they believe it is vital they experiment with GenAI and are transparent with their audience about this.
Why, then, would they publish this paper? I believe it is because they are concerned about consumers getting their news from answer engines by typing in basic queries like the ones they used, and they are also worried about the erosion of trust.
It is also a reminder that GenAI is a black box right now, according to Peter Archer, BBC programme director of GenAI.
“Publishers like the BBC should have control over whether and how their content is used, and AI companies should show how assistants process news along with the scale and scope of errors and inaccuracies they produce. This will require strong partnerships between AI and media companies and new ways of working that put the audience first and maximise value for all. The BBC is open and willing to work closely with partners to do this.”
Schibsted-OpenAI partnership
Speaking of which: Scandinavian news publisher Schibsted has signed up as a partner with OpenAI.
“Although AI sometimes arouses skepticism, the reality is that AI-generated content is no different from other content in one crucial aspect — we are always responsible for what we publish, regardless of how it has been produced,” said Martin Schori, deputy publisher of Schibsted’s Aftonbladet.
“We want our journalism to be where the audience is. This agreement ensures that our content is visible and that the source — Schibsted’s media house — is always clear. If even more people start using the chatbots and we are not there, this could, of course, threaten our position as Sweden’s leading media house in the long term.”
The real threat may come from elsewhere, though: Research shows GenAI in the workplace is actually making us less capable.
Microsoft research
A study by none other than Microsoft finds that while GenAI can improve worker efficiency, it can also inhibit critical engagement with work, potentially leading to long-term overreliance on the tool and diminished skill for independent problem-solving.
“Higher confidence in GenAI’s ability to perform a task is related to less critical thinking effort. When using GenAI tools, the effort invested in critical thinking shifts from information gathering to information verification; from problem-solving to AI response integration; and from task execution to task stewardship. Knowledge workers face new challenges in critical thinking as they incorporate GenAI into their knowledge workflows.”
Want more? How about this study by Gartner, which predicts that by 2028 almost one-third of sales staff entering the workforce will lack critical sales skills due to an overreliance on AI technologies?
“As sales organisations increase their interest in and dependence on AI-enabled technologies, there will be a rapid decline in sellers’ analytical as well as social skills, such as effective communication, which are essential for relationship building with customers,” Gartner wrote.
All three of these pieces of research are a worthwhile reminder that GenAI is good at some things — such as brainstorming, translation, converting text to speech, or performing repetitive tasks like keywording — and should not be blindly relied upon for other tasks. Let’s keep our wits about us as we navigate this.
If you’d like to subscribe to my bi-weekly newsletter, INMA members can do so here.