Aftonbladet offers an “AI buffet” to its users and big wins for its corporate culture
Smart Data Initiative Newsletter Blog | 04 December 2023
Hi everyone.
It’s “Spotify Wrapped” season! And if this isn’t an awesome example of regular analytics wrapped up with pretty strings and bows and turned into a great marketing move, I don’t know what is.
Are there perhaps some cool ways you could use your own user analytics to delight and surprise your users the way Spotify does? (Over here, it will not surprise any regular readers to learn who came in as my No. 1. Yes, this is your gratuitous TSwift mention for the week.)
And also delivered with pretty strings and bows: another bit of a status report volunteered by our friends at Aftonbladet in Sweden on their AI project. For someone like me who trades in large-scale tooling infrastructure, where projects are many things but hardly ever quick to roll out, it’s very exciting to hear about experiments with such a fast time-to-market from ideation to testing and rollout. What a time to be alive.
But there is a cultural angle to all of this, and this one is more deliberate and thoughtful: bringing these technologies to our organisations in a manner that brings everyone along for the ride.
Until next time, Ariane
The “AI Buffet” at Aftonbladet
A few weeks ago, I wrote about Aftonbladet’s cross-functional AI team, which is taking on a time-boxed exploration mission to investigate workflow improvements and new end-user experiences at the Swedish daily.
The mission is set to last six months, so I had planned to loop back with these good folks in a few months to see where things were at. I didn’t expect Martin Schori, the deputy editor-in-chief at Aftonbladet, to drop into my inbox just a month later with outcomes and good stories.
But that’s the thing with many of these new libraries and platform tools: When things go well, the lead time between your good idea and production can be much shorter than what we are used to.
In part, this is because the AI platform is at once your engineering architect, your developer, and your delivery platform. And if you’ve ever built complex applications, you know that the more stakeholders you need to take part in your project, the longer your time-to-market, regardless of how complex or simple the application.
When I say “when things go well,” I mean not just when the AI seems to deliver something compliant and useful, but also when we have the ability to reliably test the outcome.
Case in point: the “five points bullet maker” Aftonbladet rolled out as part of its AI exploration.
“It took me about 20 minutes to put it together,” said Martin, “but significantly longer to test it and train out its bad behaviors.”
So no one would say the tool took 20 minutes to build, because iteration is labour just as much as conceiving of the thing itself. But any new idea that you can at least smoke test in 20 minutes is a huge improvement over the ways of the past. Many of the shots we don’t take go untaken not because we aren’t interested in them, but because we don’t see a cheap way to smoke test them in the first place.
If AI gives us a way to launch 1,000 extremely cheap experiments — and even if we toss out 950 of them — it leaves us with more than enough good, actually smoke-tested ideas to take to their next stage.
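To make that 20-minute smoke test concrete, here is a minimal sketch of what a five-bullet maker could look like when a hosted model does the heavy lifting. This is my own illustration, not Aftonbladet’s actual implementation: the model choice and prompt wording are assumptions, written against the standard OpenAI Python client.

```python
# A minimal sketch of a "five-bullet maker" (illustrative only, not Aftonbladet's tool).
# Assumes the official OpenAI Python client and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def five_bullets(article_text: str) -> str:
    """Ask a chat model for exactly five short bullet points summarising an article."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice, not what Aftonbladet uses
        messages=[
            {
                "role": "system",
                "content": (
                    "Summarise the article below in exactly five short bullet points. "
                    "Use only facts stated in the article; do not speculate."
                ),
            },
            {"role": "user", "content": article_text},
        ],
        temperature=0.2,  # keep the output close to the source text
    )
    return response.choices[0].message.content
```

The 20 minutes buy you roughly this much; the much longer stretch Martin describes goes into testing the output against real articles and prompting away the bad behaviours.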
The team at Aftonbladet also rolled out their “Youth Assistant,” which “creates timelines, fact boxes, and Q&As on any topics/stories written in Aftonbladet’s tone, with the instruction that an 18-year-old should understand the content and with clear source references,” Martin said.
This experiment illustrates two different important trendlines of generative AI in our industry right now:
The first is that while there are likely a lot of efficiencies we can hope for from AI, generative AI in its current stage, and in news industry applications, is not necessarily saving us a lot of time or productivity, because we’re mostly using it to investigate areas where we simply weren’t present before. Martin underscored this dimension for both the five-bullet maker and the Youth Assistant. But in both cases, there really wouldn’t be such a feature if it weren’t scalable through automation.
Another way to say this: Either we use some technologically augmented way to build these features quickly and well, or we don’t build them at all.
The second dimension is that we’re therefore addressing audiences that we had not been able to serve. The Youth Assistant aims to make Aftonbladet’s journalism more accessible to more people; the bullets have the same goal for a general audience. This is something that VG, another Schibsted publication in Norway, had found with its own bullet experiments (check out this webinar): the feature helps more people click into stories than do when the bullets are not present. In straight marketing terms, the bullets bring extra conversions.
But not every new tool and feature is directed at end users.
Internally, the team at Aftonbladet also rolled out the “Buddy reader,” a tool meant to support the back end of making journalism: it proofreads, gives feedback on sentence structure, and finds repetitions and weaknesses in reasoning.
And there are several new features that are also being offered up to the newsrooms, “a veritable AI buffet,” in the words of Martin.
Exploring AI tools as an act of cultural transformation
There is a bit of a gap between the fawning headlines you may read (the “AI-is-changing-everything” type) and the more discrete tools we’re seeing flourish in newsrooms like Aftonbladet’s. In this light, exploration projects like the one Aftonbladet is running are just as much about the tools and technologies developed, the new workflows, and the small new features we can put in front of users as they are about bringing along organisations that are facing yet another round of disruptions.
One of the reliably hardest angles of transformation is the management of fear, at times even existential fear. If I had a nickel for every time I’ve been asked (by publishers, as a dinner party guest) whether AI was going to put all journalists out of business… But I get it. There’s something disquieting about a tool that seems to serve certain functions that, say, a line editor would have handled just a few years ago.
Martin underlined how much joy there was in seeing folks without a technical background use generative AI to accomplish tasks that previously required computing knowledge, or in getting feedback that users loved the five-bullet feature. But he also underlined how, unusually, the goal of the project was culture change.
“Usually, using a technique or technology is not the goal itself. It shouldn’t be a goal, but instead, the goal would be to create a modern and new experience. But since this technology is new, we felt that we need to experiment with it and make that a goal in itself, so to speak. Right now, at least. We want the overall knowledge of AI to improve and for everybody to be on board on this transformation,” Martin said.
Adoption of new technologies is based in part on the value of the problem being solved (the cure for cancer will sell itself). But in very large part, it is based on the ability of humans to metabolise the new technology rather than reject it because the prospect of change feels too disruptive or scary.
Newsrooms, and news publishing in general, have been so disrupted in recent years that these teams are generally change-averse, and the perception of so much upheaval in our industry has made them even more so.
In this respect, cultural adoption of AI is probably, ultimately, the largest goal served by Aftonbladet’s project. The tools are engaging, and the features for end users are well-received. This matters. But it matters even more to bring in the new capabilities in a manner that is humane, exciting, and safe.
Further afield on the wide, wide Web
Some good reads from the wider world of data:
- <heavy sigh> Here’s Sports Illustrated, having published AI-generated stories by fake AI writers given pseudo-identities, with no disclaimers, and getting caught doing it. The parent company of Sports Illustrated said it got the articles from a third-party agency, that the content isn’t AI because the third party said it isn’t AI, but also removed the articles. The problem with stories like this one is that they have the potential to cast all news media companies in the same poor light. (Futurism.com) … If you want the NYT’s take on it, there’s that too. (NYT)
- Is it generative AI if what it generates is food? A takeout restaurant delivering via the delivery apps has been using a robot arm to make the meals, undetected for months. It’s actually legal. (Eater.com)
- The team working on the Associated Press’s Local News AI initiative (which I have written about) has a new survey asking our industry questions related to generative AI. “We would appreciate your insights on how tools like ChatGPT and Midjourney might factor into your news value chain or if your newsroom forbids the use of these tools,” they ask. I’m happy to oblige by sharing the link to the survey and encouraging you to take part. (The Associated Press)
- As a recovering founder of a defunct venture-backed startup, I get a significant part of my newsletters and LinkedIn feed from the money side of Silicon Valley. This one is the 115-slide deck from Coatue on AI, covering everything from adoption to projections of AI’s future progress to investment in the field. It’s definitely a very Silicon Valley VC read on things, but other than the place whence it came, it’s an excellent overview of this space. I would recommend it to the c-suite among us (and to the nerds who like charts; I see you and I appreciate you). (Coatue)
- I must be in my corporate blogs era. The British law firm Slaughter and May looks at what it means that OpenAI is offering to indemnify and cover the legal costs of paid, licensed users of its products should they be taken to court over copyright infringement for using OpenAI products. (Microsoft announced similar provisions in September for users of its Copilot product.) The main takeaway: There are limits to the indemnification, both in terms of legal scope and in terms of maximum amount. As the firm states, “So in some ways, the risk still sits with the user.” (Slaughter and May)
- This week’s FAWWW ends with the meatiest item. In fact, I had held it back from a previous FAWWW because it was a lot, but worth it. This comes from The Gradient, an academic publication about AI, but it’s not a research paper. Rather, it’s a well-researched perspective on Artificial Intelligence in the context of centuries of technical progress and increased automation: the gains we actually make and the tradeoffs they imply, and what we still don’t automate despite the progress we’ve made. “Why transformative Artificial Intelligence is really hard to achieve.” (The Gradient)
About this newsletter
Today’s newsletter is written by Ariane Bernard, a Paris- and New York-based consultant who focuses on publishing utilities and data products, and is the CEO of a young incubated company, Helio.cloud.
This newsletter is part of the INMA Smart Data Initiative. You can e-mail me at Ariane.Bernard@inma.org with thoughts, suggestions, and questions. Also, sign up to our Slack channel.