Aftonbladet has created a short-term AI innovation and efficiency team

By Ariane Bernard

INMA

New York City, Paris


Hi everyone.

This week, I chatted with the good folks at Aftonbladet in Sweden to hear how they were embarking on their AI cross-functional team. A big mandate, a limited amount of time, a diverse set of folks — it’s off to the races, and I hope you find this inspiring in your own organisations.

Until the next one, all my best,

Ariane

Aftonbladet forms team for six-month AI project

We’re coming up on a year since OpenAI’s ChatGPT became available to the public, and a lot has happened since then. Billions were raised in venture capital (that actually is the most significant part); new chatbots came online; LLMs (large language models, the underpinning technology of AI chatbots) have multiplied; and the ink has flowed freely, with headlines ranging from fawning futurism to AI doomsday. 

What a ride. I’m exhausted.

All in all, a great year for fans of technology news. From our corner of the world, we’ve also advanced from an era of strategic memos to AI ethics charters. And now we’re entering a new phase where many publishers have begun to roll out new GenAI-powered tools to their internal teams and are scaling up their efforts to test the adoption of GenAI in new parts of the business. We’ve certainly seen a few of these excellent examples at INMA’s Smart Data Initiative Master Class series, which just wrapped up last week. 

One way I am seeing publishers scale up their adoption of AI throughout the organisation is by setting up cross-functional teams and task forces. Just a couple of weeks ago, the daily Aftonbladet in Sweden (part of the Schibsted group) announced it had formed a new team of nine from across the organisation to examine how and where to bring AI to enable innovation and efficiency in all corners of the company.

I reached out to Moa Gårdh, the director of product and UX for Aftonbladet, who is also representing the product side for this team, to ask about how they were embarking on this adventure and where they were directing their efforts.

Building the team and getting started

The cross-functional team at Aftonbladet actually stemmed from an earlier initiative that started in the spring, which leveraged several themed workstreams. 

“We invited the entire organisation,” Moa said. “We had seven different workstreams. Some went for content, some went for user experience, some for visuals.” But while the workstreams allowed for a lot of ideas to be expressed, this couldn’t quite scale in duration since folks were doing this on top of their regular jobs.

The idea of a cross-functional team was born precisely so that actual, dedicated focus could be enshrined in people’s responsibilities, but it is meant to be a time-boxed exercise. Members applied internally to join the task force, creating a group that includes members of the newsroom, a print-focused specialist, a podcast specialist, two developers, a UX designer, and a product lead. 

At the moment, Moa said, this is meant to be a six-month project for the nine team members. It could be extended, but the assumption is that the team has six months. As an element of scale, Aftonbladet has between 250 and 300 employees, Moa said.

“The team is going to get started by interviewing different departments across the organisation to see what kind of problems they are having or what kind of KPIs they need help with,” she said. “And then make an assessment of, like, do you think this is something we could solve quickly? Is this a low-hanging fruit or is there a tool we could just use like ChatGPT — or that we could train and we train them and that’s the only thing that’s required. We could build tools, but there is also a lot of stuff to buy.”

This is something that came up a few times in our chat: a focus on leveraging tools that already exist, including commercial tools, rather than defaulting to internal development. This is a smart position in general: Assume your problems aren’t very different from those of other organisations and that there are probably already tools out there for this problem space. 

But even more so with AI: To do anything useful at all, you need a lot of development, a lot of data, a lot of computing power. If you think a paywall, for example, is complicated to build, consider that it is orders of magnitude more straightforward than an LLM.

While some publishers build their own paywalls, many more buy from vendors — that should absolutely be the assumption with AI. Even if you do end up customising something, the approach that Moa and her team are starting with — which is to leverage technology that has already been built and tools that have already been tested — will basically ensure their time to market is something workable and the costs of this remain sane.

The new team at Aftonbladet was created to work within a six-month time frame.

I asked Moa about the team’s high-level goals and its success metrics. 

On the high-level goals: “We want the organisation to be more innovative and efficient. We want to focus on more in-depth journalism, and be more creative, instead of worrying about things like SEO. We want to get rid of stuff like that,” she said. But success lies elsewhere: “Everyone needs to be on board. We don’t want to leave people behind. We want to measure both where we start and where we end up.”

There are three main areas where Moa sees the team focusing their investigation because they are ripe for AI to support. One is end-user focused: “There are a lot of people in Sweden who don’t speak Swedish … . So how can we reach new target groups?” Moa asked.

And two others are internal uses. One is around certain efficiencies in the newsroom: “good titles, timelines for stories, and helping journalists work through large documents.” The other internal use is oriented at the business side: “We do a lot of user interviews,” Moa said. “It takes a lot of time, and we feel we can work with this much more efficiently to extract insights. But we also want to do a database of all these insights and make them accessible beyond the product/UX team, via a chatbot.”

Sizing possible impact 

Any project with a strong focus on discovery has a harder time sizing the opportunity it is shooting for. Logically, this makes ROI harder to predict. I asked Moa if she felt there was a way to project the size of the upsides. Moa had an interesting approach to finding a proxy for the productivity gains they could hope for from this round of AI improvements.

“When I was working to convince my boss, I tried to bring evidence that we would be more efficient. I spoke to my developers who are working with AI all the time, like folks who use [coding] co-pilot features. They said, ‘I’m 15% more efficient with AI.’ And the Boston Consulting Group is A/B testing groups working with AI versus not, and what they are saying is that they are not just working faster, but they are also coming up with better ideas.”

In the end, presenting to her manager, Moa used that 15% figure as an example. “What happens if we became 15% faster or 15% more innovative, or if we grow our traffic 15% because we now serve these underserved segments. That’s a lot.”
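Moa’s 15% figure is easy to turn into a back-of-envelope sizing exercise when pitching a project like this. A minimal sketch, using purely illustrative numbers (the headcount is the midpoint of the 250-300 staff figure above; the 40-hour week and company-wide applicability are my assumptions, not Aftonbladet’s):

```python
# Back-of-envelope sizing of a hypothetical 15% efficiency gain.
# All inputs are illustrative assumptions, not Aftonbladet's figures.

def hours_freed(headcount: int, hours_per_week: float, uplift: float) -> float:
    """Weekly hours freed if each person becomes `uplift` more efficient.

    A worker who is 15% more efficient produces the same output in
    1 / 1.15 of the time, freeing up the remainder of their week.
    """
    time_fraction_needed = 1 / (1 + uplift)
    return headcount * hours_per_week * (1 - time_fraction_needed)

# Example: 275 employees, 40-hour weeks, 15% uplift.
freed = hours_freed(275, 40.0, 0.15)
print(round(freed))  # roughly 1435 hours per week
```

Even heavily discounted (only some roles benefit, only some tasks are AI-amenable), a number like that gives a manager something concrete to weigh against the cost of a nine-person, six-month team.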

Further afield on the wide, wide Web 

Some good reads from the wider world of data. You’d think I would try to group these items by theme, but I want you to experience the wild trips I take around the Internet for you.

  • It’s machine learning but it’s not generative AI: The New York Times shares some of its learnings from building a personalised homepage in its NYT Cooking app. It’s in part using vectors to detect similar recipes and in part collaborative filtering. (NYT Open Blog)
  • A new large language model named Latimer is being dubbed the “BlackGPT” for having been trained with special care for inclusivity and with more extensive grounding in racially diverse content than other models. It is based on Meta’s Llama, which is open source. For media companies, there is much to look at here: comparing outcomes across different large language models and figuring out which LLM is good for which task is likely to provide paths to better results. (POCIT, People of Color in Tech)
  • My people, the Swifties, ate good this month of October — a new movie, an album rerelease — but also there’s a duet of Harry Styles and Taylor Swift made by AI that’s making the rounds on TikTok. I won’t share this because I see the editor sharpening her knife nearby BUT this is all to say that my favourite AI-related topic, which is the questions of rights and authorship, is ever so burning. Over at Digiday, a roundup of several recent cases of deep fakes and concerns from rights holders over AI — whether used in the context of advertising (fake endorsements from famous people) or to train a system in creating advertising. (Digiday)
  • Three academics push back on the claim that generative AI is making misinformation more abundant, higher quality, or more effectively personalised. Sacha Altay of the University of Zurich, Hugo Mercier of the CNRS in France (and various other affiliations), and Felix M. Simon of Oxford University report what they have observed of actual misinformation powered by AI in a paper published by the Harvard Kennedy School. (Shorenstein Center on Media, Politics, and Public Policy)
  • “Generative AI’s Act 2”: Great foundation-type paper for us all — coming from a source with a specific vantage point, shall we say. The venture capital firm Sequoia reviews the field of generative AI — both the business of it and the use cases, and how the market has moved toward adoption. It’s also full of numbers if you’re trying to make a case to your boss. (Sequoia Capital)
  • Non-journalism but interesting AI-science news, Reddit delivers as it often does: Courtesy of r/science, “a 2000-year-old practice by Chinese herbalists — examining the human tongue for signs of disease — is now being used with machine learning and AI. It is possible to diagnose with 80% accuracy more than 10 diseases based on tongue colour. A new study achieved 94% accuracy with three diseases.” (University of South Australia). Meanwhile, the NYT reports on new technology that allows surgeons to use AI while operating on people’s brains to determine how much of a tumor to remove. (NYT)
  • Yle, Finland's national broadcaster, released its principles for the responsible use of AI. The first one is that there is always a human in the driver’s seat. (Yle)
  • Generative AI enters the collection at the Museum of Modern Art in New York: The artwork “Unsupervised” by Refik Anadol, which had been on display for a few months (a mesmerizing shifting thing), was acquired by the museum. (MoMA, Artist statement)
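The NYT Cooking item above mentions using vectors to find similar recipes. The core idea — embed each item as a vector and rank neighbours by cosine similarity — can be sketched in a few lines. The recipes and embedding values below are invented for illustration; real systems learn embeddings from recipe text or user behaviour:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: 1.0 for identical directions, 0.0 for orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy embeddings; the three dimensions are meaningless stand-ins
# for whatever a real model would learn.
recipes = {
    "weeknight pasta": [0.9, 0.1, 0.2],
    "quick stir-fry":  [0.8, 0.2, 0.3],
    "holiday roast":   [0.1, 0.9, 0.7],
}

query = recipes["weeknight pasta"]
ranked = sorted(
    (name for name in recipes if name != "weeknight pasta"),
    key=lambda name: cosine(query, recipes[name]),
    reverse=True,
)
print(ranked[0])  # "quick stir-fry" scores closest to the pasta dish
```

Collaborative filtering, the other technique the NYT mentions, works differently — it leans on co-occurrence in user behaviour rather than item content — but both ultimately reduce to ranking candidates by a similarity score.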

About this newsletter

Today’s newsletter is written by Ariane Bernard, a Paris- and New York-based consultant who focuses on publishing utilities and data products, and is the CEO of a young incubated company, Helio.cloud.

This newsletter is part of the INMA Smart Data Initiative. You can e-mail me at Ariane.Bernard@inma.org with thoughts, suggestions, and questions. Also, sign up to our Slack channel.

