Schibsted has been looking for ways to leverage its data to provide more value to its customers. Over the last two years, the company has worked to prove the hypothesis that there was unused potential in the data it was collecting from users across its 55 different brands.
Agnes Stenbom, responsible data and AI specialist, said Schibsted put together a data collaboration team of four people, or "enablers," to construct the building blocks needed to support responsible data practices throughout the company.
Schibsted as the controller. Stenbom said it's important for customers to know Schibsted is responsible for collecting their personal data and keeping it safe.
Schibsted account. The company also wanted the same login system across all its brands. The benefits of this go beyond making things easier for the user, Stenbom said: "We also get to know that account better as it travels across our sites."
Common tracking. The system, named “Pulse,” collects data in streamlined ways across the group.
Common data warehouse. Schibsted also wanted a shared way to store and manage its data.
AI is not a toy
Once the Schibsted team had the data building blocks in place, it got to work on AI. Schibsted thought it important not to fall into the trap of treating AI as a "shiny new toy" but to use it to create concrete value.
“Without data, AI can’t exist and with poorly managed data, AI won’t thrive,” Stenbom said.
Schibsted approached AI not as a subfield of its data strategy but as something needing its own strategy entirely. The team is currently finalising its AI strategy and is about to launch it internally, so Stenbom offered INMA members a sneak peek into the five areas where Schibsted sees AI creating value:
Streamlining and automation of work processes. Stenbom said this is useful for the company's personal finance sites, where it can use human resources more efficiently.
Content creation and journalistic support. Schibsted found that automating paywalls and tagging, combined with several other AI functions, saved journalists considerable time and energy.
Predictive insights. Schibsted created an AI project known as its "audience targeting engine." The project helped the company predict the age, gender, and geography of its non-login users and allowed it to personalise the site experience for them.
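The article does not describe how the audience targeting engine works internally, so the following is a hypothetical sketch of the general idea: inferring a demographic segment for a non-logged-in user from behavioural signals, here with a simple k-nearest-neighbour vote over users whose segment is known (e.g. logged-in account holders). The function and feature names are illustrative, not Schibsted's.

```python
from collections import Counter
import math

def predict_segment(labelled, features, k=3):
    """Guess a demographic segment for an unknown user.

    labelled: list of (feature_vector, segment) pairs for known users.
    features: behavioural feature vector for the unknown user.
    Returns the majority segment among the k nearest known users.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    nearest = sorted(labelled, key=lambda item: dist(item[0], features))[:k]
    votes = Counter(segment for _, segment in nearest)
    return votes.most_common(1)[0][0]

# Toy behavioural features: (share of sports pages read, share of evening visits)
labelled_users = [
    ((0.9, 0.2), "male 25-34"),
    ((0.8, 0.3), "male 25-34"),
    ((0.1, 0.7), "female 45-54"),
    ((0.2, 0.8), "female 45-54"),
]
print(predict_segment(labelled_users, (0.85, 0.25)))  # -> male 25-34
```

A production system would of course use richer features and a trained model, but the input/output shape is the same: behavioural signals in, predicted segment out.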
Personalisation and recommendation. Speaking of personalisation, Schibsted found AI useful in creating specific value for users in its marketplace and news media content areas.
Sustainability and inclusion. Schibsted leverages AI in a very interesting way in how it prints its newspapers, Stenbom said: “Every day we do a calculation of how many newspapers we are likely to sell at each specific vendor that we have and print as many papers as we will actually sell.” This helps reduce waste and unnecessary transportation costs.
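Stenbom's quote outlines the daily calculation but not the model behind it, so here is a minimal sketch of that idea under the simplest possible assumption: forecast each vendor's demand from its recent sales and print only that many copies. The vendor names and the rounded-mean forecast are illustrative assumptions, not Schibsted's actual method.

```python
# Copies sold per vendor over the last few days (illustrative numbers).
recent_sales = {
    "vendor_a": [120, 115, 130],
    "vendor_b": [40, 38, 45],
}

def print_run(history):
    """Forecast tomorrow's demand as the rounded mean of recent sales."""
    return round(sum(history) / len(history))

# The daily calculation: one print-run figure per vendor.
plan = {vendor: print_run(sales) for vendor, sales in recent_sales.items()}
print(plan)  # -> {'vendor_a': 122, 'vendor_b': 41}
```

Even this naive version captures the waste-reduction logic: every copy not printed is paper and transportation saved, so the forecast only needs to beat a fixed over-supply policy to pay off.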
Challenges of AI
Stenbom also wants other companies to be aware of the challenges of using AI in business. Some of the most notable are filter bubbles, bias in your algorithms, and loss of human jobs. Stenbom shared a case study that ultimately was not successful by Schibsted’s standards:
Schibsted was looking to innovate SEO practices at leading Swedish newspaper company Aftonbladet. The company was asking its journalists to write two headlines for each story, a journalistic one and another that was SEO-friendly. Not only did the journalists not like writing two different headlines, but the company found the task time-consuming and inefficient.
So Schibsted conducted a controlled experiment, creating a language generation model and training it to generate SEO headlines based on 3,000,000 archive articles. The model began generating headlines the way the team expected, based on the parameters it had set, Stenbom said.
“This is really exciting stuff when you’re working with something new and you’re seeing that your model is actually working in practice,” Stenbom said.
It wasn’t all exciting, though. Schibsted found that the model’s generated SEO headlines sometimes strayed from the original meaning of the story or contained factual inaccuracies.
Schibsted considered the experiment "failed": it was looking for 80% accuracy in the generated SEO headlines but the model achieved only 65%. The company decided to refocus its AI efforts in other areas like contextual advertising.
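The pass/fail decision above comes down to a simple threshold check. This sketch restates that arithmetic with the figures reported in the article (65% achieved vs. an 80% bar); the function name and the headline counts are illustrative assumptions.

```python
def meets_bar(acceptable, total, threshold=0.80):
    """True if the share of acceptable generated headlines meets the target."""
    return acceptable / total >= threshold

# Illustrative counts matching the reported 65% accuracy:
print(meets_bar(65, 100))  # -> False: experiment considered "failed"
print(meets_bar(80, 100))  # -> True: would have met the 80% target
```

The interesting editorial choice is the threshold itself: at 65%, roughly one generated headline in three needed human correction, which erodes the time savings the project was meant to deliver.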
Over the two years of the project, Schibsted was not as worried about the risks of AI and data as it is now. To provide accessible ways to manage risk, the company uses what it calls its “FAST” framework: Fairness, Accountability, Sustainability, and Transparency.
“If we want to capture all that potential we see ahead of us, we really need to find a way of managing these risks and ensuring that we don’t end up building systems that we can’t stand for,” Stenbom said.
Not surprisingly, Schibsted found the biggest risks are closely related to the areas where it’s growing the fastest.
“Seeking to ensure responsible practices is not just about this abstract idea of an ethical society,” Stenbom said. “Incorporating responsible AI practices into data and AI development, that’s fundamentally about building products and services people like and enjoy and find worth paying for.”
This case study originally appeared in the INMA report, The Guide to Smart Data Strategy in Media.