KStA increases engagement from homepage with recommender system

By Robert Zilz

KStA digitale Medien

Cologne, NRW, Germany


In today’s world, we are inundated with an overwhelming amount of information and news media, making it difficult to filter through and find content that truly resonates with us.

This is where the power of recommender systems comes in.

By utilising sophisticated algorithms and machine learning techniques, a recommender system for news media can help us discover new and relevant content that matches our interests, preferences, and browsing history. With such a system, we can stay informed and engaged with the latest news and information that matters most to us, and in turn, make more informed decisions about the world around us and revolutionise the way we consume news media.

Did you realise that the above text was written by AI? These days, you can’t address generative AI without addressing ChatGPT. It neatly captures the ultimate goal we wanted to achieve with this initial project towards AI-driven recommendations: no user misses out on any article that is relevant to them.

With ksta.de, the leading regional news site for Cologne and the surrounding areas, we quickly identified the perfect case for running and testing our machine-learning recommendation service. Our vision was to develop a recommender capable of driving user engagement that we could place on our homepage and make available to all users — without any barriers such as logins.

Regional news site KStA wanted to develop a recommender capable of driving user engagement from its homepage.

This had to be done during a phase of massive changes to the product itself, including a relaunch and the implementation of a whole new front-end framework. Not the easiest task to manage, but with the right people and technology, we knew the sky was the limit.

Start with a data platform

The backbone of our recommendation engine is a data platform where we’re constantly collecting behavioural data from our sites and users, as well as metadata information about the content itself. This means we can easily analyse what users are consuming and what they seem to be interested in.

This platform is highly scalable and customisable for current and future needs, regardless of whether it is an AI application, data science workflow, or dashboarding use case.

For the recommender itself, we set up a machine-learning model to find interest clusters across all of our users. The output is based on a collaborative filtering approach, combined with a set of rules identified through preliminary data analysis to address common issues such as the cold-start problem and to place paid articles alongside free ones.
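The article doesn't disclose KStA's actual model, so purely as an illustration, here is a minimal sketch of user-based collaborative filtering with a popularity fallback standing in for the kind of cold-start rule described above. All data, names, and parameters are hypothetical:

```python
import numpy as np

def recommend(interactions, user_idx, k=2, n_rec=2):
    """User-based collaborative filtering with a popularity fallback.

    interactions: binary user x item matrix (1 = user read the article).
    Cold-start rule: a user with no history gets the most-read articles.
    """
    user_vec = interactions[user_idx]
    if user_vec.sum() == 0:
        # Cold start: fall back to global popularity.
        return np.argsort(-interactions.sum(axis=0))[:n_rec].tolist()

    # Cosine similarity between this user and every other user.
    norms = np.linalg.norm(interactions, axis=1) * np.linalg.norm(user_vec)
    norms[norms == 0] = 1e-9          # avoid division by zero for empty users
    sims = interactions @ user_vec / norms
    sims[user_idx] = 0.0              # ignore self-similarity

    # Score items by the similarity-weighted reads of the k nearest users.
    neighbours = np.argsort(-sims)[:k]
    scores = sims[neighbours] @ interactions[neighbours]
    scores[user_vec > 0] = -np.inf    # never re-recommend an already-read article
    return np.argsort(-scores)[:n_rec].tolist()

# Toy matrix: 4 users x 5 articles (entirely made-up data).
reads = np.array([
    [1, 1, 0, 0, 0],
    [1, 1, 1, 0, 1],
    [1, 0, 1, 1, 0],
    [0, 0, 0, 0, 0],   # a brand-new user with no history
])
print(recommend(reads, user_idx=0))           # → [2, 4]
print(recommend(reads, user_idx=3, n_rec=1))  # → [0]
```

The popularity fallback is one simple way to express a cold-start rule in code; a production system would typically layer further business rules on top, such as mixing paid and free articles.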

The whole team also thought intensively about the recalculation of the collaborative user/item matrix — not the easiest task when the service has to be recomputed every 15 minutes over a massive amount of user and item data.
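One common way to tame such a 15-minute recomputation cycle (sketched here as an assumption, not as KStA's actual pipeline) is to fold only the events that arrived since the last run into existing co-occurrence counts, rather than rebuilding the full matrix from scratch each time:

```python
def ingest(matrix, seen_by_user, events):
    """Fold (user, item) read events into item co-occurrence counts in place."""
    for user, item in events:
        seen = seen_by_user.setdefault(user, set())
        if item in seen:
            continue                       # ignore repeat reads of the same article
        for other in seen:
            matrix[item][other] += 1
            matrix[other][item] += 1
        seen.add(item)

# Two hypothetical 15-minute batches of (user, item) read events.
batch1 = [(0, 0), (0, 1), (1, 1)]
batch2 = [(1, 2), (0, 2)]

# Full rebuild over all events...
full = [[0] * 3 for _ in range(3)]
ingest(full, {}, batch1 + batch2)

# ...versus folding the second batch into the state left by the first run.
inc, state = [[0] * 3 for _ in range(3)], {}
ingest(inc, state, batch1)
ingest(inc, state, batch2)

assert full == inc   # incremental folding matches a full recompute
```

Each run then only pays for the new events rather than the entire history, which is what makes a short recomputation cadence feasible at scale.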

In creating the recommender, the team had to consider elements like the recalculation/computing of the collaborative user/item matrix.

Setup and data-driven evaluation

The project was a joint effort between several teams: data, product, tech, and an external partner. The first iteration of the recommender was designed for an area at the front page called “Für mich empfohlen,” consisting of six teasers overall.

So how do you evaluate the performance of such a service? We defined the CTR (click-through rate) of these teasers as our main KPI to monitor, along with some quality-oriented metrics such as article reading behaviour.

The goal, of course, was to maximise the CTR and the other quality KPIs, but what do you compare against?

We decided to set up an A/B test — editorial team vs. recommender — in which 50% of the users visiting the front page were randomly assigned to group A and the other half to group B. The experiment ran for about six weeks, and we continuously monitored performance.

So what happened?

Let’s get to the exciting part: the results.

Honestly, there is always uncertainty when it comes to predicting the outcome of new and innovative projects like this. We pre-evaluated our approach with data as thoroughly as possible, but an element of uncertainty remains. That is something you must be aware of and embed deeply in your culture when it comes to AI.

However, the results were super promising: up to 80% uplift in CTR compared to manual curation of content, and up to 13% uplift in user engagement for the recommended articles. This initial step for our data platform and recommendation application showed us that our data enables us to successfully identify users’ interests and improve their experience.

Looking to the future, these outcomes are very promising for the use of AI within our products. Besides developing new AI-driven applications, we are focusing on iterative workflows to tweak existing applications, find highly rewarding features, and help shape the data-driven and AI culture of tomorrow.
