As our publishers become increasingly dependent on revenue from digital users, Schibsted is investing in technology and data tools to help us grow our digital subscriber base. One such solution is our award-winning Subscription Purchase Prediction Model.
We have previously shared how we used this model to optimise marketing on Facebook, with experiments showing significantly improved conversion rates and ROI. We have also used the model to target users on our own sites and to improve telemarketing conversion rates by 540%.
The model predicts the likelihood of an individual user purchasing a subscription, based on their behaviour on our websites and apps. To do this, we train a machine learning algorithm on a data set of all logged-in users from a given observation period, during which they do not have an active subscription; some of them then go on to subscribe in the following target period.
The algorithm learns the difference in behaviour patterns between those who do not purchase and those who do purchase during the target period.
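The observation/target split described above can be sketched in a few lines. This is an illustrative reconstruction, not Schibsted's actual pipeline: the period dates, field names, and the `label_user` helper are all hypothetical.

```python
from datetime import date

# Hypothetical observation and target periods.
OBS_START, OBS_END = date(2018, 1, 1), date(2018, 3, 31)
TGT_START, TGT_END = date(2018, 4, 1), date(2018, 6, 30)

def label_user(visit_dates, subscription_start):
    """Return a training label for one user, or None if excluded.

    visit_dates: dates on which the user was seen logged in.
    subscription_start: date of first subscription purchase, or None.
    """
    active_in_obs = any(OBS_START <= d <= OBS_END for d in visit_dates)
    subscribed_before_target = (subscription_start is not None
                                and subscription_start <= OBS_END)
    if not active_in_obs or subscribed_before_target:
        return None  # outside the training population
    purchased = (subscription_start is not None
                 and TGT_START <= subscription_start <= TGT_END)
    return 1 if purchased else 0

# A user seen in the observation period who subscribes in the target period:
print(label_user([date(2018, 2, 10)], date(2018, 5, 1)))  # 1
```

With labels built this way, any standard binary classifier can learn to separate the purchasers from the non-purchasers.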
The algorithm crunches its way through many variables. Some of the most useful ones that emerged are obvious, such as recency (how long since we last saw you on our site), frequency (how many days we saw you during the observation period), and the volume of content consumed.
There are also some less obvious signals, such as the proportion of days visited that are weekend days and the number of devices used to visit our site.
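These signals can all be computed directly from raw visit logs. Below is a minimal sketch for a single user; the input fields and function name are our own illustration, not Schibsted's actual schema.

```python
from datetime import date

def behaviour_features(visit_dates, device_ids, obs_end):
    """Compute the behavioural signals described above for one user."""
    days = sorted(set(visit_dates))
    return {
        "recency_days": (obs_end - days[-1]).days,   # days since last visit
        "frequency": len(days),                      # distinct days visited
        "weekend_share": sum(d.weekday() >= 5 for d in days) / len(days),
        "n_devices": len(set(device_ids)),           # distinct devices used
    }

feats = behaviour_features(
    visit_dates=[date(2018, 3, 3), date(2018, 3, 4), date(2018, 3, 26)],
    device_ids=["phone-1", "laptop-1", "phone-1"],
    obs_end=date(2018, 3, 31),
)
print(feats)  # recency_days=5, frequency=3, weekend_share=2/3, n_devices=2
```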
We can test the model on historical data to ensure it performs well, and then we can use that model to make prediction scores for all of our logged-in users today based on their recent behaviour.
The output scores can then be used to optimise our sales initiatives across channels by targeting users who are most likely to purchase a subscription.
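Turning prediction scores into a target list is then a ranking problem: sort users by score and keep the top fraction. A toy example with invented scores (a 10% cutoff, for instance):

```python
def top_fraction(scores, fraction=0.10):
    """Return user ids in the top `fraction` of prediction scores."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    k = max(1, int(len(ranked) * fraction))
    return ranked[:k]

# Invented scores for ten users; real scores come from the trained model.
scores = {f"user_{i}": s for i, s in enumerate(
    [0.02, 0.91, 0.15, 0.08, 0.33, 0.05, 0.47, 0.12, 0.01, 0.26])}

print(top_fraction(scores))  # ['user_1'] -- the single highest-scoring user
```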
The prediction model was originally developed for, and in close collaboration with, Aftenposten, but in a way that makes it easy to scale to our other publishers in Schibsted. We currently have the model running in production on a weekly basis for four of our publishers, with three more planned for roll-out in the coming quarter.
At Aftenposten, we have been carrying out telemarketing to registered users for some time. Previously we’ve done this in a mostly unsegmented fashion, with users chosen randomly to be called. Over time, we have seen a stable average conversion rate of 1%. In other words, out of all the users we contact, 1% of them purchase a subscription.
Earlier this year, we carried out an experiment to see if targeting users for phone calls based on their Subscription Purchase score from our model would yield a higher conversion rate. We prepared two groups for the experiment: one chosen randomly, per our usual practice, and another drawn from the top 10% of users by Subscription Purchase score.
Both groups were contacted in the same way and over the same time period. We were pleased to see that the targeted group converted directly from the telemarketing call at a rate of 6%, six times the rate of the randomly selected group, which converted at the expected baseline of 1%.
Since the experiment, we have started targeting our calls based on Subscription Purchase scores on a weekly basis, increasing the volume of users selected and still maintaining a good average conversion rate of 5.4%.
The results seen at Aftenposten encouraged us to try a similar approach for another of our newspapers, Faedrelandsvennen, where we had not tried telemarketing before. In this case, we have seen a conversion rate of 8.8%.
Another use case for our Subscription Purchase Prediction Model is tailoring users’ news experience and our in-product communication. To test this, we carried out an experiment on Bergens Tidende’s (BT’s) mobile app to see whether targeting an ad for a BT subscription based on the scores would result in higher engagement and conversion.
Two groups were prepared accordingly: a randomly selected control group, and a group of users with high Subscription Purchase scores. The experiment ran for one week, during which both groups would see the ad at the top of their news feed if they opened the BT app on their mobile.
In that time, the number of subscriptions sold via this channel was too small to draw strong conclusions, so we also looked at two other metrics: impression rate and click-through rate. Impression rates were 6.3% for the random group and 28.2% for the targeted group, while click-through rates were 0.7% and 1.3% respectively, with statistical significance at the 95% confidence level.
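The significance check behind numbers like these is typically a two-proportion z-test. A sketch below uses invented sample sizes, since the article reports only the rates, not the group sizes:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z-statistic for the difference between two observed proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# 0.7% vs 1.3% click-through, with an assumed 5,000 users per group.
z = two_proportion_z(35, 5000, 65, 5000)
print(abs(z) > 1.96)  # True -> significant at the 95% level for these n
```

With smaller groups the same rates might not reach significance, which is why the group sizes matter for any such claim.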
This experiment showed that the users the model rates as most likely to buy a subscription were both more likely to open the app and see the ad, and more likely to click the ad when they saw it, demonstrating the potential of using such scores within the product.
Going forward we will scale the model to more of our sites, continue to experiment with new ways of using the model, and automate successful use cases to improve day-to-day operations.
One idea we are looking into is using the scores in dynamic paywalls, to determine the prominence of paid versus free articles in users’ personalised news feeds, and as a signal of intent in our audience-targeting ads offering.