Unsupervised learning and personalisation are useful but not ideal for journalism

By Ariane Bernard

INMA

New York, Paris

The contention that personalisation can distract from the editorial mission rests on one large assumption: that a personalisation algorithm can only consider all links “equally.” That is, a Web site with one million articles will treat all of those content links as equal to start.

And if there is one thing a newsroom will agree on, it’s this: Not all articles are equal at all. 

Now, this assumption that personalisation of the content selection will treat each article as an equal is correct with unsupervised recommendation. “Unsupervised,” in the context of machine learning, means that the algorithms are working from untagged data. (I come back to the “supervised” approach below.)

Say the recommendation engine is given only articles on the one hand, and user behaviour toward these links on the other. The recommendation engine, no matter what algorithm is used, is never going to understand that some articles are “not equal” (more editorially meaningful) by looking at the article side of things alone, simply because there are no parameters attached to the articles to differentiate them in the first place.

The algorithm will be able to act on the articles based on factors coming from user behaviours, like “highly shared” and “good scroll depth.” But it doesn’t recognise importance, just popularity. So the article about Beyoncé is likely to rank above the deeply reported investigation into corruption at some institution.
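To make this concrete, here is a toy sketch in Python of what such an engine actually sees. The article IDs, event types, and weights are illustrative assumptions, not any publisher’s actual system:

```python
# Toy sketch: a recommender that only sees engagement events.
# All article IDs, event types, and weights are illustrative assumptions.
from collections import defaultdict

# Hypothetical engagement log: (article_id, event_type) pairs.
events = [
    ("beyonce-profile", "share"),
    ("beyonce-profile", "share"),
    ("beyonce-profile", "deep_scroll"),
    ("corruption-investigation", "deep_scroll"),
]

# Arbitrary weights per event type: popularity signals, nothing else.
weights = {"share": 2.0, "deep_scroll": 1.0}

scores = defaultdict(float)
for article_id, event_type in events:
    scores[article_id] += weights[event_type]

# The ranking is driven entirely by measured behaviour. Nothing in the
# input encodes editorial importance, so the celebrity piece outranks
# the investigation.
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)  # ['beyonce-profile', 'corruption-investigation']
```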

The New York Times used a topic extraction algorithm for more successful personalisation.

Unsupervised learning can still produce some very good personalised feeds. The New York Times found several years ago that the Latent Dirichlet Allocation (LDA) topic extraction algorithm was more successful as a recommendation method than others: letting recommendation run on a clustering of articles around topics did correspond to an increase in reading.
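I can’t speak to the Times’ actual implementation, but the general technique is easy to sketch with an off-the-shelf library. Here, scikit-learn’s LDA clusters a made-up four-article corpus into two latent topics, which a recommender could then use to group articles, still without any human tagging:

```python
# Sketch of LDA topic extraction with scikit-learn. The corpus and
# parameters are made up; this is not The New York Times' implementation.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

articles = [
    "parliament votes on the new budget bill",
    "the senate debates the budget amendments",
    "the midfielder scored twice in the cup final",
    "the coach praises the squad after the final",
]

# LDA expects bag-of-words counts as input.
counts = CountVectorizer(stop_words="english").fit_transform(articles)

# Ask for two latent topics (politics vs. sport, roughly).
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)  # rows: articles, columns: topic weights

# A recommender can now cluster articles by dominant topic and suggest
# unread pieces from the topics a reader already engages with.
dominant_topic = doc_topics.argmax(axis=1)
print(dominant_topic)  # e.g. [0 0 1 1]; tiny corpora are noisy, real archives cluster better
```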

Unsupervised learning will eventually produce winners and losers, but they won’t be optimised for quality, because quality is neither a provided input nor a measurable outcome. Clicks, shares, engagement minutes, subscriptions: these are, in analytics terms, “events.” And as events, they are trackable. Since they are trackable, they feed back into the recommendation engine as the success metric of the recommendation.

But quality is not a measurable metric. The model will only correlate with quality by chance (if it does), not by design.
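The feedback loop is worth seeing in miniature. In this toy sketch (all names and numbers invented), whatever gets shown gets engaged with, and whatever gets engaged with gets shown more; quality never enters the loop:

```python
# Toy feedback loop: events are the only success signal, so early winners
# compound. All article IDs and numbers are invented for illustration.
scores = {"beyonce-profile": 1.0, "corruption-investigation": 1.0}

def recommend():
    # Surface the current top-scoring article.
    return max(scores, key=scores.get)

def on_event(article_id, weight=1.0):
    # Trackable events (clicks, shares, engagement minutes) feed straight
    # back in as the success metric of the recommendation.
    scores[article_id] += weight

for _ in range(3):
    shown = recommend()
    on_event(shown)  # assume the reader engages with whatever is shown

# The early leader keeps winning; at no point does anything measure quality.
print(scores)
```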

It follows that if a personalisation strategy wants to use some differentiating factor of article quality (which is another way of saying “not all articles are equal”), this information can only come from a tag, created by the humans who have the most competence in assessing article quality. The approach also works with a score given by humans: a score allows sorting and the subsetting of articles into different value groups. This is an approach NZZ once took with its personalised news app.
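As a sketch of what that looks like in practice: a human-assigned quality score sits next to the behavioural signals, and the ranking blends the two. The field names, weights, and tiers below are my own illustrative assumptions, not NZZ’s actual scheme:

```python
# Sketch of blending a human-assigned quality score into the ranking.
# Field names, weights, and thresholds are illustrative assumptions,
# not NZZ's actual scheme.
articles = [
    {"id": "beyonce-profile", "engagement": 9.0, "editor_quality": 2},
    {"id": "corruption-investigation", "engagement": 3.0, "editor_quality": 5},
]

def blended_score(article, quality_weight=3.0):
    """Combine behavioural popularity with the editors' quality score."""
    return article["engagement"] + quality_weight * article["editor_quality"]

# A score also allows subsetting into value groups, e.g. a premium tier.
premium = [a for a in articles if a["editor_quality"] >= 4]

ranked = sorted(articles, key=blended_score, reverse=True)
print([a["id"] for a in ranked])
# ['corruption-investigation', 'beyonce-profile']: the investigation now leads.
```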

More on that in my next blog.

If you’d like to subscribe to my bi-weekly newsletter, INMA members can do so here.
