Relative ranking is an imperfect system for judging news articles
Smart Data Initiative Blog | 24 October 2022
In my previous newsletter, I wrote about whether we are chasing the “right” thing when we chase simple engagement KPIs (pageviews, scroll depth, etc.). Whether you’re a product manager looking to understand habituation (there is, of course, no single metric for habituation) or a newsroom editor trying to gauge interest in your organisation’s coverage, the ambiguity of looking at your data comes down to a question: is that even the right data?
But then, there is the question of how we analyse the data itself, which is what this week’s newsletter is about.
To explain: We spend a lot of time looking at rankings when it comes to analytics. Top 10 lists, Most Read/Best Performing, etc.
Two reasons:
- Lists pack a lot of information into a simple form. Understanding things relative to each other is far easier than judging their importance against an absolute scale. Anyone understands “A is bigger than B,” and you need to understand neither A nor B to understand this sentence. However, “A got a score of 75” requires that we understand what 75 means.
- We all have an interest in leveraging the Pareto principle, which you may know as the 80/20 rule. We know that focusing a fraction of our effort on just a few things can have outsized impact, because not everything has equal potential for impact in the first place. So we look at a Top 10 list, and we think “that’s really where most of our money is, anyway.” And that’s true.
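To make that concrete, here is a minimal sketch of how you might check that concentration in your own data. The article names and pageview counts below are invented for illustration; the point is simply that a handful of articles typically captures most of the traffic.

```python
# Minimal sketch: how concentrated is traffic in the top articles?
# The article names and pageview counts are invented for illustration.
pageviews = {
    "article_a": 92_000, "article_b": 61_000, "article_c": 44_000,
    "article_d": 30_000, "article_e": 12_000, "article_f": 9_500,
    "article_g": 4_100, "article_h": 2_300, "article_i": 1_800,
    "article_j": 900, "article_k": 700, "article_l": 400,
}

total = sum(pageviews.values())
ranked = sorted(pageviews.values(), reverse=True)

# Share of all pageviews captured by the top N articles.
top_n = 10
top_share = sum(ranked[:top_n]) / total
print(f"Top {top_n} articles: {top_share:.0%} of all pageviews")
```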
Yet there are false equivalencies and missed opportunities in assessing things through that relative lens, because we often overlook how the data has been handicapped in the first place.
Handicaps can be:
- The amount of promotion your content received. Did it make the homepage? Was it heavily promoted on social?
- The clarity of the headline. For a news-y article, the headline may contain a significant amount of the information, and readers may be satisfied with scanning it rather than giving you a click to explore further.
- How trendy the article is. Is it flash-in-the-pan traffic, or a long-term contributor that will be part of your long tail?
“I no longer want to show Top 10s,” said Janis Kitzhofer, head of editorial analysis at Axel Springer in Germany. “I show people individual articles and I ask them, ‘Do you think this is a good performance or not?’ and we look at individual metrics.”
In shifting the conversation to the individual performance of an article, Janis is really relying on baselines. When we start to recognise “this is good,” it’s because we’ve internalised what good even is. And we also factor in what we know about the handicaps of this article: “Yes, this is good performance, but considering this article sat high on the homepage for 24 hours, maybe it’s only just OK.”
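Janis’s actual setup isn’t described in detail, but here is one way to sketch the baseline idea: build an expected pageview figure from past articles with comparable homepage exposure, then compare the article in question against it. Everything below (the numbers, the “hours on homepage” handicap, the four-hour bucketing) is an invented illustration, not Axel Springer’s method.

```python
from statistics import median

# Hypothetical history: (hours on homepage, pageviews) per past article.
# All numbers are invented for illustration.
history = [
    (24, 80_000), (24, 95_000), (20, 70_000), (2, 9_000),
    (1, 4_000), (0, 1_200), (3, 11_000), (22, 60_000),
]

def baseline(hours_on_homepage: int, history) -> float:
    """Median pageviews of past articles with comparable homepage exposure."""
    bucket = [pv for h, pv in history if abs(h - hours_on_homepage) <= 4]
    return median(bucket) if bucket else median(pv for _, pv in history)

# An article that sat high on the homepage for 24 hours:
article_views = 82_000
expected = baseline(24, history)
ratio = article_views / expected
print(f"Actual: {article_views:,}  Baseline: {expected:,.0f}  Ratio: {ratio:.2f}")
# A ratio near 1.0 reads as "only just OK" given the heavy promotion,
# even though the raw number would top a Most Read list.
```

The design choice here is the whole point of the baseline approach: the comparison group shares the article’s handicaps, so “good” means “good for an article that got this much promotion,” not “good in the abstract.”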
If you’d like to subscribe to my bi-weekly newsletter, INMA members can do so here.