
What important data are our dashboards not gathering?

By Ariane Bernard

INMA

New York City, Paris


Hi all. For the second issue in a row, I’m letting the agenda of my newsletter be driven by conversations I’ve had at conferences. Which means these were successful trips! After all, that’s why one goes to conferences: for inspiration and education. Are you back on the circuit yourself?

I hope you’ll join Jodie Hopperton, the lead of the product initiative here, and me for our Product and Data for Media Summit, which is just a few weeks out, from November 3 to 17. Virtual this one; no carbon offset necessary.

All my best, Ariane

“Are there aspects of journalism that cannot be captured by data?”

I’m writing this newsletter from Vienna, having just wrapped up a weekend of un-conferencing at Newsgeist Europe, a gathering of journalism, tech, and academic folks convened by Google — this year’s edition was in Bratislava, Slovakia.

Smart people and conversations make for interesting material for this newsletter — in particular, a discussion session brought by the CEO of a German publisher (who will remain unnamed here, as Newsgeist operates under Chatham House rules).

Newsroom dashboards share what their tools are told to track, leaving out many other aspects of journalism we should be tracking.

The topic was this provocative question: “Are there aspects of journalism that cannot be captured by data?” 

The German publisher who offered up this discussion was rightly concerned that dashboards tend to be focused on engagement metrics and the individual performance of articles or other Web site features. And I certainly share the view of our conversation facilitator that there are a lot of pitfalls to keeping our understanding of performance narrowly tied to engagement metrics.

“A lot of what we chase ends up being what our dashboards are set up to display,” our German publisher said. (I am paraphrasing because my notes are embarrassingly bad. This is what happens when the conversation is engaging and this newsletter writer ends up running out of phone battery to take the audio recording).

It’s certainly not the first time I have heard the consternation of folks in our space who are concerned that our understanding of what is good or desirable is shaped far more by what analytics tools are configured to tell us than by refined analysis of what we ought to track.

So in this week’s edition, we are going to look at KPI design and blended metrics.

A KPI can be made of many different metrics.

How KPIs and metrics differ and why it matters

First, a little bit of definition here. We often use the terms KPI and metric interchangeably. But they are very different:

  • A KPI (key performance indicator) is not necessarily one metric. A KPI can be a complex object, made of many things. For example, “company revenue” is a KPI, and it is made of many different streams. 

  • But in analytics, there’s a common conflation. We will call scroll depth, for example, a KPI. But for most publishers I can think of, scroll depth isn’t a KPI — it’s a metric. As in, we don’t care about scroll depth as a single end goal. It’s just one of the many things we look at to understand the performance of (whatever screen we are talking about).

Why does this little bit of lexical distinction matter so much? Because when we look at our dashboards and every reporting screen, we should evaluate whether the thing we are looking at is really a KPI or just one of the metrics participating in a KPI.
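To make the distinction concrete, here is a tiny sketch (the stream names and numbers are invented for illustration): the KPI is the composite figure, while each stream feeding it is merely a metric.

```python
# Invented names and numbers, purely illustrative: "company revenue" is the KPI,
# while each individual stream feeding it is just a metric.
revenue_streams = {
    "subscriptions": 1_200_000,
    "advertising": 800_000,
    "events": 150_000,
}

company_revenue_kpi = sum(revenue_streams.values())
print(company_revenue_kpi)  # 2150000: the KPI is the composite, not any single stream
```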

Here’s a bit of an opinion: Most people who aren’t in data analytics positions (or audience development) would be better served by fewer dashboards of raw analytics. If, as an editor, you worry about scroll depth, you are looking at just one of the many angles of your engagement. Individually, lots of different things can affect scroll depth — so much so that to look at the scroll depth of a given article and say “Yikes, 78% of users don’t scroll” doesn’t contain as much information as it sounds like it does.

Going back to our German publisher. 

They were particularly concerned that entire strategies were being built from these numbers and dashboards, and really, their original question — “Are there aspects of journalism that are not captured by data?” — is a rightful accusation of bad KPI design. If I may get allegorical: our dashboards should be there to give us health updates on the state of the forest. And often, they are set up instead to give minute updates about the status of individual buds on individual trees.

Designing a good KPI is an exercise in walking from product (goals) to data. A goal may be “habituate our user to our site.” Walking backwards from this goal, we may identify that certain activities participate in habituation: for example, how many articles a user reads in a session, the time they spend on an article, whether they share articles.

Breaking these activities down further, we see that certain engagement metrics will speak to these activities happening. We will also consider that these engagement metrics are somewhat relative: Not all our articles and screens are similar, so raw metrics aren’t very useful on their own. But understanding these metrics relative to the specifics of certain screens will be more accurate.

For example, certain highly visual articles are naturally scroll-inducing (Exhibit A: The Daily Mail Web site), while hard news is not as scroll-inducing simply because the classic “inverted pyramid of news” guarantees the lede does the bulk of the job in delivering the topline.  

Identifying what activities participate in the goal (and therefore what metrics, and at what level) is the heart of the exercise, and this is where we return to our data analysis.
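To make that walk backwards concrete, here is a minimal sketch. The goal, activities, and metric names are assumptions I am inventing for the example, not a standard taxonomy:

```python
# Walking backwards from a goal to the metrics that would signal it.
# Goal, activities, and metric names are invented for this example.
habituation_kpi = {
    "goal": "habituate our user to our site",
    "activities": {
        "reads several articles per session": ["articles_per_session"],
        "spends meaningful time on an article": ["median_time_on_article_s"],
        "shares articles": ["shares_per_session"],
    },
}

def metrics_for_goal(kpi: dict) -> list[str]:
    """Flatten a KPI's activities into the metrics we actually need to collect."""
    return [metric for metrics in kpi["activities"].values() for metric in metrics]

print(metrics_for_goal(habituation_kpi))
# ['articles_per_session', 'median_time_on_article_s', 'shares_per_session']
```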

So, you may readily observe two things:

  • Checking up on our granular metrics isn’t wrong, but it’s so granular that it may be more confusing than it’s worth. This feeds the concern that data is missing the point of journalism.

  • Not every metric is necessarily as interesting as the next one, and dashboards don’t do a very good job of representing the relative importance of one metric versus the other. 

Blended metrics offer a better analysis of what we need

This is why I am always so happy to hear about publishers who move away from dashboards of raw analytics toward blended metrics.

Blended metrics are usually described as scores (not all scores are blended metrics, but all blended metrics are scores).

The very idea of blended metrics is to combine the various metrics that together make up a more useful KPI, and to allow your data department to weight the component metrics by taking into account certain characteristics of the screen where you are collecting them.

If I take the example of my magazine article versus my hard news article, a blended metric may use scroll depth, weight this raw metric differently depending on the type of article (expecting that a lower scroll-depth baseline is normal for hard news articles), and further consider time spent, to more fairly take into account the presence of visual items that may hold the attention of our users even if they do not induce scroll.
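Here is a minimal sketch of what such a blend could look like in practice. The baselines, weights, and metric names below are assumptions made up for this example, not anyone’s production formula:

```python
# A blended engagement score, normalized against per-article-type baselines.
# Baselines, weights, and metric names are invented for illustration.
BASELINES = {
    "hard_news": {"scroll_depth": 0.35, "time_on_page_s": 45},
    "magazine": {"scroll_depth": 0.70, "time_on_page_s": 120},
}

WEIGHTS = {"scroll_depth": 0.5, "time_on_page_s": 0.5}

def blended_engagement(article_type: str, scroll_depth: float, time_on_page_s: float) -> float:
    """Score a page view relative to the baseline for its article type.

    A score of 1.0 means performing exactly at baseline; higher is better.
    """
    baseline = BASELINES[article_type]
    observed = {"scroll_depth": scroll_depth, "time_on_page_s": time_on_page_s}
    return sum(WEIGHTS[m] * (observed[m] / baseline[m]) for m in WEIGHTS)

# A hard news view with modest scroll but solid time on page still lands around baseline:
print(round(blended_engagement("hard_news", scroll_depth=0.30, time_on_page_s=60), 2))  # ~1.1
```

The point is not this particular formula: it is that the baselines and the weights become explicit, documented decisions rather than whatever the analytics tool happens to surface by default.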

The Financial Times has a blended metric called "Quality Reads" that determines whether an article could sustain reader habit.

Here’s a great example, from the Financial Times:

The FT created what has become a famous score, which they call RFV (recency, frequency, volume). Now, this is a blended metric for a user. But they also have another one called “Quality Reads,” which blends various metrics to determine whether an article is creating the kind of engaged behaviour that sustains habituation.

Quality Reads couldn’t exist as anything other than a blend. The raw engagement metrics Quality Reads is built out of are processed (against baselines) so the single score you end up with speaks far more intelligibly about the actual KPI you wanted to track.
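The FT has not published the exact recipe, so the sketch below only shows the general shape of the idea, with a made-up threshold: blend the baseline-adjusted metrics into one score per page view, decide whether that view cleared the bar, and report the share of views that did.

```python
# General shape only: the threshold and the aggregation are assumptions for
# illustration, not the FT's actual Quality Reads definition.
def is_quality_read(blended_score: float, threshold: float = 1.0) -> bool:
    """Classify a single page view: did it clear our 'engaged enough' bar?"""
    return blended_score >= threshold

def quality_read_rate(view_scores: list[float]) -> float:
    """Article-level answer an editor can act on: the share of views that were quality reads."""
    if not view_scores:
        return 0.0
    return sum(is_quality_read(s) for s in view_scores) / len(view_scores)

print(quality_read_rate([0.6, 1.2, 1.4, 0.9]))  # 0.5, i.e. half the views cleared the bar
```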

You never really wanted to track scroll depth. You wanted to track whether your user was spending good time with your article. 

Further afield on the wide, wide Web

The New York Times last week reported on a large-scale experiment LinkedIn ran on its users (this link is a gift link from me and will bypass the paywall) and how the experiment affected the kind of connections LinkedIn users made, which, down the road, had an impact on the job-finding prospects of these members.

The experiment is stirring some controversy over whether the test created more favourable networking prospects for users who were in the test group versus those in the control group.

From another corner of “algorithms manipulating things” (but not people this time), a look at Midjourney, one of several new AI-powered image generators that have emerged in recent months. The author in New York Magazine argues there is a certain usefulness to these fun tools — as an early stage illustration generator.

Sure, it’s meant for art first. But I have to say, I hadn’t thought about how these tools could in fact be useful in a more immediate sense.

About this newsletter

Today’s newsletter is written by Ariane Bernard, a Paris- and New York-based consultant who focuses on publishing utilities and data products, and is the CEO of a young incubated company, Helio.cloud

This newsletter is part of the INMA Smart Data Initiative. You can e-mail me at Ariane.Bernard@inma.org with thoughts, suggestions, and questions. Also, sign up to our Slack channel.

