What important data are our dashboards not gathering?
Smart Data Initiative Blog | 09 October 2022
I’m writing this newsletter from Vienna, having just wrapped up a weekend of un-conferencing at Newsgeist Europe, a gathering of journalism, tech, and academic folks convened by Google — this year’s edition was in Bratislava, Slovakia.
Smart people and conversations make for interesting material for this newsletter — in particular, a discussion session led by the CEO of a German publisher (who will remain unnamed here, as Newsgeist operates under Chatham House rules).
The topic was this provocative question: “Are there aspects of journalism that cannot be captured by data?”
The German publisher who offered up this discussion was rightly concerned that dashboards tend to focus on engagement metrics and the individual performance of articles or other Web site features. And I certainly share the facilitator’s view that there are many pitfalls in tying our understanding of performance narrowly to engagement metrics.
“A lot of what we chase ends up being what our dashboards are set up to display,” our German publisher said. (I am paraphrasing because my notes are embarrassingly bad. This is what happens when the conversation is engaging and this newsletter writer runs out of phone battery before getting an audio recording.)
It’s certainly not the first time I have heard folks in our space voice the concern that our understanding of what is good or desirable is shaped far more by what analytics tools are configured to tell us than by deliberate analysis of what we ought to track.
So in this week’s edition, we are going to look at KPI design and blended metrics.
How KPIs and metrics differ
First, a little bit of definition here. We often use the terms KPI and metric interchangeably. But they are very different:
A KPI (key performance indicator) is not necessarily a single metric. A KPI can be a composite object, made of many underlying metrics. For example, “company revenue” is a KPI, and it is made up of many different revenue streams.
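To make the distinction concrete, here is a minimal Python sketch. The stream names and figures are invented for illustration; the point is only that the KPI is the composite, while each stream is a participating metric.

```python
# A minimal sketch: "revenue" is the KPI, and each stream below is
# just a participating metric. Names and figures are illustrative.

revenue_streams = {
    "subscriptions": 412_000,  # metric: monthly subscription revenue
    "advertising": 380_000,    # metric: monthly advertising revenue
    "events": 54_000,          # metric: monthly events revenue
}

# The KPI is the composite object, not any single metric in it.
revenue_kpi = sum(revenue_streams.values())
print(f"Company revenue (KPI): {revenue_kpi:,}")  # -> Company revenue (KPI): 846,000
```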
But in analytics, there’s a common conflation. We will call scroll depth, for example, a KPI. But for most publishers I can think of, scroll depth isn’t a KPI — it’s a metric. As in, we don’t care about scroll depth as a single end goal. It’s just one of the many things we look at to understand the performance of (whatever screen we are talking about).
Why does this bit of lexical distinction matter so much? Because for our dashboards and every reporting screen, we should evaluate whether the thing we are looking at is really a KPI or a metric that participates in a KPI.
Here’s a bit of an opinion: Most people who aren’t in data analytics positions (or audience development) would be better served by fewer dashboards of raw analytics. If, as an editor, you worry about scroll depth, you are looking at just one of the many angles of your engagement. Lots of different things can individually affect scroll depth — so much so that looking at the scroll depth of a given article and saying “Yikes, 78% of users don’t scroll” carries less information than it sounds like it does.
Why it matters
Going back to our German publisher.
They were particularly concerned that entire strategies were being built from these numbers and dashboards, and really, their original question — “Are there aspects of journalism that are not captured by data?” — is a rightful accusation of bad KPI design. If I may get allegorical: Our dashboards should be there to give us health updates on the state of our forests. And often, they are instead set up to give minute updates about the status of individual buds on individual trees.
Designing a good KPI is an exercise in walking from product (goals) to data. A goal may be “habituate our user to our site.” Walking backwards from this goal, we may identify that certain activities participate in habituation: for example, how many articles a user reads in a session, the time they spend on an article, whether they share articles.
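To show what walking backwards from a goal might produce, here is a hedged sketch of a blended habituation score built from the activities above. Every ceiling and weight here is a hypothetical placeholder; in practice, these would come out of your own data analysis.

```python
# A sketch of a blended "habituation" KPI built from the activities
# above. Every ceiling and weight here is a hypothetical placeholder.

def habituation_score(articles_per_session: float,
                      avg_seconds_per_article: float,
                      share_rate: float) -> float:
    """Blend several engagement metrics into one habituation score.

    Each metric is normalized to a 0-1 scale against an assumed
    "good" ceiling before weighting, so no raw number dominates.
    """
    depth = min(articles_per_session / 4.0, 1.0)           # ceiling: 4 articles/session
    attention = min(avg_seconds_per_article / 120.0, 1.0)  # ceiling: 2 minutes/article
    sharing = min(share_rate / 0.05, 1.0)                  # ceiling: 5% share rate

    # Hypothetical weights: depth of visit counts most toward habit.
    return 0.5 * depth + 0.3 * attention + 0.2 * sharing

# Example: 3 articles in a session, ~90 seconds each, 2% share rate.
print(round(habituation_score(3, 90, 0.02), 2))  # -> 0.68
```

The design choice worth noting is that each metric is normalized before blending, so the KPI expresses a judgment about relative importance rather than letting whichever raw number happens to be largest drive the result.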
Breaking these activities down further, we can see which engagement metrics signal that these activities are happening. We should also consider that these engagement metrics are somewhat relative: Not all our articles and screens are alike, so raw metrics aren’t very useful. Reading these metrics relative to the specifics of a given screen will be more meaningful.
For example, certain highly visual articles are naturally scroll-inducing (Exhibit A: The Daily Mail Web site), while hard news is not as scroll-inducing simply because the classic “inverted pyramid of news” guarantees the lede does the bulk of the job in delivering the topline.
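Here is a sketch of what reading a metric relative to the specifics of certain screens could look like. The baselines are invented for illustration; a real version would estimate each article type’s mean and spread from historical data.

```python
# A sketch of reading scroll depth relative to an article type's
# baseline instead of as a raw number. The baselines are invented;
# a real version would compute them from historical data.

TYPE_BASELINES = {
    # article type: (mean scroll depth, standard deviation)
    "visual_feature": (0.80, 0.10),  # gallery-heavy pages invite scrolling
    "hard_news": (0.35, 0.12),       # the inverted pyramid front-loads the topline
}

def relative_scroll_depth(article_type: str, observed: float) -> float:
    """Return a z-score: how unusual this scroll depth is for its type."""
    mean, std = TYPE_BASELINES[article_type]
    return (observed - mean) / std

# A raw 22% scroll depth sounds alarming, but for hard news it sits
# only about one standard deviation below that type's baseline...
print(round(relative_scroll_depth("hard_news", 0.22), 2))       # -> -1.08
# ...while the same number on a visual feature would be a real anomaly.
print(round(relative_scroll_depth("visual_feature", 0.22), 2))  # -> -5.8
```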
Identifying which activities participate in the goal (and therefore which metrics to track, and at what level) is the heart of the exercise, and it is where we return to our data analysis.
So, you may readily observe two things:

1. Checking up on our granular metrics isn’t wrong, but these metrics are so granular that they may be more confusing than they are worth. This feeds the concern that data misses the point of journalism.

2. Not every metric is necessarily as interesting as the next one, and dashboards don’t do a very good job of representing the relative importance of one metric versus another.
If you’d like to subscribe to my bi-weekly newsletter, INMA members can do so here.