“Don’t let anyone just tell you what they value. Look at their budget and daily calendar, and you can tell them what they value.”

This was some of the best advice I’ve ever been given.

It was early in my career. I was managing my first truly complex business intelligence programme, and I was struggling to map business objectives to an analytics roadmap. Even at executive levels, opinions about the right objectives to quantify and measure were all over the map.

Essentially, no one could agree on what our key performance metrics were. I was stuck, and it was that advice that helped me most at the time: I needed to cut through the empty statements of intent and opinion and come to terms with the truth.

The first step was understanding the present reality, even if we wanted to believe otherwise.

Today, when it comes to key performance metrics for publishers in a digital world, everyone seems to have an opinion or a new philosophy on what the new gold standard is – the right things to track and measure to incentivise the “right” behaviours.

I can’t begin to count the articles, blog posts, and tweets I’ve seen on the subject lately. All the opinions are grounded in good intentions – but they are also all over the map.

Some point to new radical analytical methods to precisely measure specific reader behaviour. Others extol the virtues of some new magical software “all publishers must have” (it’s just one line of tracking code on your Web site!), while some argue against any metrics at all.

The fact is, there is no silver bullet. Not because it’s all too complicated, but because circumstance is half the equation and people are starting in the wrong place.

When we talk about metrics and measurement, we are essentially talking about ways to count events that are meaningful to the business. In the case of publishers, most of those sorts of events are likely to revolve around reader behaviour relative to an experience being offered.

Remembering that old advice again in this context seems prudent: It’s time to ask some honest questions about reality before becoming obsessed with any fancy new math.

What are the events that cause good or bad things to happen in terms of your organisation’s bottom line? How important are the events and behaviours that are more closely tied to the qualitative characteristics of your brand or product?

Cold, maybe brutal, honesty here will lead you to a basic understanding of the metrics that are right for your organisation. It’s only at this point that you can start to focus on the data and the science required to derive the precise and salient information about your business that will let you drive performance.

But, be warned. That first step should lead you to some harsh truths – truths that are too often clouded over by a desire to start with the next big measurement scheme, one that likely has nothing to do with the reality you face and live.

You see, if the fundamental truth is that an organisation derives all its value from box and banner ads, then it must acknowledge that more “hits” are good and fewer “hits” are bad – and so, scoff as some may, that antiquated old page view metric may well be a very appropriate KPI to obsess over. Despising the fact doesn’t make it untrue.

I’m being somewhat cynical here to prove a pretty obvious point: Your metrics are not your business strategy; they measure the reality and efficacy of your business strategy. You can’t confuse a need to solve one problem with a desire to solve the other.

Can data-driven insight and advanced statistical methods serve as a catalyst for organisational or strategic change? Absolutely, they can. In fact, it should be an expectation.

I would argue that a truly effective, self-respecting data science team should always be pushing the boundary between deriving the best possible metrics for what the business values today and identifying what it could and should value tomorrow. In fact, one leads to the other – and part of a data analytics team’s purpose is to establish a shared sense of this across the organisation in a meaningful way.

Assuming, then, that you are one of the lucky ones who have gazed into the reflecting pool and know where you stand (and/or where you want to be), here are some things to consider from a data science perspective:

  1. Start at the atomic level.

    Performance metrics must capture the subtleties and nuances of change. This is the whole point. Aggregate too early or too often and you risk losing the sensitivity to change you need to make decisions.

    Start with the most granular bits and then aggregate up a hierarchy until you find the balance: a value you can associate with a meaningful population that still remains sensitive enough to pop when good or bad things happen. Essentially, you are “instrumenting” data capture at a reader level for the events you care about.

    For example, if you are looking to establish a new currency in the market tied to a certain type or level of engagement, then you need to define the action and instrument the environment for every reader so that you understand the range of possible outcomes with as little latency as the business can tolerate. Don’t get lost in the averages too early. (A rough sketch of this kind of reader-level roll-up follows this list.)

  2. Establish context.

    So, now you’re streaming data to capture events that represent value, but can you differentiate success from failure? The only way to truly tell is if you’re consistently judging against an appropriate baseline.

    Developing metric-centric baselines is as much an art as it is a science. Context here is key, as you need to understand what to account for in your baseline calculations and how to avoid biasing the results – you need an objective standard to measure against.

    For example, if you’re establishing churn metrics for assessing the value of loyalty, then they must be normalised for things like seasonality and product maturity so that you can truly trust your indexes.

    The guiding principle here: Don’t trust a number offered up without something to compare it to. (A simple baseline comparison is sketched after this list.)

  3. Pay attention to the time.

    This should go without saying, but most metrics are meaningless without some form of time dimension. Given all the trends in technology and digital publishing at the moment, it doesn’t take long for major shifts to emerge, and getting ahead of those shifts is becoming increasingly critical.

    The most powerful tool you have for identifying the implications of a change to the things you value is the time series of your data. (A short time-series check rounds out the sketches below.)
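
To make these three ideas a little more concrete, here is a minimal sketch of the first one: reader-level event capture rolled up a simple hierarchy. It uses Python and pandas purely for illustration, and everything in it (the “engaged_read” event, the section and article identifiers, the column names) is an assumption of mine, not a prescription.

```python
# A rough sketch (not production code) of reader-level event capture and a
# hierarchical roll-up. The event names, hierarchy, and figures are invented.
import pandas as pd

# Atomic-level events: one row per reader per event, captured as they happen.
events = pd.DataFrame(
    [
        ("2024-03-01 08:01", "r001", "politics", "a100", "engaged_read"),
        ("2024-03-01 08:03", "r002", "politics", "a100", "bounce"),
        ("2024-03-01 09:15", "r001", "sport",    "a200", "engaged_read"),
        ("2024-03-01 10:42", "r003", "sport",    "a200", "engaged_read"),
    ],
    columns=["timestamp", "reader_id", "section", "article_id", "event"],
)
events["timestamp"] = pd.to_datetime(events["timestamp"])

# Roll up the hierarchy only as far as the metric stays sensitive:
# reader-level events -> per-article engagement -> per-section engagement.
per_article = (
    events.assign(engaged=events["event"].eq("engaged_read"))
          .groupby(["section", "article_id"])["engaged"]
          .agg(engaged_reads="sum", total_events="count")
)
per_article["engagement_rate"] = per_article["engaged_reads"] / per_article["total_events"]

per_section = per_article.groupby("section")[["engaged_reads", "total_events"]].sum()
per_section["engagement_rate"] = per_section["engaged_reads"] / per_section["total_events"]

print(per_article)
print(per_section)
```

The point of the sketch is its shape, not its numbers: you stop rolling up at whatever level still moves visibly when reader behaviour changes.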
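
For the second idea, here is a deliberately tiny sketch of judging a churn figure against a seasonal baseline rather than in isolation; the monthly churn rates are invented for illustration only.

```python
# A rough sketch of indexing churn against a seasonal baseline.
# The figures are invented; the point is the comparison, not the values.
monthly_churn = {
    # month: churn rate (cancelled subscribers / active subscribers)
    "2023-01": 0.042, "2023-07": 0.061,  # prior-year observations
    "2024-01": 0.045, "2024-07": 0.058,  # current-year observations
}

def churn_index(month: str, prior_year_month: str) -> float:
    """Observed churn relative to its seasonal baseline (1.0 = in line with baseline)."""
    return monthly_churn[month] / monthly_churn[prior_year_month]

# A raw 5.8% churn in July looks alarming next to 4.5% in January, but indexed
# against last July it is actually a modest improvement.
print(round(churn_index("2024-07", "2023-07"), 2))  # ~0.95
print(round(churn_index("2024-01", "2023-01"), 2))  # ~1.07
```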
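
And for the third, a sketch of letting the time series itself flag a shift: each day is compared to its own trailing seven-day mean, with an arbitrary 5% threshold standing in for whatever tolerance your business can actually live with.

```python
# A rough sketch of spotting a shift in a daily metric using its own history.
import pandas as pd

daily_engaged_readers = pd.Series(
    [10200, 10050, 10310, 10180, 10260, 10220, 10150,  # a stable week
     10190, 10240,  8950,  8870,  8910],               # then a sudden drop
    index=pd.date_range("2024-03-01", periods=12, freq="D"),
)

# Compare each day to the mean of the seven days before it.
trailing_mean = daily_engaged_readers.rolling(window=7).mean().shift(1)
deviation = (daily_engaged_readers - trailing_mean) / trailing_mean

# Flag days where the metric moves more than 5% away from its recent history.
shifts = deviation[deviation.abs() > 0.05]
print(shifts)
```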

Event data instrumentation, context, and time series: These are the components of smart metrics. But when it comes to defining the metrics themselves, analytically speaking, the work begins with a conversation about the current business reality or the desired state you’re aspiring to.

Before dealing with either side of the spectrum, hold up the mirror first and be honest about what you see.