Why it’s difficult for media companies to measure audience satisfaction
Satisfying Audiences Blog | 23 August 2015

We rate everything these days. From the inflatable water slide we bought on Amazon to the guy who gave us a ride to the airport. Hey, that Uber driver even rates us as passengers. I always ask the driver for mine.
So, as I think about how to satisfy audiences in a world in which our product is content, I struggle to understand why satisfaction is so difficult to measure.
It may be we are using very imprecise measurements as proxies.
Here’s what I mean: With frequency of visit, a low measurement may have more to do with the person’s lifestyle and personal rhythm for seeking news and information than with how much they like what we provide.
With time spent, I’ve heard content providers argue that a low amount of time means you are providing the user with what they want right away so they can get on with their day, which should be a good thing, right?
If we look at the number of page views per visitor, believing a higher number reflects deeper engagement and satisfaction, we could be fooling ourselves: what we may actually have is bad site navigation that requires several clicks to get to what the user wants. Even our native app ratings are beholden to load time and feature satisfaction.
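To be concrete about how crude these proxies are, here is roughly how they get tallied from a session log. The data and field names below are invented for illustration; the point is that the arithmetic is trivial while the interpretation is anything but.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical session log: (visitor, seconds on site, page views) per visit.
sessions = [
    ("reader_a", 45, 1),    # quick headline check
    ("reader_a", 30, 1),
    ("reader_b", 600, 9),   # deep engagement, or nine clicks to find one story?
    ("reader_c", 120, 3),
]

# Group visits by visitor so we can compute per-person proxy metrics.
per_visitor = defaultdict(list)
for visitor, seconds, pages in sessions:
    per_visitor[visitor].append((seconds, pages))

for visitor, visits in per_visitor.items():
    frequency = len(visits)                       # visits in the period
    avg_seconds = mean(s for s, _ in visits)      # time spent per visit
    pages_per_visit = mean(p for _, p in visits)  # page views per visit
    print(f"{visitor}: {frequency} visits, {avg_seconds:.0f}s avg, "
          f"{pages_per_visit:.1f} pages/visit")
```

Each of those three numbers can be read as a compliment or a complaint; none of them says whether the reader walked away satisfied.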
I have only been in this industry for two years now. I come from a company that sold a different type of product. So I've been reflecting on how I’ve measured audience satisfaction in the past.
For example, in many industries in which a product is being sold, there is a rating called a “net promoter score,” which helps a buyer understand how many product users would recommend it to the next potential buyer. Of course, that willingness to recommend reflects several underlying factors, such as satisfaction that the product delivers on its promise as well as value for its relative cost.
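The arithmetic behind NPS itself is straightforward and widely documented: ask “How likely are you to recommend us?” on a 0-to-10 scale, then subtract the share of detractors (0 to 6) from the share of promoters (9 or 10). Here is a minimal sketch with made-up survey responses:

```python
def net_promoter_score(ratings):
    """NPS = % promoters (9-10) minus % detractors (0-6) on a 0-10 scale."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Ten hypothetical answers to "How likely are you to recommend us?"
print(net_promoter_score([10, 9, 9, 8, 7, 7, 6, 5, 10, 3]))  # -> 10.0
```

The formula is the easy part; getting honest answers from a representative slice of your audience is not.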
In a marketing world, a great brand has great value.
Strong brands get to enjoy a sense of loyalty from their users that affords the brands a few slip-ups without losing the chance for future purchases. As a result, marketers seek to build great brands and have figured out how to measure brand strength through “sentiment.”
I tried measuring brand sentiment for The Dallas Morning News. The difficulty was that it proved nearly impossible to separate sentiment about the content we covered from sentiment about The Dallas Morning News itself.
For example, if it’s a particularly depressing news month, consumers may appear to be upset or angry with us, when really they are expressing feelings about the news we’ve delivered. It’s rarely a reflection of whether the coverage of the news story was fair, balanced, and thorough.
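Mechanically, a net sentiment score is no harder than NPS, which is exactly why this confound is so frustrating: the arithmetic is easy, but the labels can’t tell a reaction to the news apart from a reaction to the newsroom. A sketch with invented, pre-labeled mentions:

```python
# Hypothetical brand mentions, already labeled by some sentiment classifier.
labeled_mentions = [
    ("Great investigative piece this morning", "positive"),
    ("Why is the site so slow today?", "negative"),
    ("Solid local coverage as always", "positive"),
    ("Another depressing headline", "negative"),  # about the news, or about us?
]

# One common convention: net sentiment = positive share minus negative share.
positive = sum(1 for _, label in labeled_mentions if label == "positive")
negative = sum(1 for _, label in labeled_mentions if label == "negative")
net_sentiment = 100 * (positive - negative) / len(labeled_mentions)
print(f"Net sentiment: {net_sentiment:+.0f}")  # the last mention drags it to 0
```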
We seek to provide what we call PICA (perspective, insight, context, and analysis) on every story we publish. It’s an acronym we use and a standard by which we measure ourselves internally.
So how are we doing?
First, the audience would have to know that’s our promise in our coverage.
Second, they’d need a mechanism for measuring us against it.
Third, they’d need a way to hold us accountable by sharing their opinions, and in large numbers. Focus groups are a small slice of our audience.
Comments on a per-story basis are helpful but also a fraction of the total viewing audience. They are often a platform for discussion about the story itself rather than about our coverage of it, and intentionally so.
The same can be said for tweets, letters to the editor, comments on Facebook, and the other ways readers communicate with us and with one another. Most of that is meant to be a conversation about the issues, not us.
Perhaps there is no “perfect” measurement for audience satisfaction. And while a person’s mood and circumstances greatly affect their ratings at any moment in time, we will continue to improve and look for leading indicators that point to whether we are succeeding.
Besides, if it were black or white, or a strict number scale, we might not like our score. So, do you want to know my Uber rating?