Hi there. A couple of weeks ago I was asked a great question: “How do you benchmark a product?” Today, I delve into some of the considerations related to that question plus some best practices around deciding who works on what and for how long.
We have two fantastic events coming up that I want to bring to your attention:
Tomorrow, join me live to start thinking about what news could look like in the metaverse. My webinar starts at 10:00 a.m. New York time (an hour earlier than usual in parts of the world that haven’t yet switched to daylight saving time).
April’s product master class on how to build customer-informed products is now online. It’s an excellent line-up and will be interesting for anyone who talks about being truly customer-led.
All the best, Jodie.
How do we benchmark product?
Over the last year, I’ve had a number of conversations about measuring the impact of product — which is tough. But at a recent presentation to the INMA board of directors, I was asked an even harder question: How do you benchmark a product?
The easy answer is whether people or advertisers are paying for it and whether we are the leaders in the field. Of course it’s not that simple. People aren’t just paying for the experience; they are paying for the content. And decoupling the two is nigh on impossible.
Product is the sum of all the parts. Different features within a product are built and optimised to do different things, each solving a unique user problem. So most experiences are designed to appeal to a subset of users.
The Netflix definition of product is something that “delights customers in margin-enhancing ways.” So perhaps we should look at the “customer delight.” That could be done via app store rating for a mobile product or something like an NPS score. I know at least one media organisation is working on their own customer satisfaction score.
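As a concrete illustration of one such delight metric, Net Promoter Score is calculated from standard 0–10 “how likely are you to recommend us?” survey responses: the percentage of promoters (scores of 9–10) minus the percentage of detractors (0–6). A minimal sketch, with invented survey data:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Illustrative responses only -- not real survey data.
responses = [10, 9, 9, 8, 7, 7, 6, 5, 9, 10]
print(nps(responses))  # 30
```

An app store rating or a bespoke satisfaction score would slot into the same role: a single number that can be tracked over time and, cautiously, compared across products.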
Or we could rely on the second part of that definition: If users like the experience, they will pay for it. The benchmark is therefore similar experiences, our competitors. Do people love our product enough to pay for it over other similar products?
Or perhaps we go deeper, measuring a number of different features and journeys by doing A/B tests. It would take a relatively complex formula to bring these together into some kind of score that decouples the experience from the content. But this is a lot of work and would come at the opportunity cost of working on the actual product.
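To make that idea concrete: one hypothetical shape for such a formula is a weighted average of normalised feature-level metrics from A/B tests. The features, scores, and weights below are entirely invented for illustration:

```python
def composite_score(feature_metrics):
    """Weighted average of normalised (0-1) feature-level metrics."""
    total_weight = sum(m["weight"] for m in feature_metrics.values())
    weighted = sum(m["score"] * m["weight"] for m in feature_metrics.values())
    return weighted / total_weight

# Hypothetical feature results -- names, scores, and weights are made up.
feature_metrics = {
    "onboarding":     {"score": 0.72, "weight": 0.3},
    "search":         {"score": 0.55, "weight": 0.2},
    "saved_articles": {"score": 0.81, "weight": 0.5},
}
print(round(composite_score(feature_metrics), 3))  # 0.731
```

The hard part, of course, is not the arithmetic but choosing metrics that actually reflect the experience rather than the content, and agreeing on the weights.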
I asked Karl Oskar Teien about how he thinks about this for paid products at Schibsted. He says he’s interested in “how people are approaching the intangible … the quality of a product and how that affects loyalty over time. Because it isn’t measurable with just basic click-through rates or other kind of leading indicators of engagement that we can use.”
It’s an interesting question and one that we as an industry don’t yet have an answer for, which is also why product leaders and teams often focus on demonstrating impact through specific projects and results when asked about this internally.
How do you benchmark product at your organisation? I’d love to hear about it. E-mail me at jodie.hopperton@INMA.org.
Who works on what and for how long?
One of the biggest challenges I see, particularly in larger organisations, is duplication of work or, worse, people working on similar issues at cross purposes.
In product, it’s not always easy to decide who should work on the development of a product or feature or, more accurately, the problem that needs to be solved. The first step in this process is agreeing what the known problems are — not just in product, but having a shared view across the entire organisation. Having this shared view makes it a lot easier to figure out which teams have the tool kits to solve the issues at hand.
This is less simple than it sounds. To understand user problems, or the consumers the business wants to reach, we need both qualitative and quantitative analysis. Why both? Because quantitative data, particularly audience data, only shows us what users are doing within the confines they have been given: what they are doing, but not why they are doing it.
Using qualitative studies, we can delve into why users act in a certain way. Qualitative data gives so much insight but can’t be used standalone as it can be misleading if not supported by quantitative data (as we saw with saved articles in my last newsletter).
Once there is consensus, it becomes easier to match skillsets or to amend or add to projects already in motion. These aren’t always product-led, but someone from the product team can be brought in to add value and avoid duplication. And, of course, some problems go to the person most passionate, or most vocal, which isn’t always a bad thing.
Another consideration is how much time and resource go into any project. As humans, it’s all too easy to stay invested in a project all the way through to completion, despite unexpected hurdles along the way. Sometimes we need to check ourselves and make sure the effort is still in line with the reward. One way of doing this is time-boxing problems and solutions and re-evaluating at the end of the given time.
Another approach I’ve seen recently is having a small team focus solely on “small things” that arise so other product teams don’t get distracted. Another company had an actual formula for effort vs. value.
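The newsletter doesn’t share that company’s formula, but a widely used effort-vs.-value formula is RICE: (Reach × Impact × Confidence) ÷ Effort. A sketch with invented project numbers, purely to show the shape of such a calculation:

```python
def rice(reach, impact, confidence, effort):
    """RICE prioritisation score: (reach * impact * confidence) / effort."""
    return reach * impact * confidence / effort

# Hypothetical projects -- all numbers are invented for illustration.
projects = {
    "new paywall flow": rice(reach=8000, impact=2, confidence=0.8, effort=4),
    "dark mode":        rice(reach=5000, impact=1, confidence=0.9, effort=2),
}
for name, score in sorted(projects.items(), key=lambda p: -p[1]):
    print(name, round(score))
```

Whatever the exact formula, the point is the same: making effort and value explicit forces the re-evaluation conversation to happen on evidence rather than sunk cost.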
Organisation culture comes into play with all of the above, but having that upfront understanding of issues makes everything else so much easier.
Date for the diary: Tomorrow we’re meeting the metaverse
Join Cyrus Saihan, head of AR/AV product partnerships in EMEA for Meta, and me to discuss the metaverse. We’re about to go through another fundamental shift — comparable to the shift to mobile — and this is an opportunity to start discussing what that may look like.
How will people move between offline, 2D, and 3D? What exists today? And what are some of the things that we should be thinking about?
I hope you can join the discussion. The session is free to INMA members. Find out more and register here.
Tweet of the week
This tweet sparked a good debate on Twitter. Clearly we can’t all love what we do 100% of the time, and there will be elements of a product (features) that have to be built even if you don’t personally like them. But having an overall passion for the product, as well as for the product process, is the magic combination.
- From my colleague Ariane Bernard who runs INMA’s Smart Data Initiative: Passive personalisation should become the industry norm.
- “Perfect moderation does not exist,” but here are some lessons from Twitter’s labels of Trump tweets.
About this newsletter
Today’s newsletter is written by Jodie Hopperton, based in Los Angeles and lead for the INMA Product Initiative. Jodie will share research, case studies, and thought leadership on the topic of global news media product.