While I was nosing around Stuff’s Web site looking at how audio is built in (more on that here), I noticed a sentiment tracker. When I asked CPO Ben Haywood about it, he told me that “when combined with the article metadata, the data captured from the survey allows us to understand audience sentiment on issues over time.”
I was intrigued, so I delved further.
The initiative is driven by the data team, and so I spoke to Dina Hay, chief data and insights officer, and Amanda Lane, head of research and insight, to learn more about the thinking behind it, what they have learnt, and how they want to expand it in future.
To give some context, this is the sentiment tracker that currently appears on the site:
The Stuff team noticed something similar at Rappler in the Philippines (run by the incredible Maria Ressa). Once you have taken part on their site, you see the results from other people (in the same way Instagram polls work).
If there is enough data, you can track sentiments towards issues, people, and brands at any given point, or how it changes over time. Stuff wondered how it could be used in New Zealand, so the team has developed an MVP and has done a lot of testing over the past few months.
What I hadn’t seen, because I hadn’t originally clicked, is this:
They are gathering quantitative and qualitative data. Anyone who knows me knows that is music to my ears ;) It allows them to go even deeper with the “why” as well as the “what.”
Their sentiment tracker is now beyond an MVP and is shown to most readers with a couple of exceptions:
- Editorial can exclude it if they think the article is inappropriate for this purpose.
- It is gated by scroll speed and depth, so it only shows to people who have actually read the article.
So why do they think this is worth the investment?
First and foremost, Stuff considers this a B2B tool. It’s a revenue driver. They have their finger on the pulse of the sentiments of the nation, and they can monetise that with businesses, brands, industry bodies, public figures, and probably more.
The team is working out the boundaries of this. I got excited by the potential to use this to understand user needs. For example, if you look at The Atlantic user needs (article here), something that came up was “give me a break.” A news organisation could therefore use this to see if they are publishing too many hard news stories and need to add some softer, more positive sentiment stories.
With advanced sequencing, the next read could directly relate to the sentiment a reader has just registered. Dina and Amanda were quick to say they are still ironing out the framework for how this can be used. Internal access can't be a free-for-all until they understand the possible downsides (just think about how Facebook can skew information by using sentiment).
Executive Chair and Publisher Sinead Boucher told me: “We absolutely see the power of this as a tool to improve our content and experiences for our audience, as much as for the potential revenue stream. One of our strategic goals is to have the deepest and richest knowledge of New Zealanders and to use that to develop great content and products for them.
“Stuff’s vision is to be the most trusted organisation in New Zealand. We know from our external research that part of what builds trust is that users feel they are understood, listened to, and respected. So that is very much at the front of our minds here, too, both with how we collect and use the data, but also in how we use it to enrich what we produce.”
Sentiment analysis is in the early stages, but it's an interesting project. They get 14k responses per day, which is a statistically meaningful sample for a country of five million people. Yes, sentiment tracking has been done before, notably by BuzzFeed, but I don't remember the results (this may be my bad, not theirs). But when deployed across all news, and especially when combined with other data, the results will not just be interesting. They will be actionable and monetisable.
One caution I learnt from my colleague Ariane Bernard, lead for the INMA Smart Data Initiative, is that data gathering of this kind can often skew toward the polar opposites. For example, I only post two types of reviews on Yelp: for places/services I love and those I hate. There is no middle ground. Amanda told me they have consistently seen a range of emotions during the trial phases, suggesting that this hasn't been the case here.
They have seen a number of advantages to launching as an MVP, in many of the ways you would expect: seeing if and how people use it and enabling UX tweaks before full rollout. But something else significant happened. The tool was spotted by a professor at a local university, who contacted them to discuss it. He was so curious, and it fit so closely with his work, that he became an adviser.
Through his input, they have been able to work on combining sentiments to enlarge the data pool from six emotions to 36. In the future, users will be able to choose two sentiments, which gives more complexity, e.g. "concerned" and "angry" could equal "anxiety." When launched, this will give even more data points to use.
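The arithmetic here is simple: letting readers pick an ordered pair from six base emotions yields 6 × 6 = 36 combinations. A minimal sketch of that idea (the emotion labels and the composite mapping below are hypothetical illustrations; the article only names "concerned," "angry," and their composite "anxiety"):

```python
from itertools import product

# Hypothetical set of six base emotions; only "concerned" and "angry"
# are named in the article.
EMOTIONS = ["happy", "sad", "angry", "concerned", "surprised", "hopeful"]

# Every ordered pair of two selections gives 6 x 6 = 36 combinations,
# matching the enlarged data pool described above.
pairs = list(product(EMOTIONS, repeat=2))
print(len(pairs))  # 36

# A composite label could then be looked up from a mapping.
COMPOSITES = {("concerned", "angry"): "anxiety"}  # example from the article

def composite(first: str, second: str) -> str:
    """Return a composite sentiment for a pair, falling back to the raw pair."""
    return COMPOSITES.get((first, second), f"{first}+{second}")

print(composite("concerned", "angry"))  # anxiety
```

If unordered, distinct pairs were used instead, the count would be 15, so the 36 figure suggests ordered pairs (or pairs where the same emotion can be picked twice).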
There is much that will be rolled out in the next 12 to 18 months to develop this tool further. We’ll be following to see what other results and surprises it brings.
If you’d like to subscribe to my bi-weekly newsletter, INMA members can do so here.