Straits Times gamifies the reading experience to improve engagement
Ideas Blog | 14 March 2022
For people in news media product roles, the world revolves around product goals and the many sprints it takes to reach them.
V’ming Chew, lead product manager at the Singapore-based print, digital, radio, and outdoor media organisation Singapore Press Holdings, shared a slide of what this world looks like at INMA’s Master Class on Methodologies to Launch and Innovate Products:
“Everything is a feedback loop,” Chew said. “The key thing I want to mention is the build, measure, and learn.”
Chew and his team recently worked with the company’s Singapore-based English-language broadsheet Straits Times (ST) to gamify the reading experience.
First, the team had to set objectives for the project:
Improve long-term engagement and convert casual readers to engaged readers.
Validate the effectiveness of non-monetary perks.
Develop a gamification playbook.
Streamline the workflow of rewards management and fulfilment.
How it works
In the gamification model, users see a prompt when they visit an article page for the first time. They are then shown the “ST Read and Win” daily completion meter, which tracks how many articles they have read toward a daily goal and how many remain. When they reach the goal, they are notified of the rewards they have earned.
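To make that flow concrete, here is a minimal TypeScript sketch of such a daily completion meter. The names and shape (MeterState, recordArticleRead, a reward flag) are illustrative assumptions, not Straits Times’ actual implementation:

```typescript
// Hypothetical sketch of the daily completion meter described above.
// All names are illustrative, not ST's actual code.

interface MeterState {
  dailyTarget: number;   // articles to read today
  articlesRead: number;  // articles read so far today
  rewardEarned: boolean; // set once the target is reached
}

function createMeter(dailyTarget: number): MeterState {
  return { dailyTarget, articlesRead: 0, rewardEarned: false };
}

// Called the first time a user opens a given article page on a given day.
function recordArticleRead(state: MeterState): MeterState {
  if (state.rewardEarned) return state; // goal already met today
  const articlesRead = state.articlesRead + 1;
  return {
    ...state,
    articlesRead,
    rewardEarned: articlesRead >= state.dailyTarget,
  };
}

// Example: a reader with a five-article daily target.
let meter = createMeter(5);
for (let i = 0; i < 5; i++) meter = recordArticleRead(meter);
console.log(meter.rewardEarned); // true -> notify user of the earned reward
```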
“I think one of the most important things is establishing the objectives,” Chew said. “The next important thing is to agree on metrics. You need to make sure they’re intuitive and understandable for stakeholders.”
Everyone should also agree on what success looks like. The metrics should be as close to out-of-the-box as possible, easy to derive and check, and must flow from the objectives.
He shared some of the ST metrics that were used for each objective (a sketch of how two of them might be computed follows the list):
Improving long-term engagement: pages and sessions per user, session duration, bounce rate, traffic for campaign page, sign-up rate, and percentage of engaged users over time.
Effectiveness of non-monetary perks: participation rate over time.
Gamification playbook: learnings and how those learnings are incorporated into the next iteration.
Workflow of rewards: operational efficiency and cost of sustaining the campaign.
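As a rough illustration of how two of these metrics might be derived from a daily event log, here is a TypeScript sketch. The record shape, field names, and definitions are assumptions; the source does not describe ST’s actual analytics pipeline:

```typescript
// Illustrative metric calculations over an assumed per-user daily log.
// Field names and metric definitions are assumptions, not ST's.

interface DailyUserStats {
  userId: string;
  pageviews: number;
  sessions: number;
  joinedCampaign: boolean;
}

// Participation rate: share of that day's active users who joined the campaign.
function participationRate(day: DailyUserStats[]): number {
  const participants = day.filter((u) => u.joinedCampaign).length;
  return day.length === 0 ? 0 : participants / day.length;
}

// Pages per user: one of the core engagement metrics listed above.
function pagesPerUser(day: DailyUserStats[]): number {
  const pages = day.reduce((sum, u) => sum + u.pageviews, 0);
  return day.length === 0 ? 0 : pages / day.length;
}
```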
Identifying iterations
Chew also explained the five iterations of the project.
Iteration 1: Roll out a basic completion meter. The hypothesis was that the completion-meter mechanism could encourage users to read more, driving engagement and, over time, loyalty. In the first phase, participants showed increased engagement: higher session duration and pageviews and a lower bounce rate. Campaign participation was 0.62%, which the team planned to increase in subsequent iterations.
Iteration 2: Increase discoverability with a red dot. The hypothesis was that making it visually obvious that a campaign exists would increase participation rates. This proved true: the campaign page penetration rate rose by 33% in relative terms, from 0.6% to 0.8%, which Chew said is healthy for such campaigns by industry standards.
Iteration 3: Optimise the number of articles in a single completion meter without degrading engagement. The hypothesis was that an optimum target would encourage readers to keep engaging and stay loyal. To test this, three treatment groups were shown the campaign with daily targets of three, five, and seven articles respectively, while a control group was not exposed to the campaign (one common way to make such group assignments is sketched after this list). The team made a number of observations and inferences during this phase:
Estimated an 18% lift in pages/user for participants.
Saw a 24% increase in engaged, registered users.
Saw a 4% increase in the number of registered users.
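The article does not say how ST assigned readers to these groups. A common, generic technique is deterministic hashing of a user ID, which keeps each user in the same group across visits without storing assignments; the sketch below illustrates that approach under assumed names, not ST’s method:

```typescript
import { createHash } from "node:crypto";

// Generic experiment bucketing: hash the user ID together with the
// experiment name and take a modulus for an even, stable split.

type Group = "control" | "target-3" | "target-5" | "target-7";
const GROUPS: Group[] = ["control", "target-3", "target-5", "target-7"];

function assignGroup(userId: string, experiment: string): Group {
  const digest = createHash("sha256")
    .update(`${experiment}:${userId}`)
    .digest();
  // Use the first 4 bytes as an unsigned integer to pick a bucket.
  const bucket = digest.readUInt32BE(0) % GROUPS.length;
  return GROUPS[bucket];
}

// The same user always lands in the same group for this experiment.
console.log(assignGroup("user-123", "meter-target-test"));
```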
Iteration 4: Measure lift in engagement with low-cost incentives. The hypothesis was that users would still participate in the campaign, and show a similar lift in engagement, even if only low-cost incentives were offered. For one treatment group, the first big reward, a tablet, was replaced with low-cost incentives such as magazines so that its engagement could be compared with groups still receiving the tablet. Users continued to engage.
Iteration 5: Seven-day streak. The hypothesis was that giving users only visual feedback, rather than rewards, for completing multi-day streaks would increase their multi-day engagement. The goal was to verify whether a seven-day streak, in combination with a daily completion meter, could improve repeat visits while maintaining engagement. This involved three variants (listed below, with a streak-tracking sketch after the list); the daily variant did better than the seven-day one.
Variant A: daily completion meter for five articles.
Variant B: seven-day completion meter with a fireworks animation for each day completed.
Variant C: the control group.
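For illustration, here is one way the streak logic behind Variant B might be tracked. This is a sketch under assumed data shapes (one ISO date string per completed day), not ST’s implementation:

```typescript
// Count consecutive completed days ending at the most recent entry.
// Input: days on which the user completed the daily meter, oldest first.

function dayDiff(a: string, b: string): number {
  const MS_PER_DAY = 24 * 60 * 60 * 1000;
  return Math.round((Date.parse(b) - Date.parse(a)) / MS_PER_DAY);
}

function currentStreak(completedDays: string[]): number {
  if (completedDays.length === 0) return 0;
  let streak = 0;
  for (let i = completedDays.length - 1; i > 0; i--) {
    if (dayDiff(completedDays[i - 1], completedDays[i]) === 1) streak++;
    else break; // a skipped day resets the streak
  }
  return streak + 1;
}

const days = ["2022-03-08", "2022-03-09", "2022-03-10",
              "2022-03-12", "2022-03-13", "2022-03-14"];
console.log(currentStreak(days)); // 3 -> show fireworks for day 3 of 7
```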
This case study originally appeared in the INMA report, 7 Steps to a Successful Media Product Process.