5 common pitfalls publishers should avoid in digital subscription experimentation
Conference Blog | 09 November 2020
Learning how to develop an experimental mindset was a large part of the GNI Subscriptions Lab, run in partnership with Google, FT Strategies, and INMA. Lou Gautier, principal at FT Strategies, outlined five common pitfalls identified during the 20+ experiments run by the eight participating publishers during a live European Media Subscriptions Town Hall on Thursday morning.
1. Poor prioritisation
Gautier warned publishers not to pursue “interesting” initiatives simply because they were seen at competitors or requested by management. Because publishers often have so few resources, it is crucial to pick experiments that align with and support actual goals, she said.
“A good way to do that is to refer to your key objectives,” she said. “The framework we’re using is really the outcomes, aligned with the North Star goal.”
2. Skipping the discovery phase
Jumping to potential solutions before researching and defining parameters means publishers could spend time designing solutions to unclear problems.
“It is natural when you see a problem to try and come up with a solution,” she said. “It is human nature.”
For example, a publisher might see readers are not frequently returning to engage with the product, Gautier said. They might think a newsletter could help solve an engagement problem, when really the issue could be in how content is presented on the Web site. If a media company does not really understand the problem, “you’re going to over-invest on a solution that really isn’t addressing the problem,” she said.
3. Overdesign
Initial solutions should be rough and experimental so teams can quickly respond if something is not working. Focusing on creating a complete and thoroughly designed solution can also pull attention away from the real issue: “You need to design your experiments to identify and solve one problem and not three.”
4. Failing to isolate parameters
Implementing changes widely and hoping for the best will lead to non-specific, inconclusive results. When in doubt, return to the hypothesis. “There are so many variables that change in an experiment that you cannot understand what factor is changing the result.”
5. Skipping A/B testing
Simple, direct A/B testing should be implemented whenever possible; when it is not possible, publishers must clearly understand how to make their test results statistically significant.
“When you are defining the hypothesis, try as much as you can to identify how you can validate and ensure the experiment is successful,” Gautier said.
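For publishers checking significance by hand, the kind of comparison Gautier describes can be sketched as a two-proportion z-test on conversion counts. This is a minimal illustration with made-up numbers, not the Lab's methodology:

```python
import math

def ab_significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from variant A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference)
    p = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical traffic split: 10,000 readers per variant
z, p = ab_significance(conv_a=120, n_a=10_000, conv_b=150, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # significant at the 5% level if p < 0.05
```

A test like this makes explicit, before launch, how large a difference (and how much traffic) is needed before a result can be called a win.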
The 20+ experiments conducted during the GNI Subscriptions Lab primarily focused on growing, engaging, converting, and retaining readers.
“We are pretty proud of some of the results and the successes,” Gautier said. She quickly added they were not all successes, but the failures were just as informative.
Gautier asked publishers to share their failures as they move forward with their own experiments: “Don’t chase the success, chase the learnings.”
Case study: Unidad Editorial in Spain
Unidad Editorial conducted two experiments during the Lab. Germán Frassa, director of digital product and audience development, said the first was a failure: “We couldn’t prove our hypothesis.”
Publishers with freemium paywalls, like Unidad Editorial, often face the question of how many articles to lock. The team assumed there was a correlation between the number of locked articles and conversions and wanted to test it.
Efforts to run A/B tests revealed issues in how the paywall was set up that made it impossible to run tests once an article was locked. This was shocking for the business side, Frassa said, but the experiment was not a total loss.
“It was informative about our limitations and also helped us to set some priorities,” he said.
This prompted the team to research the company’s historical data on the start rate and the number of locked articles. The research revealed that the original hypothesis had been true, but only in the early days of the one-year-old paywall.
In the first month of the pandemic, the company locked more articles and saw an immediate increase in conversions. After that, the correlation was erratic. Even increasing the amount of locked content by 40% had no impact: “It didn’t move the needle.”
Frassa said this led to two key learnings:
- Improve the technical platform to facilitate A/B testing.
- Locked content is no longer a key conversion driver.
The company needs to focus on other things to improve the start rate, like optimising the way the newsroom decides what to lock. At the moment, it’s based on expert intuition.
After a year of looking at data and discussing conversion results, Frassa said, “we developed a feeling about what is right to put behind the paywall.”
Case study: Mediaprint in Austria
Mediaprint has also been focusing on reader conversion, Danielle Seifried-Jug, product manager of digital subscriptions, said. One experiment focused on redesigning its subscription offer displayed on premium articles in hopes of converting more users. The other aimed to address the meter start rate.
The company originally showed several price points on its offer and hypothesised that users might be confused by this. The team cut package feature points and eliminated some price points while highlighting the recommended offer.
They then launched a test between the old and new templates. They assumed the number of conversions would increase, but it did not: the old template still converted more users. There was an upside, though: “The average revenue per conversion increased a lot.”
In the second experiment, Mediaprint increased the number of recommended articles at the bottom of select pages and recommended only premium articles. The team found that the average number of days until a user subscribed decreased by 50%. The managing editors are now increasing the number of exposures of premium articles, leading to increased conversions.
So far, there are only about three people involved in experimentation. This makes it easy to move quickly and implement changes. Seifried-Jug said the team learned a key lesson during the GNI Subscriptions Lab: Keep it simple.
“The good thing with the approach we used during the Subscription Lab was we only focused on one thing and how we have that data.”