At The Wall Street Journal, my team runs hundreds of digital customer-facing experiments, iteratively increasing the rate at which visitors become paying members and members do the things we know will make them happier and stay longer. In partnership with stakeholders across the business, we’ve delivered some notable results.

Here’s the first question I’m always asked: What role did audience segmentation or personalisation play in getting these results? The answer: very little.

While close partner John Wiley has found success looking for patterns in individual user data to predict behaviour, my team has found success pursuing a complementary strategy using data and empathy to inform one-size-fits-all changes to architecture, navigation, copy, and design. In the process, we’ve delivered exceptional value to the business and to members. We are hunting the lowest-hanging fruit.

Here are some illustrative examples.

Use overlooked benefits to nudge someone over the edge

Subscription agreements cause us all anxiety: will we be punished for leaving a subscription early? Driven by this psychological insight, we tested a theory, teasing out something prospective members highly valued but that we took for granted. By adding “You can cancel anytime,” we increased subscriptions by 10% with no impact on tenure.

Specifically telling people they could cancel their subscriptions increased subscription rates.

Tell people what you thought they already knew

We spent our first year regularly updating our article roadblocks (the marketing placements covering articles locked to non-members) with timely, topical messages like “New Politics. Know The Impact.” A straightforward message proved far more effective: “Continue reading your article with a WSJ membership” drove a 37% order uplift from articles. What was obvious to people like me, who study these experiences every day, was not obvious to our readers.

Clever messages are not as effective as specific information.

The cumulative results? Our acquisition testing has raised the subscription rate on WSJ.com by 64%.

Make sure the pipes are connected

The lowest-hanging fruit is usually hiding behind a thoughtful review of the end-to-end experience. Forget design or copy. Is there something architectural getting in the way?

Most of our new members convert on WSJ.com, and most of them do so on desktop computers. We discovered that getting members to use our app in addition to WSJ.com dramatically extends tenure. Unfortunately, there was no way to get them from their desktops to the app store. So we added a simple “link-texting” widget: They enter their mobile number, and we text them a link to our app. This nearly doubled the rate at which new members downloaded our app.

The WSJ shows why it is important to connect all platforms together.

The cumulative results? Four times more members now download our app than when we began testing. Similarly, 2.5x more members now sign up for a newsletter than when we began testing.

The math behind low-hanging fruit

As you saw, cancellation messaging delivered a 10% order uplift on the shop page we use for all prospective members. But what if you took a different approach? Say you had a theory for how to sell WSJ to readers of one particular section. Because that section’s benefit is more concrete, it wouldn’t be hard to write more targeted copy: a special roadblock would lead to a special shop and a special checkout, reinforcing at every step the value of the content type you know the reader likes.

Here’s the problem: Since only 5% of traffic to the shop comes from that section, you’d have to get a 200% order uplift to deliver the same business benefit as the one-size-fits-all cancellation learning. That’s highly unlikely no matter how good your copy is.
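The break-even arithmetic above can be sketched in a few lines. The figures are the ones from this example (a 10% site-wide uplift versus a section that sends 5% of shop traffic); the function name is ours, for illustration only:

```python
# Break-even maths for segment-targeted vs one-size-fits-all tests.
# A segment test must beat the site-wide uplift divided by the
# segment's share of traffic to deliver the same absolute gain.

def required_segment_uplift(sitewide_uplift: float, segment_share: float) -> float:
    """Uplift a segment-only test needs to match a site-wide win."""
    return sitewide_uplift / segment_share

# 10% uplift on all shop traffic vs a section with 5% of that traffic:
print(required_segment_uplift(0.10, 0.05))  # → 2.0, i.e. a 200% uplift
```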

A one-size-fits-all strategy can deliver greater benefits than personalisation.

Our simple process

  • Glean a data-driven insight from competitive analyses, previous tests, or performance data.
  • Develop a question. This becomes your hypothesis.
  • Develop answers. These become test variations.
  • Prioritise tests quantitatively using variations of the following equation:

($ value of KPI × annual instances of the KPI × expected uplift in the test) − est. $ cost of the effort to build the test = prioritisation score
  • Sort all your tests or roadmap items from largest to smallest prioritisation score and do the one at the top first.
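The scoring and sorting steps above can be sketched as follows. The test names and every dollar figure here are hypothetical, made up purely to show the mechanics:

```python
# Hypothetical prioritisation roadmap, following the equation above:
# (KPI value × annual KPI instances × expected uplift) − build cost.

def prioritisation_score(kpi_value: float, annual_instances: float,
                         expected_uplift: float, build_cost: float) -> float:
    """Expected annual $ gain of a test, net of the cost to build it."""
    return kpi_value * annual_instances * expected_uplift - build_cost

# Illustrative candidate tests (all numbers invented).
tests = [
    {"name": "roadblock copy", "kpi_value": 300.0,
     "annual_instances": 50_000, "expected_uplift": 0.02, "build_cost": 5_000.0},
    {"name": "app download widget", "kpi_value": 300.0,
     "annual_instances": 20_000, "expected_uplift": 0.05, "build_cost": 40_000.0},
]

for t in tests:
    t["score"] = prioritisation_score(t["kpi_value"], t["annual_instances"],
                                      t["expected_uplift"], t["build_cost"])

# Largest score first: do the test at the top of the list next.
roadmap = sorted(tests, key=lambda t: t["score"], reverse=True)
for t in roadmap:
    print(f'{t["name"]}: ${t["score"]:,.0f}')
```

Any consistent variation of the equation works; what matters is scoring every candidate the same way so the roadmap sorts honestly.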

If your experimentation programme is in its infancy, put segments aside. The time will come. Look at your data, look at competitors, stare at your experience, and ask yourself: Is each component helping visitors do what you’d like them to do?

If you have been testing for a while and have started to run tests on smaller groups within your audience, ask yourself: Did I miss something?

Keep it simple for as long as you can, and pick the low-hanging fruit.