Why volume-based campaigns are a flawed audience-building model

In the age of Big Data, news media companies are operating with an unprecedented amount of information on which to base critical business decisions. Appending demographic, lifestyle, and transactional data, unifying online and offline data to achieve a “single customer view,” and creating advanced segmentation models at the individual customer level are all exciting capabilities.

But in the end, we are still confronted with the challenge of aligning supply with demand to justify our marketing investments and effectively measure the return on those investments.

As marketers we have all asked or been asked the question: “Can’t you get me more customers?”

In other cases, it comes in the form of the statement “I need X new customers this month.”

Both are best addressed through very careful analytics and application of micro-economic theory.

Volume-based planning: a flawed model

A common mistake of marketers is to follow the simple, traditional, volume-driven approach (i.e. “I need X new customers this month”). In the campaign design, we skip the supply and demand consideration and jump to the conclusion – not because we did the math but because we were told to get a number.

“The campaign will yield a 1.2% response rate to get the 1,000 orders,” we say. Then we spend time on the backend of the campaign explaining why it only got a 0.8% response (it’s the time of year, the post office was late, the offered rate was too high, the creative was ineffective, and so on).
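The arithmetic behind that volume-driven promise is a simple back-solve. A minimal sketch, using the 1.2% promise, 0.8% actual, and 1,000-order target from the example above:

```python
# Volume-driven planning back-solves mail quantity from an order target,
# then hopes the promised response rate holds. Figures from the example above.
target_orders = 1_000
promised_response = 0.012   # the 1.2% response rate we promise
actual_response = 0.008     # the 0.8% response the campaign delivers

mail_qty = target_orders / promised_response   # pieces we must mail
actual_orders = mail_qty * actual_response     # orders we actually get

print(round(mail_qty))       # 83333 pieces mailed
print(round(actual_orders))  # 667 orders, and a shortfall to explain
```

The gap between 1,000 promised and roughly 667 delivered is the backend explaining the section describes.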

We need to get back to the basics. Sound planning, campaign design, and realistic forecasting are required up front.

Alternative: segment-based forecasting

Rather than design and measure results at a campaign, market, or product level, we should instead engineer them around many “micro-campaigns” within a larger campaign – each with its own segment and product response expectation. In this way, the quantity to include in the campaign is a function of each segment’s own expected yield based on its supply and demand.
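A minimal sketch of that bottom-up sizing, with hypothetical segment names, quantities, and response expectations: total forecast orders are the sum of each micro-campaign's own expected yield, not a single blended rate applied to one big list.

```python
# Bottom-up campaign sizing: each segment carries its own quantity and
# response expectation. All names and numbers below are illustrative.
segments = {
    "young_families": {"mail_qty": 20_000, "expected_response": 0.015},
    "empty_nesters":  {"mail_qty": 35_000, "expected_response": 0.011},
    "urban_renters":  {"mail_qty": 10_000, "expected_response": 0.006},
}

def expected_orders(segs):
    """Sum each micro-campaign's quantity times its own response expectation."""
    return sum(s["mail_qty"] * s["expected_response"] for s in segs.values())

print(round(expected_orders(segments)))  # 745 forecast orders
```

The grand total falls out of the segment-level math, rather than being imposed on it.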

The supply and demand for any product can be subdivided into curves for each segment. 

Segments can be modeled uniquely for a market, or a commercial segmentation system such as Personicx or Nielsen PRIZM can be used, so an optimal sales level (equilibrium) can be calculated for each demographic segment. This is what we commonly refer to as “descriptive analytics.”

From a marketing perspective, the goal is to maximise sales, using techniques such as discounts and vanity pitches to push demand beyond its natural equilibrium and capture incremental units. This is what we commonly refer to as “predictive analytics.”

From a micro-economics perspective, the goal is to find a sustainable equilibrium that balances costs, revenues, and demand in order to maximise profitability. This is what we commonly refer to as “prescriptive analytics.”

An optimal sales model and campaign design includes several components: customer lifetime value, price elasticity, and other factors such as marginal production/distribution costs per unit produced (supply side) and marginal utility (would I buy it) curves (demand side) to end up with a model that is highly tailored to the segments represented in a market.
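The supply-and-demand piece of that model can be sketched in a few lines. Assuming simple linear curves (a deliberate simplification; the curve parameters below are hypothetical, and a production model would also fold in lifetime value and price elasticity), the per-segment equilibrium is where marginal cost meets marginal utility:

```python
def equilibrium_quantity(a, b, c, d):
    """Solve linear demand P = a - b*Q against linear supply P = c + d*Q.

    Equilibrium is where marginal utility (demand) meets marginal cost
    (supply): a - b*Q = c + d*Q, so Q* = (a - c) / (b + d).
    """
    if a <= c:
        return 0.0  # willingness to pay never covers marginal cost
    return (a - c) / (b + d)

# Hypothetical curve parameters for two demographic segments.
print(round(equilibrium_quantity(a=10.0, b=0.002, c=2.0, d=0.002)))  # 2000
print(round(equilibrium_quantity(a=6.0, b=0.004, c=2.0, d=0.004)))   # 500
```

Running the same calculation per segment yields a distinct optimal sales level for each, rather than one equilibrium for the whole market.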

Then and only then can creative, offers, and messaging be specifically tailored to achieve the desired outcome: a new customer is acquired that is likely to retain.

The chart below shows three curves over bars. The bars represent the number of households in each segment for the market. The curves are based on volume expectations above (or below) the current market penetration of each segment.

This model produces expected levels of overall market potential and ultimately informs how segments are selected for a campaign.

For example, in the above chart, the left-most data point shows 2,000 households with approximately 40% penetration. Modeling suggests up-side potential to 41%, a theoretical maximum penetration beyond which diminishing returns overwhelm any volume gains. Looking across the chart to the right, several clusters show bigger growth potential than others, some close to a 3% improvement.

The risk in traditional campaign planning, beyond just hitting a number, is looking only at the individual segment bars. Close to the middle is one segment with nearly 6,500 households yet a current sales penetration of only about 8%.

Without proper analytics applied, the marketer is asked (or told): “If the left most bar/curve has 40% penetration, you sure are missing a lot of customer growth potential in that one sticking out like a sore thumb on the right. Now, fix it.”

Again, back to the economic theory: for each segment there is an optimal equilibrium. For the left-most segment in the example, it is about 40%; for the large segment to the right, it is closer to 8%. The only way to move the needle as you move right on the chart is to make a change the consumer (segment) recognises enough to alter purchase behaviour.
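The per-segment upside can be expressed as a simple calculation: households times the gap between the modeled equilibrium penetration and current penetration. The household counts, penetrations, and ceilings below are hypothetical, loosely echoing the chart described in the text.

```python
# Incremental volume per segment: households * (equilibrium - current),
# floored at zero. All numbers are illustrative.
segments = [
    {"name": "left_most",    "households": 2_000, "current": 0.40, "equilibrium": 0.41},
    {"name": "mid_large",    "households": 6_500, "current": 0.08, "equilibrium": 0.08},
    {"name": "right_growth", "households": 4_000, "current": 0.05, "equilibrium": 0.08},
]

def upside_units(seg):
    """Households reachable above current penetration, floored at zero."""
    return seg["households"] * max(seg["equilibrium"] - seg["current"], 0.0)

for seg in segments:
    print(seg["name"], round(upside_units(seg)))
# left_most 20 / mid_large 0 / right_growth 120
```

Note that the large middle segment, the one "sticking out like a sore thumb," contributes zero upside because it is already at its equilibrium, which is exactly the point the economic theory makes.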

Applying the microeconomics supply and demand equilibrium theory along with consumer segmentation systems to your audience marketing efforts can yield campaigns designed around very granular segments – each with its own expected return that, when added together, get to a realistic and attainable grand total.

The great news is this level of advanced analytics, segmentation, modeling, and forecasting is achievable for publishing companies of any size.

About Greg Bright
