Every marketer wants to maximise the return on their investment. But finding the best combination of tactics to maximise response and lower cost per transaction is a challenge when time and resources are limited.

The perfect mix of audience, channel, message, and response options for any campaign – in other words, the “sweet spot” – is also a moving target. Add to this the time, discipline, and cost required to test and measure results from varying tactics, and it is easy to understand why time- and budget-strapped marketers might not have the luxury of knowing what works best; more often, they’re limited to what they can glean from short-term results.

I remember a trip to my parents’ home shortly after our daughter started walking. After we arrived, she immediately took off to explore, her grandfather following close behind.

From the kitchen, I could hear my dad saying things like, “be careful,” “don’t touch,” and “no, no.”

After we corralled my daughter, I remember Dad asking, “Why does she (insert behaviour here)?” My answer was quite simple. I told him that my daughter likely didn’t know why she did what she did, so it was unrealistic to expect me to understand – let alone explain – the motivation for her behaviour.

Marketers have been trying to predict and explain consumer behaviour for decades, if not centuries. It’s natural to want to know why something works – or doesn’t. The challenge is that most marketers believe they don’t have the time or resources to test and measure campaign variables and isolate the tactics that deliver the best results.

Aristotle wasn’t talking specifically about marketing when he said, “The whole is greater than the sum of its parts,” but his statement is an accurate assessment of marketing campaigns. Take, for instance, a direct response campaign in which success is measured in terms of the number of orders received.

While we know successful direct marketing depends on the right list, a meaningful offer, and a clear call to action, isolating each of those elements to provide an objective measure of success takes thought, time, and effort.

Many marketers might not know how to set up an objective test to measure the impact campaign variables have on final results. Here are some basics to consider, as well as some pitfalls to avoid when attempting to isolate the keys to campaign success.

First, let’s assume we can produce a list of “like” consumers for our test and assign them at random to our campaign variants. Now let’s move on to where most marketers start: the creative presentation of the offer/message. Simple A/B testing pits two dramatically different messages – art, copy, and so on – against each other.
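
To make random assignment concrete, here is a minimal sketch in Python. The customer IDs and group sizes are invented for illustration; a real test would draw from an actual mailing list or house file.

```python
import random

# Minimal sketch of a random A/B split. The customer IDs below are
# placeholders; in practice this list would come from your house file.
customers = [f"customer_{i}" for i in range(10_000)]

random.seed(42)           # fixed seed so the split is reproducible
random.shuffle(customers)

midpoint = len(customers) // 2
group_a = customers[:midpoint]  # receives message A
group_b = customers[midpoint:]  # receives message B

print(len(group_a), len(group_b))  # 5000 5000
```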

Multi-variate testing involves testing various combinations of creative elements – graphics, headline, subhead, and text – and requires an adequate sample size for every combination you test. The first step is to clearly identify what you are testing.
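
To see why sample size is the catch, consider a quick sketch. The creative elements, variant names, and the per-cell figure of 2,000 recipients below are assumptions for illustration, not benchmarks.

```python
from itertools import product

# Hypothetical creative elements, two variants each.
graphics  = ["photo", "illustration"]
headlines = ["benefit-led", "question-led"]
subheads  = ["with subhead", "without subhead"]
body_copy = ["long copy", "short copy"]

# A full-factorial multi-variate test needs one test cell per combination.
cells = list(product(graphics, headlines, subheads, body_copy))
print(len(cells))  # 16 combinations

# If a reliable read on a single cell takes roughly 2,000 recipients
# (an assumed figure), the whole test needs sixteen times that:
print(len(cells) * 2_000)  # 32,000 recipients
```

Every element you add at least doubles the number of cells, which is why multi-variate tests demand far larger audiences than a simple A/B split.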

If you’re testing one list (audience segment) versus another, then you should send the same message to both lists. If you are testing overall creative and strategy alternatives, then an A/B test should suffice. If you are attempting to isolate specific tactics and their contribution to campaign success, then multi-variate testing should be employed.
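
Whichever test you run, the read-out comes down to the same question: is the difference in response rates real, or could chance alone explain it? A standard two-proportion z-test is one way to answer; here is a self-contained sketch with invented order counts.

```python
from math import sqrt, erf

def two_proportion_z(orders_a, sent_a, orders_b, sent_b):
    """Two-sided two-proportion z-test: is the gap between two
    response rates larger than chance alone would explain?"""
    p_a = orders_a / sent_a
    p_b = orders_b / sent_b
    pooled = (orders_a + orders_b) / (sent_a + sent_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical results: 5,000 pieces per variant, 110 vs 150 orders.
z, p = two_proportion_z(110, 5000, 150, 5000)
print(f"z = {z:.2f}, p = {p:.4f}")  # roughly z = 2.51, p = 0.012
```

A p-value near 0.012 suggests the lift is unlikely to be luck, though the threshold you act on is ultimately a business judgment.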

The deeper you delve into finding out what works and what doesn’t work, the greater the thought and effort required and the greater the benefits to your ongoing marketing efforts.

When it comes to direct response marketing, the testing and measurement journey is worth the destination of “improved results.” It sure beats “flying in the dark” or guessing what variables are contributing to your marketing success or failure.