You would think this topic would be pretty simple. At 50,000 feet it is: Split the e-mail list into two halves and call it a day.
In the weeds of implementation, it is a whole different world. How big should the test group be: 50%, 20%, or something else? Should you run a simple A/B test or a multivariate one? How much (or how little) should you change between the A and B designs?
Should the design look somewhat similar, or is a radical variation ok? What criteria do you use to decide how to split the list into groups? How do you vary the pieces: Is one a true control, or do you just constantly try variations?
In e-mail campaigns, can you use opens and click-throughs to dynamically pick the winner mid-campaign? How do you measure direct mail? Are there other campaigns happening at the same time to the same groups (telemarketing, for example)?
So much for simple.
Then when the campaign completes, if the test group sells 50 units and the control group 43, the winner is easy to pick, right? Or is it?
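Whether 50 really beats 43 depends on how big the groups were. As a minimal sketch (the 5,000-recipients-per-group figure is a hypothetical assumption, not from the campaign above), a two-proportion z-test shows how weak that gap can be:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test for campaign conversion counts."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value via the standard normal CDF (erf identity)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# 50 vs. 43 units, assuming (hypothetically) 5,000 recipients per group
z, p = two_proportion_z_test(50, 5000, 43, 5000)
print(f"z = {z:.2f}, p = {p:.2f}")
```

At this hypothetical scale the p-value lands around 0.47, nowhere near the conventional 0.05 threshold, so the seven-unit gap could easily be noise rather than a real winner.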
To split or not to split
The short answer: Split every campaign related to sales and retention efforts. Every campaign is a chance to test and improve. People are fickle and will respond differently to ......[more]
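If every campaign is split, the split itself should be reproducible. One common way to implement that (a minimal sketch; the hashing scheme and the fractions shown are illustrative assumptions, not something the post prescribes) is to hash each address so the same person always lands in the same group:

```python
import hashlib

def assign_group(email: str, test_fraction: float = 0.5) -> str:
    """Deterministically assign an address to 'test' or 'control'.

    Hashing (rather than a random shuffle) means the same address lands
    in the same group every time the selection is re-run.
    """
    digest = hashlib.sha256(email.strip().lower().encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF   # map hash to [0, 1]
    return "test" if bucket < test_fraction else "control"

# Hypothetical list: a 20/80 test/control split
emails = [f"user{i}@example.com" for i in range(10000)]
groups = [assign_group(e, test_fraction=0.2) for e in emails]
print(groups.count("test"))   # roughly 2,000 of the 10,000
```

The deterministic assignment also helps with the cross-channel question above: if telemarketing and e-mail both hash the same way, a customer sits in the same cell across every concurrent campaign.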
28 September 2015 · By Greg Bright
This is a continuation of my previous post, which began exploring the difference between Big Data and lots of data.
Another way to approach the Big Data technology decision is to start with looking at your data and categorising it into what is and what is not Big Data.
It is not Big Data, in my opinion, if you are looking at customer account history, digital subscription access log summary data (device, OS version, user information), or payment history.
Nor is it predictive analytics; predictive analytics is a process applied to data (big or not), not the data in and of itself.
Likewise, if you are sitting on less than a dozen terabytes of data and it is growing at less than 20% a year and you are just running reports to understand what happened with your product, you don’t have Big Data (nor do you have a Big Data technology need).
You have lots of data, but not the kind of data, or the analytical application needs, that make the Big Data technologies ......[more]
23 September 2015 · By Greg Bright
It seems like every day I get a letter from a company telling me that I’ve got to get a “Big Data solution” installed or my company will fail. I also get “real world of Hadoop” e-mails – usually links to white papers on why everyone is adopting Hadoop and NoSQL solutions to solve their problems.
The phone calls come as well. Usually the calls are from big name IT firms. They invariably have the same message: They have a solution and consultants that will solve all of my problems by installing a Big Data solution.
I tend to like to play with the callers a bit, first to stay up to date with the latest technology, and second to see if the callers really understand my company or if it is just a cold-call script.
I say something like: “What problems? Do you even know my business?” They usually don’t have a clue about my company, making their replies sound like the reading of a script loaded into their ......[more]
27 August 2015 · By Greg Bright
Mistakes happen. The goal with training, checks and balances, and defined procedures is to prevent as many as possible – hopefully all of them.
However, some things inevitably are not thought of, appear out of nowhere, and catch even the best organisation off guard.
To keep mistakes to a minimum, when helping companies launch direct marketing efforts, I give them a list of the top 10 things that will go wrong in hopes that they can avoid everything on the list. I’ve come to realise that many of the mistakes seem to be unavoidable, and each client will make them regardless of any advance warning.
The inside joke is that there are now more than 25 things on my top-10 list.
Between problems with dirty data coming in, mistakes in selections, broken software, or even problems with the type of ink in a printer, mistakes happen. And they result in everything from having a good chuckle (the AMEX logo next to the VISA check box) to a major financial issue (the entire 80,000-piece mailing sent to the newspaper’s office instead of the prospects’ because ......[more]
05 August 2015 · By Greg Bright
So did it work?
You’ve spent several weeks designing a marketing campaign: designing the piece, setting up special rate codes and promotion codes, building a consistent cross-channel message and scripting, and building out highly refined selection criteria for the lists used in the campaign.
Out into the mail the piece goes. Up go the run-of-site ads. The e-mails are sent and, finally, the telemarketing follow-up takes place. Three weeks later you send a second e-mail; then the campaign wraps up and the promotions on the Web site return to the standard copy.
Did it work? Which channels performed, and which didn’t? What was the cost?
Monitoring response by key indicators is critical, and those monitors must be in place right from the very beginning.
If you are a typical media operation, you probably have the next campaign set to launch right away, and executives in the organisation are probably already asking questions about this one.
If it went right, can you do it again? And if it went poorly, what are you (and the rest of the campaign design group) doing on the expense side to ......[more]
12 July 2015 · By Greg Bright
My favourite words from sales management after a campaign executes are “Do it again.”
It is always reassuring to hear that all of the work involved in producing a well-designed acquisition campaign brought in the results expected. The modeling worked, the selections worked, the size of the list was right, the data was well paired with the creative and offers presented to the recipients. Success!
And, my most feared words from sales management? “Do it again.”
What? Can the same statement be both loved and feared? Yes.
From the data perspective, it is impossible to “do it again.” The data has changed from the initial campaign run 60 days ago to today’s running. Those addresses that made the first run a success are now customers, so they aren’t available in the next pass. A literal running “again” just picks all of the non-responders from the first pass.
Sure, some will respond, but nowhere near the first attempt. So, doing it again won’t make anyone ......[more]
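The data shift is easy to see in a toy sketch (all names and structures here are hypothetical): suppressing the first pass’s responders, who are now customers, leaves exactly the non-responders.

```python
def next_pass_selection(original_list, first_pass_responders):
    """Re-running the 'same' selection after a campaign.

    Responders converted to customers, so they are suppressed;
    what remains is, by definition, the first pass's non-responders.
    """
    responders = set(first_pass_responders)
    return [record for record in original_list if record not in responders]

prospects = ["p1", "p2", "p3", "p4", "p5"]
responders = ["p2", "p5"]          # became customers in the first pass
second_pass = next_pass_selection(prospects, responders)
print(second_pass)   # ['p1', 'p3', 'p4'] — only non-responders remain
```

The second pass is not the same list at all: it is the residue of people who already declined once, which is why the response rate inevitably drops.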
15 June 2015 · By Greg Bright
In the age of Big Data, news media companies are operating with an unprecedented amount of information on which to base critical business decisions. The ability to append demographic, lifestyle, and transactional data; to unify online and offline data into a “single customer view”; and to build advanced segmentation models at the individual customer level is very exciting.
But in the end, we are still confronted with the challenge of aligning supply with demand to justify our marketing investments and effectively measure the return on those investments.
As marketers we have all asked or been asked the question: “Can’t you get me more customers?”
In other cases, it comes in the form of the statement “I need X new customers this month.”
Both are best addressed through very careful analytics and application of micro-economic theory.
Volume-based planning: a flawed model
A common mistake of marketers is to follow the simple, traditional, volume-driven approach (i.e. “I need X new customers this month”). In the campaign design, we skip the supply and demand consideration and jump to the conclusion – not because we did the math but because ......[more]
17 May 2015 · By Greg Doufas
The data scientist.
Has there ever been so much interest in, and demand for, a job title?
Confusion, too. Who are these data scientists? What do they do? What qualifies them, and how do you find these people?
“A data scientist is a statistician who lives in San Francisco.” – Josh Wills
In the 1980s and 90s, it was data mining, database marketing, and other business intelligence skill sets that were all the buzz. These jobs required people who could manipulate and transform data through the use of popular data/database code (SQL) and statistical software (SAS, SPSS, etc.).
The data was typically structured and relational in nature. IT “owned” the databases and managed the administration required for the software being used to access and speak to the data.
Times were simpler back then. A database dictionary, a proper understanding of statistical methods, and some degree of certification (or at least advanced training) in statistical software ......[more]
16 February 2015 · By Greg Doufas
“Don’t let anyone just tell you what they value. Look at their budget and daily calendar and you can tell them what they value.”
This was some of the best advice I’ve ever been given.
It was early in my career. I was managing my first truly complex business intelligence programme, and I was struggling to map business objectives to an analytics roadmap. Even at executive levels, opinions about the right objectives to quantify and measure were all over the map.
Essentially, no one could agree on what our key performance metrics were. I was stuck and it was that advice that helped me most at the time: I needed to cut through the empty statements of intent and opinion and come to terms with the truth.
The first step was ......[more]
22 December 2014 · By John M. Lervik
Have you ever been reading a serious article about troubles in the Middle East, only to find yourself bombarded by irrelevant recommendations for articles about Justin Bieber or how to “lose weight fast”?
Does this feel off-putting to you? It should.
For publishers to build a loyal and engaged readership, they need to understand their own content inside out to offer users valuable and relevant recommendations that motivate them to engage, read, and share content.
Publishers must scratch their heads when they see traffic pouring into their site, only to watch a staggering 75% of users leave after a single page view. This bounce rate reflects the short attention span of digital audiences, particularly on mobile, but it also highlights the publisher’s inability to hold its target audience once they are already on its site.
Many publishers have made a half-hearted effort to engage users via social media channels. This may drive traffic to individual articles, but it’s rarely loyalty building.
More likely, it will be another “one-and-done” page view where a user clicks the link, goes to the recommended page, reads some of the article, and returns to where they came from without exploring the publisher’s site.
This kind of “a la carte” viewing is driving page view numbers, but ......[more]