The annual ritual of sitting down and calculating rate changes and new rate cards is approaching for many publishers.

A question to the data people: As the data person for your operation, are you involved? A question to the publishers: Are your data people involved? A question to everyone: Do you use the data you collect with every ad and contract signed to determine how to adjust the rates? Or, are you dusting off your trusty HP 12C calculator and adding a straight-line percent to the old rates?

I strongly suggest that this year is the perfect year for you to put a little data science into the rate exercise.

Before republishing rate cards as is, take a good look into whether advertising rates are set appropriately.

At a high level:

  • Are the rates realistic?
  • Are there rates on the card that are never used?
  • Do you have revenue (or inch) commitment levels that no advertiser will ever reach?
  • Do you have discounts starting too soon in the card?
  • Are the discounts progressive across the rate card?
  • Are you putting the rate change where it will yield value?
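Several of these checks can be automated once the rate card is in a machine-readable form. Here is a minimal sketch (the tier breakpoints and per-inch rates are invented for illustration) that verifies the discounts are progressive, i.e. the per-inch rate strictly decreases at every higher commitment level:

```python
# Hypothetical rate card: (inch-commitment level, per-inch rate).
# Substitute your own rate table; these numbers are made up.
rate_card = [
    (0, 42.00),     # open rate
    (100, 39.50),   # 100-inch commitment
    (250, 37.25),
    (500, 34.00),
    (1000, 31.50),
]

def discounts_are_progressive(card):
    """True if the per-inch rate strictly decreases at each higher tier."""
    rates = [rate for _, rate in sorted(card)]
    return all(later < earlier for earlier, later in zip(rates, rates[1:]))

print(discounts_are_progressive(rate_card))  # True for this sample card
```

The same pattern extends to the other checks: flag tiers with zero historical usage, or commitment levels no advertiser has ever reached.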

Data analytics will help by bringing facts into the process:

  • At what rates were ads actually sold?
  • Is the sweet spot in the rate card (actual sold-to rate) where you want it?
  • For revenue-based agreements, is the effective rate-per-inch where you want it?
  • Are you seeing day-to-day rate differences beyond the daily/Sunday split?
  • Are individual advertisers getting a consistent (or explainable) rate?
  • Can you build a “long-tail” graph from the data to see where the revenues are really coming from?
  • Can you build a “what if” model to look at the impact of moving a rate?
  • Are your contract levels commensurate with a business’s size (“getting your fair share”)?
  • Do you have enough historical data to build elasticity models?
  • Do you have enough external sources to understand market share?
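The long-tail graph from that list is straightforward once the billing data is clean: rank advertisers by revenue and compute the cumulative share each rank contributes. A minimal sketch, with invented advertiser names and revenue figures:

```python
# Invented revenue-by-advertiser figures; substitute your own billing extract.
revenue = {
    "MegaMart": 120_000, "AutoGroup": 85_000, "Realty One": 40_000,
    "Corner Deli": 9_000, "Pet Palace": 6_000, "Yoga Loft": 3_000,
}

total = sum(revenue.values())
ranked = sorted(revenue.items(), key=lambda kv: kv[1], reverse=True)

# Cumulative share of revenue contributed by the top-N advertisers.
shares = []
running = 0
for name, amount in ranked:
    running += amount
    shares.append((name, running / total))

for name, share in shares:
    print(f"{name:12s} {share:6.1%}")
```

Plot the cumulative shares and the head-versus-tail split is immediately visible: in this toy data, two advertisers carry the bulk of the revenue.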

Well, that is a starting point.

My point is that the information held in the data needs to be transformed into actionable wisdom. The job of the data scientist is to do that transformation. I spend my days in the terabytes of the data, but I can’t go to my boss and say “the data says …” I have to show him information that he can use. Trust me, talking in terms of standard deviations, elasticity, and R² doesn’t work!

Turning advertising rate information into actionable bits will take time. The data will be a mess, and even the simplest questions (like how much revenue the weekly entertainment section produced) will take real digging. It is well worth the effort, though.
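Even that "simplest question" reduces to a grouped sum once the data is in shape. A standard-library sketch, where the record layout and section names are assumptions about what a cleaned billing extract might hold:

```python
from collections import defaultdict

# Assumed record shape for a cleaned billing extract:
# (run_date, section, revenue). The rows below are invented.
ads = [
    ("2013-11-01", "Entertainment", 1250.00),
    ("2013-11-01", "Sports", 980.00),
    ("2013-11-08", "Entertainment", 1410.00),
    ("2013-11-08", "Main News", 2200.00),
]

# Sum revenue per section.
revenue_by_section = defaultdict(float)
for _, section, amount in ads:
    revenue_by_section[section] += amount

for section, section_total in sorted(revenue_by_section.items()):
    print(f"{section:14s} {section_total:>9,.2f}")
```

The hard part is never this step; it is getting the extract clean enough that the sum means what you think it means.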

Here is a quick example. The two charts show how two different segments (based on SIC codes) track rate yield against a benchmark (a composite of about six factors). The deliverable was to show how one particular segment had a disconnect from the benchmark value. Putting the two charts side by side produced the “aha” moment that a spreadsheet alone can’t deliver.
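The comparison behind those charts can be sketched numerically. In this toy version (segment labels, yields, the benchmark value, and the 10% threshold are all invented stand-ins for the multi-factor composite), each segment's deviation from the benchmark is computed and the disconnected one is flagged:

```python
# Invented per-segment rate yield (revenue per inch) and a single benchmark
# value standing in for the multi-factor composite.
benchmark = 28.50
segment_yield = {
    "Automotive (SIC 55)": 27.90,
    "Real Estate (SIC 65)": 21.40,  # the disconnected segment
}

# Flag any segment more than 10% off the benchmark (threshold is arbitrary).
for segment, y in segment_yield.items():
    deviation = (y - benchmark) / benchmark
    flag = "  <-- disconnect" if abs(deviation) > 0.10 else ""
    print(f"{segment:22s} {y:6.2f}  {deviation:+6.1%}{flag}")
```

Charting the same deviations over time is what produces the visual "aha"; the table alone rarely does.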

Using data to state a case can help yield the "aha" moment.

For the data scientists: as the rate analysis is done, focus on the deliverable. Remember the audience — it is probably someone a couple of pay grades above the person you initially hand the results to.

Stay with the project. It is complicated, and the data will be a fair amount messier than you get from a circulation analysis. But stay with it. We need analytics to support our rate movements. Trust me, the advertisers aren’t going to give us more money if we do anything that looks arbitrary.

Enjoy the data dive into advertising rates. Watch your filters, remember the audience, and make Python and R your new best friends.