A century ago, advertising pioneer Claude Hopkins published a seminal work titled Scientific Advertising. In it, Hopkins set down the core philosophy of a direct response marketer for the first time:
> Every ad is surrounded by countless appeals. Every effort involves much expense. The man who wins out and survives does so only because of superior science and strategy.
Today, applying science to advertising is nothing new, but direct response marketers still stand alone in their ability to conduct measurable experiments. So does that mean we are properly utilizing our “superior science”?
Before I answer that question, let’s review the scientific method. According to Google, it’s a six-step process:
- Ask a question.
- Do background research.
- Construct a hypothesis.
- Test your hypothesis by doing an experiment.
- Analyze your data and draw a conclusion.
- Communicate your results.
As it happens, this lines up pretty well with the DRTV process:
- We ask a question: “Is this product likely to be a hit?”
- We do background research by reviewing sales and category information, studying DRTV history and using tools such as online surveys.
- We construct a hypothesis: “This is our next hit product.”
- We test that hypothesis by airing a two-minute experiment on select national cable networks.
- We analyze the data from that test (in the form of a media report), and we draw a conclusion (drop it or roll it out).
- Finally, we communicate our results internally (in the form of emails and meetings) and externally (in the form of media plans and sell sheets).
It’s all quite scientific. We should pat ourselves on the back. Except…
In the field the other day, I met with a client to analyze some test data we had collected for another type of experiment. Our hypothesis was that a particular pre-DRTV metric could help predict DRTV success. This is my personal Holy Grail, and I have been on this quest in various forms for the last decade. To be honest, it has been a frustrating experience at times, and on this particular day we were looking at yet another failed hypothesis.
This resulted in what you might expect—a lot of skepticism about ever finding our Grail. Then an interesting thing happened: We allowed our skepticism to extend beyond the scope of our project. Below are just two of the intriguing questions we asked that day.
Is a DRTV test even a good predictor of DRTV success?

Go back to No. 4 of our “scientific process” and ask yourself: Have you ever dropped a product after it failed a TV test only to have someone else roll out with it? I have. In fact, I have been on both sides of the experience multiple times. One extreme example immediately comes to mind. Our team was facing a potential duel with a powerful player, so we agreed to cut a deal instead of going head-to-head. Our commercial had a great CPO. Their commercial, we learned later, did not. Not even close. Yet the products and offers were exactly the same, the commercials were both pretty solid and there was only a few weeks’ time between the tests.
The obvious explanation is there was some other variable at play we didn’t identify. But ask yourself: How often does an unknown variable lead you in the wrong direction?
Is DRTV success a good predictor of retail success?
One of the aphorisms of science is “correlation is not causation.” Just because many successful DRTV products go on to become successful retail products—in a highly promotional section of the store—doesn’t mean TV success causes retail success. Ask yourself: How many times have you seen a strong TV item turn into a weak retail item? And how many times have you seen the reverse?
Items with mediocre CPOs can sell surprisingly well at retail, and that raises yet another question: If we believe a certain amount of TV media is needed to support retail, how is that possible? Media spending is inversely proportional to CPO: the better (lower) an item’s CPO, the more media it can profitably absorb, while a mediocre CPO chokes off spending quickly. Speaking of which, why is it that no two DRTV marketers will give you the same answer when you ask them how much media is needed to “drive” retail sales? Shouldn’t that be pretty well established by now?
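The inverse relationship between CPO and media spend can be illustrated with a toy model (all numbers hypothetical, not industry data): assume each additional media dollar reaches less responsive audiences, so marginal CPO climbs with spend, and a campaign buys media only up to its break-even (allowable) CPO.

```python
def max_profitable_spend(base_cpo, allowable_cpo, saturation=100_000.0):
    """Largest media spend before marginal CPO hits break-even, under a
    simple linear diminishing-returns model (purely illustrative):
        marginal_cpo(spend) = base_cpo * (1 + spend / saturation)
    """
    if base_cpo >= allowable_cpo:
        return 0.0  # unprofitable from the first dollar
    # Solve base_cpo * (1 + s / saturation) = allowable_cpo for s.
    return saturation * (allowable_cpo / base_cpo - 1)

# A strong item (low CPO) supports ten times the media of a mediocre one:
strong = max_profitable_spend(base_cpo=20, allowable_cpo=45)    # 125,000
mediocre = max_profitable_spend(base_cpo=40, allowable_cpo=45)  # 12,500
```

Under this sketch, a mediocre-CPO item simply cannot justify much media, which is what makes its later retail success so puzzling if TV media really “drives” retail.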
Because of my admiration for Hopkins and his book, I named my consulting company SciMark. (SciAd didn’t sound as good.) But while “scientific marketing” started out as a statement, today it is more of a reminder. Because we usually can, we should always strive to question and test our assumptions. Too often, we accept a hypothesis without ever completing the rest of the scientific process.
Jordan Pine is a consultant specializing in short-form DRTV and the author of The SciMark Report (scimark.blogspot.com).