Let me start this post with a statement that should be uncontroversial to any serious-minded industry professional: DRTV marketing is a science. Not exclusively, but mostly. If you disagree, I dare you to launch the next product you like by spending $100,000 on media the first week.
No one with DR experience would take that risk. So, like good marketing scientists, we come up with our hypothesis (America needs our solution to problem X), conduct our experiment (by shooting a spot and testing a small amount of media), and make our decision based on our analysis of the results. We apply this scientific method to most things of significance that we do. However, in doing so, it is possible to lull ourselves into a false sense of security.
A case in point is an analysis we conducted recently of two call centers. The client started with Call Center A in January and then switched to Call Center B in mid-March. A few weeks later, we tried to answer the client's question: Which call center had done the better job? Now, a properly designed scientific experiment would have pitted the two call centers against each other in a real-time A/B split, but we didn't have that luxury. We had to take the existing data and control the variables as best we could.
The first challenge was to decide which metrics were even relevant. We decided that only two metrics were within the call centers' control: the call-to-order conversion rate and the average sale (revenue per order). In fact, we further reduced those two metrics to a single metric called "revenue per call" (RPC). Like cost per call (CPC), it is calculated by dividing dollars by calls -- except you use sales dollars instead of spending dollars. This appeared to be a pure metric: if you send two call centers an equal number of calls, all you really want to know is how well they converted those calls into dollar bills.
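To make the RPC arithmetic concrete, here is a minimal sketch in Python. All of the numbers (call volume, sales figures) are hypothetical and purely illustrative; they are not from the client's campaign.

```python
def revenue_per_call(sales_dollars: float, calls: int) -> float:
    """Revenue per call (RPC): sales dollars divided by total calls.

    Same shape as cost per call (CPC), but with sales dollars in the
    numerator instead of media spending dollars.
    """
    return sales_dollars / calls

# Hypothetical week: both centers receive the same number of calls.
calls = 5_000
center_a_sales = 150_000.0  # dollars
center_b_sales = 140_000.0  # dollars

rpc_a = revenue_per_call(center_a_sales, calls)  # $30.00 per call
rpc_b = revenue_per_call(center_b_sales, calls)  # $28.00 per call
print(f"RPC gap: ${rpc_a - rpc_b:.2f} per call")
```

Even a small per-call gap compounds quickly: on 5,000 calls a week, a $2 gap is $10,000.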
The result of the RPC analysis showed that Call Center A had outperformed Call Center B by $4 per call. That's huge! It amounted to thousands of dollars over the average week. Both call centers had produced about the same average sale, but Call Center A had done a much better job of converting calls into customers. The right decision appeared to be switching back to Call Center A right away ... until we realized we had not properly controlled all of the variables.
It started when the client casually mentioned he had been out of stock for the last several weeks. As anyone who runs DRTV campaigns for a living knows, the bigger the back-order, the more "Where's my order?" calls you get. Unfortunately, a lot of people don't call the customer-service number to ask this question. They call the number they originally wrote down -- i.e., the ordering number. And those calls, which for obvious reasons cannot be converted to sales, dilute the call-to-order conversion rate and, as a result, the RPC metric. In this case, that put Call Center B at a huge disadvantage.
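A quick sketch shows how the dilution works. The figures below are hypothetical, but the mechanism is the one that hurt Call Center B: identical selling performance looks worse once non-order calls enter the denominator.

```python
def rpc(revenue: float, total_calls: int) -> float:
    """Revenue per call: revenue divided by ALL inbound calls,
    including calls that could never have become orders."""
    return revenue / total_calls

# Hypothetical: identical selling performance in both periods.
order_calls = 4_000   # genuine order-line calls per week
conversion = 0.50     # share of those calls converted to orders
avg_sale = 60.0       # dollars per order
revenue = order_calls * conversion * avg_sale  # $120,000

# In-stock weeks (Call Center A): no status calls on the order line.
print(rpc(revenue, order_calls))                 # $30.00 per call

# Back-order weeks (Call Center B): 1,000 "Where's my order?" calls
# hit the ordering number and inflate the denominator.
wismo_calls = 1_000
print(rpc(revenue, order_calls + wismo_calls))   # $24.00 per call
```

Same reps, same close rate, same average sale -- and yet RPC drops by $6 simply because the denominator grew.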
It gets worse. Media spending had also varied significantly. It had remained somewhat consistent during Call Center A's tenure, but had jumped from hiatus levels to full roll-out levels during Call Center B's few weeks on the job. Because of issues like drag and media mix, this dramatic change in spending had also distorted the quality of the calls the new call center was receiving. (Significant changes in spending can also distort the average sale metric, which you know is true if you have ever looked at a media report and noted the variations in average sale by station. Apparently, viewers of some networks are more frugal than viewers of other networks. It may even be true that time of day affects frugality as well.)
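The average-sale distortion, at least, is just a weighted average at work. Here is a sketch with invented station names and figures: shift the call mix toward the more frugal audience and the blended average sale falls, even though no individual viewer's behavior has changed.

```python
# Hypothetical: average sale differs by station, so the blended
# average moves when the media mix moves. Station names are invented.
station_avg_sale = {"Network X": 70.0, "Network Y": 50.0}  # dollars

def blended_avg_sale(call_share: dict[str, float]) -> float:
    """Average sale across stations, weighted by each station's
    share of inbound calls."""
    return sum(station_avg_sale[s] * share
               for s, share in call_share.items())

hiatus_mix  = {"Network X": 0.7, "Network Y": 0.3}
rollout_mix = {"Network X": 0.3, "Network Y": 0.7}
print(blended_avg_sale(hiatus_mix))   # $64.00
print(blended_avg_sale(rollout_mix))  # $56.00
```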
As you've no doubt realized by now, these uncontrolled variables were fatal to our attempt at an analysis. We decided the only solution was to wait until the product was back in stock and spending had stabilized, give it a few weeks, and then try our analysis again.
The bottom line: Because of our bad science, our client almost made a bad decision. It's possible that Call Center B, once the playing field is level, will end up outperforming Call Center A. Or, more likely, the two call centers will end up performing about the same. Yet if we hadn't caught those uncontrolled variables, we would have disrupted a campaign and lost momentum re-issuing tapes for no good reason.
Ask yourself: How many important decisions are you making based on DR science? Is your DR science good DR science? As we discovered, that question needs to be answered carefully.
Good DR Science
Posted by Jordan Pine · April 10, 2012, 9:04 PM