The Studies You Don’t See

We’ve all seen case studies and white papers showing impressive results from all manner of activities: switching from vendor A to vendor B, adopting new software product X, measuring the offline impact of online marketing…anything you can think of.

I’ve never had much use for case studies because they mean so little and are so easily ‘cooked.’ Until recently we didn’t have any on our website or in our sales pitches because I figured everyone viewed them the way I did. Our sales team has taught me that they have great value in the sales process and are therefore important. People want to see concrete evidence of results, and even if they understand that case studies as a class are of questionable value, they may need something concrete to get their bosses to sign off.

4 reasons I’ve never put stock in case studies:

  1. Large improvements can be made either by doing great work to raise the bar or by doing mediocre work on a badly broken program. The lower the bar, the better the case study, but does that really tell us anything?
  2. Often, the impressive results are attributable to factors not mentioned in the study. Advertisers may dramatically change their product or service offerings, their marketing objectives and ROI requirements, or their methodology for measuring success; the marketplace itself may shift. Any of these can create a totally different result set that has nothing to do with the “before/after” comparison presented.
  3. The study methodology can be grievously flawed, making the results meaningless. Sadly, such studies nevertheless prove persuasive to folks who don’t look at or think critically about study methodologies.
  4. Interested parties often bury the bad results. Few companies are willing to publish the results of studies that run counter to the story they’d like to tell.

This post is a riff on that last point.


Physics students are taught that experiments that aren’t repeatable are more or less worthless. That discipline is a key piece of the scientific method. But physics is comparatively simple. A number of folks may flip out at that statement, but hear me out: physics is a tough subject for students precisely because it’s relatively straightforward and we know a great deal. That means there are wrong answers, and no amount of hand waving can make them right.

Psychology is infinitely more complex. We can figure out with keen precision what will happen when two pool balls collide, but predicting how two people will interact when thrown together is impossible. This complexity creates a great deal of wiggle room for folks to wave their hands.

Medicine is also impossibly complex. Some patients won’t get better even when taking a good drug; some patients will get better regardless of treatment. In drug trials, it is understood that what works for some won’t work for all, and that studies must use control groups to see what lift in positive outcomes can be attributed to the drug being tested, as distinct from any placebo effect. If scientists ignore some of the bad outcomes in the test group, they are cooking the study in a way that creates misleading results.
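To make that arithmetic concrete, here is a minimal sketch, in Python with made-up numbers, of the lift calculation a trial depends on: the drug’s measured effect is the treatment group’s success rate minus the control group’s, and quietly dropping failures from the treatment group inflates that lift.

    # Hypothetical trial numbers, for illustration only.
    treated_improved, treated_total = 120, 300   # 40% improved on the drug
    control_improved, control_total = 75, 300    # 25% improved on placebo

    # Honest lift: improvement attributable to the drug beyond the placebo effect.
    honest_lift = treated_improved / treated_total - control_improved / control_total
    print(f"Honest lift: {honest_lift:.0%}")     # 15%

    # 'Cooked' lift: quietly ignore 50 treatment-group failures before computing the rate.
    cooked_lift = treated_improved / (treated_total - 50) - control_improved / control_total
    print(f"Cooked lift: {cooked_lift:.0%}")     # 23%

The cooked version reports a 23% lift where the honest answer is 15%, and nothing in the published “result” would reveal the difference.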

Not only do the results become meaningless, the scientists have lost the opportunity to learn something. Why doesn’t this drug work for some patients? Is there something in common among the failures? Why does it work in mice but not in humans? Exploring these questions can be the catalyst to important breakthroughs.

In online marketing you will never see a case study that shows “this new technology didn’t help this client at all.” We have on occasion participated in studies that didn’t produce the results the study partner wanted to see and, to no one’s surprise, those study results weren’t published.

We try to be different at RKG. We’ve done lots of tests that didn’t go the way we expected, and didn’t produce the results that would help us increase our revenue. We ‘publish’ the results anyway when we think there is something to learn from the study. Trying to learn why something works and under what conditions, for what types of advertisers, etc., helps us figure out what to recommend to which clients. We’re gaining market share in our industry because we take the long view of client relationships.

Google would benefit from taking the long view here, as well. Google is hot to get advertisers to share data sufficient to show the connection between online advertising and offline revenue. That’s fantastic! It is great to have the kind of resources Google can bring to the table, as these studies are challenging to do well.

Google would be wise to publish the results of not just the success stories but the failures as well. Showing that “for this type of company we haven’t seen much success, whereas for these other types we have” would make for a much more compelling sales pitch to an advertiser than would “burying the bodies” and only talking about successes. Folding in offline marketing efforts to help advertisers understand what offline media does to drive online results would be an even bigger, bolder and more transparent step in the right direction.

In the absence of scientific integrity, the Google sales pitch that “your Google ads drive tons of business offline” will sound no more compelling than every other sales pitch CMOs hear.

If we truly believe that advertisers, acting with perfect knowledge of what’s driving their business, will spend more money online than they are now, then the truth will set us free. Hiding the bodies only serves to convince CMOs that the engines are just another in a long line of sales people trying to get them to cough up more money.

Comments
2 Responses to “The Studies You Don’t See”
  1. Your post hits on a very important point: the value of negative results. In my role as webmaster at an academic medical center, I got to create a Web site for kids with asthma as part of a research study. The first thing the doctor in charge told me was that “no, it doesn’t work” is a very acceptable answer, and not to be discouraged if the study didn’t turn out as we hoped it would. He explained that our “no” would help the next person find a better way to “yes,” and save time and money along the way.

    Kudos to you for being brave enough to publish ALL the results. Because all of the data can help someone, somewhere.

  2. Catena Creations, thanks for your thoughts and great example. Unfortunately, many folks take the short-term view of client relationships. If they can sell an existing client on trying something — even something they know won’t help them — they make money. If they can sell a prospective client using deceptive means, they make money. We think that in the long run client retention is the key to business growth, and that focusing strictly on what is good for the client is the way to achieve client retention. “This won’t work for you” is not an uncommon phrase around here.