THE RKGBLOG

Attribution Myths vs Reality: Part 1) Statistical Limits

The “Big Data” revolution is at its root much more about the power of statistical modeling than it is about data volume. The dramatic decrease in the cost of data storage, along with simultaneous advances in processing power and analytical techniques, has allowed industries, the NSA, and marketers to aggregate massive amounts of data stitched together from disparate sources in ways that allow statistics software to find interesting, unexpected, and sometimes actionable correlations.

That last sentence, properly unpacked, explains why statistical modeling is so valuable in auction-based advertising, and why it generally fails to answer the critical questions in attribution.

In the case of auction-based advertising we want statistics to provide a critical input: historically, what is a click or impression worth given the context (in paid search that might be the keyword; in display, the page and domain on which it’s served), the device, the geography, the time of day and day of week, the proximity of the person to a physical location, the user’s past behavior, etc.

Quality paid search platforms look beyond the keyword to understand characteristics of the keyword: categories and subcategories, brands, themes (like ‘discount’ or ‘promo’ or ‘seasonal’ terms), specificity, landing page types, length, etc. In the world of Enhanced Campaigns we can also look beyond the geography to characteristics of the geography: urban vs rural vs suburban, population density, average household income, etc.

We ask statistics to find the characteristics, and combinations of characteristics, most predictive of traffic value, blend that with the data we have about the specific ad, and tell us what this advertiser should be willing to spend on this user in this context. We marry that historical insight with current business intelligence that could affect the calculated value: news events, promotions, inventory levels, our current best understanding of the diminishing returns curve, etc.

Most importantly, this analysis gives us a clear-as-a-bell action to take that impacts the P&L: Bid X right now! Well-constructed programs with smart use of characteristic tagging and powerful algorithms will generate more revenue for the advertiser within their efficiency needs than will poorly tagged programs with low-rent algorithms.
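
To make that concrete, here is a stripped-down sketch of the arithmetic involved. The feature names, coefficients, promotion multiplier, and efficiency target below are invented for illustration; this is not a description of RKG's (or anyone's) actual bidding model.

```python
# Hypothetical sketch: turning a modeled traffic-value estimate into a bid.
# Every feature, coefficient, and target here is illustrative, not real data.

def predicted_revenue_per_click(features, coefficients, baseline=0.0):
    """Blend historical characteristic weights into an expected value per click."""
    return baseline + sum(coefficients.get(name, 0.0) * value
                          for name, value in features.items())

# Characteristics of this particular auction (keyword, device, geography, ...)
features = {
    "brand_term": 0,        # non-brand keyword
    "promo_term": 1,        # contains a 'discount'/'promo' theme
    "mobile_device": 1,
    "weekend": 0,
    "high_income_geo": 1,
}

# Weights learned from historical click data (made-up numbers)
coefficients = {
    "brand_term": 4.00,
    "promo_term": 0.75,
    "mobile_device": -0.60,
    "weekend": 0.30,
    "high_income_geo": 0.50,
}

rev_per_click = predicted_revenue_per_click(features, coefficients, baseline=2.00)

# Current business intelligence layered on top: a promotion running today,
# plus the advertiser's efficiency target (ad cost as a fraction of revenue).
promo_multiplier = 1.10
target_cost_to_sales = 0.20

bid = rev_per_click * promo_multiplier * target_cost_to_sales
print(f"Bid right now: ${bid:.2f}")
```

The point is simply that the output is a single, immediately usable number; the real work lies in how well the characteristics are tagged and how well the model is tuned.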

Statistics identifies correlations well, and correlations tell us exactly what we need to know to make bidding decisions in auction-based advertising. Statistics matters when you can apply the results of those calculations to a real-world problem and change outcomes as a result.

Attribution data all too often does not exhibit either of these traits.

The fact that someone who has first visited the site through a display ad and then comes to the site through a brand search ad converts better than someone who just visits through a display ad leads to what action? Bid more for brand search ads if the user has been to your site through a display ad? Oh, you’re already at the top of the page. Put a banner on the landing page for those visiting through a display ad that reads: “Please leave the website, go to your search bar and search for our brand name!” so that we increase conversions? We can’t really force people down a particular attribution path.

The problem here is that correlations do not answer the need in cross-channel attribution. We want to know which ads cause users to buy from us, to request more information, to sign up for a newsletter, to download an app, to engage with our brand in ways that make us money now or later. Correlations don’t necessarily tell us that.


Brand ads and brand organic listings often (but not always) fail to generate incremental value, but clicking on them is hugely correlated with success. The same goes for affiliate links, particularly those served when the user was on your website immediately before the affiliate interaction and completed the purchase immediately after.

Ads to your Facebook fans, emails, and retargeted display and search ads will often be highly correlated with success for reasons that have little to do with causality. Your Facebook fans are your loyal customers, as are your email subscribers, as are some fraction of the people retargeted with display, and now, search ads.

Many past customers will buy from you again out of satisfaction with your brand, for reasons unrelated to subsequent advertising. This is not to say advertising to them is a bad idea; it's a great idea. It's just that you have to understand how much lift the advertising creates over expected repeat purchase rates. Correlation calculations cannot tease that out absent control testing, and left to their own devices they will over-credit the influence of these channels.
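
To put rough numbers on that, here is a toy version of the holdout arithmetic. Every figure below is invented for illustration.

```python
# Hypothetical holdout test: how much lift does retargeting past customers
# create over their expected repeat-purchase rate? All numbers are made up.

exposed_customers = 50_000    # past customers shown the retargeting ads
holdout_customers = 50_000    # past customers deliberately held out

exposed_conversions = 2_600   # repeat purchases in the exposed group
holdout_conversions = 2_400   # repeat purchases with no ads at all

exposed_rate = exposed_conversions / exposed_customers   # 5.2%
holdout_rate = holdout_conversions / holdout_customers   # 4.8%

incremental_rate = exposed_rate - holdout_rate
incremental_orders = incremental_rate * exposed_customers

# A correlation-based attribution model tends to credit the channel with every
# converting customer it touched; the control test says only the lift over
# the holdout baseline is truly incremental.
print(f"Orders the ads appear to drive: {exposed_conversions}")
print(f"Orders the ads actually added:  {incremental_orders:.0f}")
```

In this made-up example the channel touches 2,600 buyers but only adds about 200 orders; that gap is exactly what correlation-based credit cannot see without a control group.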

What is also important to understand is that even when the model is smart and as well tuned as possible based on control test results (which we can do at RKG), and indeed even if you had a system that told you with 100% certainty what each advertising vehicle does for you, the number of levers advertisers can pull to meaningfully optimize the media mix is often more limited than many realize. There are often better ways of understanding and using those controls than what attribution provides.

More on that in Part 2: Limited Controls

Would love to get feedback from folks on this topic. Similar experiences? Different ones?

Comments
8 Responses to “Attribution Myths vs Reality: Part 1) Statistical Limits”
  1. Hugo Guzman says:

    Good stuff here, George. The one thing I’d point out is that attribution modeling can also be used to tell what NOT to bid on (with the statistical threshold to make such decisions with confidence).

    It’s sort of like turning attribution on its head.

  2. Hi Hugo, thanks for your comment. Could you describe this use-case a bit more? Is the argument that people bid beyond observed performance because they think a different attribution model would make the ROI look better, and attribution data helps them understand that the ROI stinks no matter how you do the attribution so they stop overspending? I’d certainly echo that argument (and will in the next part). I’m not making the case that the attribution data is completely useless, just that its value has been over-hyped to a degree.

  3. Tom says:

    Hi George,
    as per usual, an excellent and insightful post. Too often attribution models are seen as the new ‘answer’ to an increasingly blurred and fragmented media mix. To help people understand, I normally use the analogy of shadows on a football field: different lights cast different shadows, but none of them properly shows the actual player. However, you can infer important aspects of the game (e.g., direction of play).
    I’m looking forward to your follow-up post.

  4. Hi Tom, thanks for the kind words and excellent metaphor. I may have to borrow that one, with proper attribution of course :-)

  5. Casey Carey says:

    George -

    An interesting post; it definitely raises some thought-provoking questions. It sounds like you are generally agreeing with the well-known statistician George Box, who wrote, “Essentially, all models are wrong, but some are useful,” in his book on response surface methodology. In this case, attribution models are not exact, but they can definitely be useful in practice. I have seen significant improvement in marketing results (efficiency and effectiveness) when evaluated with advanced attribution models. I would also add that any attribution model considered to be “advanced” should incorporate validation through holdout tests to ensure it is “in the ballpark.”

    Looking forward to reading Part II.

    Casey

  6. Alex Freeman says:

    George,
    Great post! This link has some other great ‘correlations’ that have no causation. http://www.buzzfeed.com/kjh2110/the-10-most-bizarre-correlations.

  7. Thanks for your comment, Casey. The utility of attribution often depends on how bad the current attribution is. Companies that have no attribution system and are therefore double-crediting orders all over the place can learn that they’re overspending in ways they can easily address. Similarly, we had a client that gave last-click attribution credit, but included direct load traffic as a marketing touch, so marketing programs could only get credit for same-session transactions. They were greatly underspending as a result. However, at least in the ecommerce sector, when clients have non-stupid, simple models, the actionable variance between that and sophisticated models is usually quite small, and there are often better mechanisms for getting at the “real” ROI of the controllable marketing channels than attribution systems.

  8. Thanks Alex, those are good!