Performance Based Pricing: A New Twist

RKG has been approached by a raft of relatively new vendors pitching a ‘new’ way to do website conversion optimization with dynamic creative and machine learning. The machine tests different creative, layouts, and content elements based on an array of factors, including the user’s own search query, the referring domain, the browser, etc.

While that type of technology has been around for some time, the pricing model being pitched is new to me at least. The notion is that the system will constantly optimize against a control (your unaltered web pages), and the advertiser only has to pay a percentage of the revenue on the demonstrated lift in conversion.

This appears to be a no-risk proposition for the advertiser. But is it? Most performance-based pricing models have hidden pitfalls that make the reality less advantageous than it appears on the surface.

Here are some concerns I’d have before buying:

  1. Latency: These technologies require a JS tag in the header of each page, meaning the page won’t render until it has sent information to the software and received instructions back on how to build the page. This may slow page load time by a tenth or two of a second. Maybe not a big deal, but with page load times impacting organic ranking and paid search Quality Score, in addition to the user experience (albeit slightly), the performance lift may need to be greater than zero just to get you back to where you were.
  2. Score Keeping: The software runs control tests to determine the lift it’s generating and sends the advertiser a bill based on those results. Hmmmmmm. That raises the hairs on the back of my neck. The folks I’ve spoken with seem like straight shooters, not scam artists, but it would be SOOOOO easy to game this system. For example, it would be trivially easy to:
    • Build in selection bias. To measure lift honestly, users must be assigned to groups (test or control) randomly. If the “test group” comprised all the brand traffic, or all the exact match traffic, or all the Google.com and Bing.com traffic, leaving the lower-converting traffic to the “control group,” then a scam artist could artificially create the appearance of conversion lift without changing anything. (A sketch of what honest assignment looks like follows this list.)
    • Conflate design tests with offer tests. Similar to the above, if part of the tested messaging involves promotional offers, the results may simply confirm that offers increase conversion rates, and the advertiser ends up paying a commission on offers it could have run on its own.
    • Vary the size of the control group maliciously. All measurements of conversion rates involve a certain amount of statistical noise. If the size of the control group isn’t fixed and agreed upon, a scam artist could set the size of the control based on when the statistical noise in the measurement happens to run in their favor. Say the control normally produces a conversion rate of 1%. If it gets off to a slow (unlucky) start that day, they could simply close off the control cell and say “we only sent 2% of the traffic to the control today.” If the control group got off to a hot start, they could send a larger % of traffic to the control to bury the early positive results. (The simulation after this list shows how far this kind of optional stopping can inflate measured lift.)

    In all likelihood, no legitimate company would engage in these kinds of fraudulent practices. It would expose them to significant legal risk, though in a world of VC-funded businesses Built to Flip rather than built to last, those risks may be low.

  3. Redundancy: Many advertisers already use site search landing pages — like those of SLI Systems — that also use machine learning to serve the most effective product combinations and layouts. How different is this? Will they overlap? Will the advertiser end up paying twice for the same basic service?
  4. Lock-in: An objection could be made that the advertiser will become wedded to the software, because instead of paying once for improved designs and implementation, the advertiser keeps paying forever for the same effect. They can’t fire the software vendor, because they don’t own the designs.
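Both selection-bias tricks above come down to how visitors get split between test and control. For reference, here is a minimal sketch in Python (the visitor ID and helper name are my own invention, not any vendor’s API) of what honest assignment looks like: bucket each visitor by a hash of a stable identifier, so group membership cannot depend on traffic source, match type, or referring engine.

```python
import hashlib

def assign_group(visitor_id: str, control_share: float = 0.5) -> str:
    """Assign a visitor to 'T' (test) or 'C' (control) deterministically.

    Hashing a stable visitor ID yields a pseudo-random bucket that is
    independent of traffic source, match type, and referring domain,
    which is exactly what a fair split requires.
    """
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform on [0, 1]
    return "C" if bucket < control_share else "T"

print(assign_group("visitor-12345"))  # same visitor, same group, every time
```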
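The control-sizing trick deserves a demonstration, because the bias it creates is larger than intuition suggests. In this toy simulation (all numbers invented), both cells convert at the same 1% rate, so the true lift is zero; the “vendor” simply freezes the control whenever it starts out unlucky.

```python
import random

random.seed(42)
TRUE_CR = 0.01  # both cells truly convert at 1%; the software adds nothing

def lift_with_early_stop(n_max=10_000, n_min=500):
    """Report the 'lift' measured on one simulated day.

    The control is frozen as soon as its observed conversion rate dips
    below the true rate (after a minimum sample), mimicking the
    close-off-the-cell-on-a-slow-start trick described above.
    """
    conversions, n = 0, 0
    for i in range(1, n_max + 1):
        conversions += random.random() < TRUE_CR
        n = i
        if i >= n_min and conversions / i < TRUE_CR:
            break  # lock in the unlucky control
    control_cr = conversions / n
    test_cr = sum(random.random() < TRUE_CR for _ in range(n_max)) / n_max
    return (test_cr - control_cr) / control_cr if control_cr else 0.0

lifts = [lift_with_early_stop() for _ in range(200)]
print(f"average reported lift with zero true effect: {sum(lifts) / len(lifts):+.1%}")
```

In this setup the scheme reports a healthy positive average lift even though the software changes nothing.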

Ultimately, the proof is in the pudding, and we believe this type of ongoing site testing, based on results rather than guesswork and requiring no web design effort, is quite interesting. Our friends in the website design community are less enthusiastic, given the relatively narrow range of design elements any automated system can manipulate, but I think there is plenty of room for good designers and good algorithms to coexist. What is best for the advertiser will win out in the end, and I firmly believe that smart, talented marketers will never be displaced. Not-so-smart, less talented marketers…that’s a different subject :-)

If this type of software really works, the pricing model may alleviate any concerns about latency, redundancy, and lock-in. A small lift in conversion rates can make a huge impact on performance and may wash away all of the side effects. But confidence that it really is working seems imperative.
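To put rough numbers on that (mine, not any vendor’s): suppose a site does $10M a year, the software delivers a verified 5% lift, and the vendor takes a hypothetical 20% share of the incremental revenue.

```python
baseline_revenue = 10_000_000  # hypothetical annual site revenue, in dollars
lift = 0.05                    # verified lift in conversion (and hence revenue)
rev_share = 0.20               # hypothetical vendor cut of incremental revenue

incremental = baseline_revenue * lift   # $500,000 of new revenue
vendor_fee = incremental * rev_share    # $100,000 paid to the vendor
net_gain = incremental - vendor_fee     # $400,000 the advertiser keeps

print(f"incremental: ${incremental:,.0f}  fee: ${vendor_fee:,.0f}  net: ${net_gain:,.0f}")
```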


Trust, but verify! We also advocate creating an auditing mechanism to make sure the lift assessments are measured fairly. This can be done quite easily, and reputable software providers should JUMP at the opportunity to reduce an advertiser’s concerns. The audit could be as simple as the software vendor sending a feed with two columns: the order confirmation number, and a “T” or “C” indicating whether the order came from a test page or a control page. The advertiser can then pull its own data to see whether the control group’s pre-order behavior looks similar to the test group’s, which would expose the nefarious selection bias described above. A sketch of such a check follows.
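As a sketch of what that audit could look like (the feed, order IDs, and traffic sources here are invented for illustration), join the vendor’s T/C feed to your own order log and compare the traffic mix of the two groups:

```python
from collections import Counter

# Hypothetical vendor feed: order confirmation number -> 'T' or 'C'.
vendor_feed = {"A1001": "T", "A1002": "C", "A1003": "T", "A1004": "C"}

# The advertiser's own analytics: order confirmation number -> traffic source.
order_source = {"A1001": "brand", "A1002": "brand",
                "A1003": "nonbrand", "A1004": "nonbrand"}

# Tally the traffic-source mix within each group. Under honest random
# assignment the two distributions should look alike; a heavy skew
# (say, all brand traffic landing in 'T') is the selection bias above.
mix = {"T": Counter(), "C": Counter()}
for order_id, group in vendor_feed.items():
    source = order_source.get(order_id)
    if source:
        mix[group][source] += 1

for group, counts in mix.items():
    total = sum(counts.values())
    shares = {src: f"{n / total:.0%}" for src, n in counts.items()}
    print(group, shares)
```

With real volumes you would want a formal test of independence, but even eyeballing the shares will catch the crude manipulations.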

Have you tried this type of software? Did it work for you? Did you trust the results? Are there other concerns or considerations I’ve missed?
