# Averages Lie! Bid Simulator and Incremental Marketing

My SEL piece for this month in case you missed it:

Suppose you run a banana stand. You buy pallets of bananas and sell them for \$1.00/banana to poor saps in the airport. You normally buy bananas from Farmer Jones for \$0.30 each. You have an additional \$0.60/banana in expenses for delivery costs, storage, staffing, airport booth rental, etc., leaving you with \$0.10 profit on each sale.

One day you go to Farmer Jones’ and find that she’s out of town and has left her business in the hands of her crazy cousin Jimmy. Jimmy says: “We only have 5 pallets of bananas left. I’ll sell you the first pallet at \$0.10/banana; I’ll sell you the second pallet at \$0.20/banana; I’ll sell you the third pallet at \$0.30/banana, the fourth for \$0.40/banana, and the last pallet will cost you \$0.50 each since it’s the last one.”

How many pallets of bananas would you buy?

An argument could be made that buying all five would produce an average cost of \$0.30/banana — the normal price — so that’s fine. Another argument could be made that you don’t make any money selling the 4th pallet, and would actually lose money buying and selling the 5th, so stopping at 3 pallets makes the most sense.
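The pallet-by-pallet arithmetic is easy to verify. A quick sketch in Python, using the numbers from the story:

```python
# Per-pallet economics of Jimmy's offer: $1.00 sale price and
# $0.60 in fixed costs per banana, against a rising wholesale price.
SALE_PRICE = 1.00
OTHER_COSTS = 0.60
wholesale = [0.10, 0.20, 0.30, 0.40, 0.50]  # price per banana, pallets 1-5

profit_per_banana = [round(SALE_PRICE - OTHER_COSTS - w, 2) for w in wholesale]
average_cost = sum(wholesale) / len(wholesale)

print(profit_per_banana)  # pallet 4 breaks even, pallet 5 loses money
print(average_cost)       # 0.30, the "normal" price
```

The average looks fine; the margins tell the real story.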

Presented this blatantly, it would be hard to pull the trigger on that fifth pallet.

We face exactly this same choice in paid search, but the numbers are a bit harder to see.

We all know that the value of traffic coming through paid search varies tremendously by keyword, by time of day, by season, by stock position, and many other factors depending on the vertical. RKG, Google, and others who have studied the data carefully have found that the value of traffic does not vary significantly by position on the page. (There will be a great outcry from those who believe certain positions are magic. All good, they’re welcome to believe what they want, and maybe they have different types of clients than we’ve seen.) The people who click on an ad in position 10 are just as likely to ‘convert’ after they get to the site as people who click on an ad in position 2, 3, or whatever. There will be fewer visitors and fewer conversions in position 10, but the value of the traffic is the same. Hence, if the aim is to maximize conversions within some ROI constraint, the job of bid management is to pay a price for those clicks that corresponds to that anticipated value and let the position fall where it may.

This places us squarely in the banana dilemma. The price of each visitor at the top of the page will be higher than the price of each visitor at the bottom of the page, and since the value of each visitor is about the same, it is quite analogous to buying bananas at different prices.

Bid Simulator data makes this point crystal clear.

We see that the total cost rises as the traffic volume increases, and the upward ‘bend’ in the graph is a function of the average cost per click increasing as the bids get higher.

What this graph doesn’t show clearly is the increase in the incremental cost per click, but it’s simple to find it in the data.

Let’s add two columns to the table of data provided by Bid Simulator: 1) average cost per click at each bid; and 2) incremental cost per click at each bid, which is simply the average price paid for the additional clicks gained between that bid and the bid below it.
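That calculation can be sketched directly: given Bid Simulator rows of (bid, clicks, cost), average CPC is cost divided by clicks, and incremental CPC is the change in cost divided by the change in clicks between adjacent bids. The rows below are hypothetical, chosen to be consistent with the figures quoted later in this piece, not real simulator output.

```python
# Hypothetical Bid Simulator rows: (max CPC bid, clicks, total cost).
rows = [
    (0.50, 1570,  631.70),
    (0.62, 2000,  980.00),
    (0.75, 2667, 1547.00),
    (0.83, 2790, 1670.00),
]

def cpc_table(rows):
    """Return (bid, average CPC, incremental CPC) for each simulated bid."""
    out = []
    prev_clicks = prev_cost = None
    for bid, clicks, cost in rows:
        avg = round(cost / clicks, 2)
        inc = None  # no lower bid to compare against for the first row
        if prev_clicks is not None:
            inc = round((cost - prev_cost) / (clicks - prev_clicks), 2)
        out.append((bid, avg, inc))
        prev_clicks, prev_cost = clicks, cost
    return out

for bid, avg, inc in cpc_table(rows):
    print(f"bid {bid:.2f}: avg CPC {avg}, incremental CPC {inc}")
```

Note how quickly the incremental column outruns the average column.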

For those who prefer graphical presentations, below is a graph of average and incremental CPCs as a function of bid.

This data suggests that by bidding, say, \$0.62/click we’d likely have paid an average CPC of \$0.49/click for the previous week. But it also tells us that the additional 430 clicks obtained by bidding \$0.62 instead of \$0.50/click came at a price of \$0.81/click! The incremental CPC is actually higher than the Max CPC bid! This is not an anomaly; it is the norm.

How does this help us set bids?

The answer depends on your goals for paid search. If the goal is profit maximization then, as Hal Varian effectively demonstrated, the right approach is to set bids such that the incremental cost of traffic equals the value of traffic on that particular ad. If the value of traffic were \$1/click, then profit maximization would recommend a bid of ~\$0.83/click. At that point the total value of traffic generated minus the total cost of that traffic is at its zenith (2,790 clicks = \$2,790 in value, bought for a total cost of \$1,670, yields a profit of \$1,120).
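Once the curve is in hand, picking that bid is mechanical. A sketch, using hypothetical (bid, clicks, cost) rows consistent with the figures quoted in this piece:

```python
# Hypothetical Bid Simulator rows; not real simulator output.
rows = [
    (0.50, 1570,  631.70),
    (0.62, 2000,  980.00),
    (0.75, 2667, 1547.00),
    (0.83, 2790, 1670.00),
]
VALUE_PER_CLICK = 1.00

def profit_maximizing_bid(rows, value):
    """Pick the bid where traffic value minus traffic cost peaks."""
    profits = [(bid, clicks * value - cost) for bid, clicks, cost in rows]
    best = max(profit for _, profit in profits)
    # Ties go to the higher bid: keep buying clicks until the
    # incremental CPC reaches the value of a click.
    bid = max(b for b, profit in profits if profit == best)
    return bid, best

bid, profit = profit_maximizing_bid(rows, VALUE_PER_CLICK)
print(bid, profit)  # 0.83 and 1120.0
```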

Notice that the traffic difference between the \$0.83 bid and the \$0.75 bid is completely profitless, much like buying the 4th pallet of bananas from Jimmy. There are good arguments for buying it from a growth perspective, and Bradd Libby made a marvelous Machiavellian argument in favor of spending more rather than less as well.

If the goal of paid search goes beyond profit maximization to include customer acquisition, top-line growth, store support, and/or branding, this kind of marginal analysis is still tremendously valuable. By determining a ‘drop dead’ incremental loss beyond which you will not go, smart marketers can spend money rationally, whether that budget is fixed or open-ended.
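One way to operationalize that ‘drop dead’ threshold: walk up the simulated curve and stop at the last bid whose incremental CPC stays at or below your ceiling (the value of a click plus whatever incremental loss you’re willing to absorb). A sketch with hypothetical data:

```python
# Hypothetical (bid, clicks, cost) rows; not real simulator output.
rows = [
    (0.50, 1570,  631.70),
    (0.62, 2000,  980.00),
    (0.75, 2667, 1547.00),
    (0.83, 2790, 1670.00),
]

def highest_bid_under_ceiling(rows, ceiling):
    """Highest simulated bid whose incremental CPC stays <= ceiling."""
    chosen = rows[0][0]
    for (b0, c0, s0), (b1, c1, s1) in zip(rows, rows[1:]):
        incremental_cpc = (s1 - s0) / (c1 - c0)
        if incremental_cpc > ceiling:
            break
        chosen = b1
    return chosen

# A ceiling a bit below the $1 click value stops at the $0.75 bid;
# tolerating up to $1.20 on the margin admits the $0.83 bid.
print(highest_bid_under_ceiling(rows, 0.90))
print(highest_bid_under_ceiling(rows, 1.20))
```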

## Using Bid Simulator data at scale

Using Bid Simulator data at scale is challenging, but not impossible. By pulling that data into our systems we’re able to effectively build that incremental cost curve for each ad that has bid simulator data, and factor those curves into our bidding algorithm.

This needs to be done with some care:

• Bid Simulator data is only available for relatively high traffic, head keywords. The vast majority of keywords will have no bid simulator data, so the algorithm can’t depend on that piece of the puzzle. Moreover, using data about the head bid landscape to influence bids appropriately on the tail is not advisable in many situations.
• Bid Simulator data is a snapshot of the past. What happened last week does not necessarily reflect what will happen going forward. Indeed, if at some point using Bid Simulator data becomes the norm, advertisers may find themselves reacting to the same data signals as their competitors and end up creating a mess. Given the archaic state of bid management right now, I’m not too worried about this particular hypothetical.
• The data is limited in scope. We see only one piece of the bid landscape, corresponding to seemingly random bid differentials above and below our average bid for the previous week. We haven’t identified any rhyme or reason behind how far below or above our average bid the data will reach, and it’s different for every ad.
• Extrapolating beyond the data provided is dangerous. By fitting an equation to the data table provided, it is easy to conjecture what the curve would look like beyond the observed range, but that’s dangerous business to say the least.
• Sophisticated campaign structures and bidding approaches can make using the data more challenging. Day-parting rules may impact bids quite significantly making the landscape provided out of scope for certain times during the week, and extrapolating is…well…dangerous if I haven’t mentioned it previously. Multiple versions of the same keyword (different match types, geo-targeting, syndication settings, etc.) mean decisions have to be made about whether it’s wise to apply what is true for one high traffic version to other versions.
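On the extrapolation point specifically, one defensive pattern is to interpolate only between observed data points and refuse to answer outside the simulated range. A minimal sketch with hypothetical data; a production system would obviously be more elaborate:

```python
# Hypothetical (bid, clicks, cost) rows; not real simulator output.
rows = [
    (0.50, 1570,  631.70),
    (0.62, 2000,  980.00),
    (0.75, 2667, 1547.00),
    (0.83, 2790, 1670.00),
]

def estimate_clicks(rows, bid):
    """Piecewise-linear click estimate; raises outside the observed range."""
    low, high = rows[0][0], rows[-1][0]
    if not (low <= bid <= high):
        raise ValueError(f"bid {bid:.2f} outside simulated range "
                         f"[{low:.2f}, {high:.2f}]; refusing to extrapolate")
    for (b0, c0, _), (b1, c1, _) in zip(rows, rows[1:]):
        if b0 <= bid <= b1:
            t = (bid - b0) / (b1 - b0)
            return c0 + t * (c1 - c0)

print(estimate_clicks(rows, 0.62))   # on a data point: 2000 clicks
# estimate_clicks(rows, 1.40) would raise rather than guess
```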

Even with those warnings in mind, RKG went down this complicated path for a reason. Smart marketing means making decisions about how to spend the next marketing dollar not the average marketing dollar. Bid Simulator data is not perfect, but when coupled with sophisticated attribution management, it gives us the clearest picture of what that answer is in paid search.

19 Responses to “Averages Lie! Bid Simulator and Incremental Marketing”
1. Michael says:

Great post. It’s actually a basic economic principle that most SEMs seem to ignore – stop producing more when Marginal Cost equals Marginal Revenue. Even if there might be extra profits left over from less expensive clicks, it’s unprofitable to pay more than the additional clicks are worth. I realized this same principle a little while back, so I just made a simple spreadsheet that determined the incremental CPC and told me where to stop bidding. It’s made a big difference and the campaigns are leaking a lot less money. Thanks for the research!

2. Thanks Michael, some fundamentals of marketing never change, and the Law of Diminishing Marginal Returns is one of those. Glad you’re on board!

3. Dave says:

George,

Is this data available in the adwords API?

Dave

4. fazal mayar says:

good post george, a lot of sem service providers seem to not do things right and this hurts ROI a lot.

5. Hi Dave, yes, as of last summer Google started making it available over the API.

6. Dave says:

Thanks George, I missed that somehow. As usual great stuff in this blog by the way.

7. Thanks Dave, we hounded them to put the data out on the API. We actually asked them to give us the equation of the line as a polynomial instead of the data points, and then also give us some notion of how much the curve fluxes intra-week. Doubt that’s going to happen unless others start to ask for that as well.

George

8. Sameer says:

Great article George!! RKG always comes up with extremely relevant and insightful research work. This article took me back to my economics sessions during my college days. Equilibrium condition of Marginal Revenue being equal to Marginal Cost (represented as MR = MC). Never thought it would be so relevant in my work some day. Anyhow, thanks for the brilliant post. Looking forward to more from you.

9. Thanks for the kind words, Sameer. You’re right, many of the fundamental principles of economics and statistics have a keen bearing on paid search and direct marketing. There is a great deal of truth in those old textbooks :-)

10. Thanks for your comment, Fazal.

11. Rohan says:

Thanks for the great article (as always), George. You and your team really are next-level SEMs over at RKG. Not to be a cliché, but I’m a long time reader, first time question asker:

Based on your analysis of the incremental cost per click in Google’s bid simulator – have you seen any parallels in the way Google handles driving CPA transactions with the Conversion Optimizer? As you pointed out, in an efficient market – if each incremental impression & click available via the bid simulator data end up eroding the margins as you expand (which I agree with wholeheartedly) – I’m curious if Google’s mechanics on the CPA bidding (while it is never cut and dry) ends up buying on the aggregate and nullifying the benefits the CPA buy offers by blending in incrementally unprofitable actions, so long as the overall aggregate CPA goal is achieved.

I ask not out of laziness to do my own testing, but because (I’ll be presumptuous here) you probably have a level of granularity/transparency far in excess of what I’ll ever have access to for search.

That said, I’m 100% certain I did not write that question as clearly as I would have liked – but I had to try to ask. Also, I totally understand if you can’t respond because the info is proprietary or a trade secret, etc.

Thanks again for the RKG blog and your time,
Rohan

12. The short answer is: “I don’t know”, but that’s never stopped me from speculating :-)

I’d be willing to bet that Google does use bid simulator data in its own bid management efforts, but I’d also be willing to bet that they have to push the incremental CPA far beyond the target in order to hit the target. This notion of incremental vs. average efficiency is pretty straightforward to you and me (we’re search geeks, we live in this space), but for the average advertiser this is complex stuff. If the average advertiser sets a target of \$8 for the CPA but Conversion Optimizer treats that as an incremental threshold rather than the average to hit, the advertiser might end up with an average CPA quite a bit lower than that incremental value, say \$6. Those advertisers would likely be upset that Google isn’t advertising aggressively enough, because they don’t understand what’s happening on the margins.

We do make use of eCPC in some instances, but don’t use Conversion Optimizer, so I can’t say for sure. However, if Google hits the efficiency targets as they are set, you can be certain that they’re spending quite a bit more than that on the margins.

13. Rohan says:

Thanks, George! Your speculation is much appreciated and worth its weight in gold : ). You answered my question to a “T” and then some – thanks so much for your time.

Best,
Rohan

14. David Chung says:

Hi George,

Love your blog. I met you briefly at SMX 2011 if you recall (I won the search geek contest)… and spoke to you after a session. What are your thoughts on how eCPC interacts with bid simulator data? I heard in the past they were incompatible, and it certainly does a number on model extrapolation… also curious why you think it’s dangerous. ;) After using bid-sim based models for portfolio optimization the past 6 months or so, I’m starting to see that the ‘danger’ is mitigated if the campaigns are set up correctly per se.

My two cents on Conversion Optimizer is that it doesn’t take into account incremental CPA. I think it’s a purely heuristic based bid, dynamically adjusted using an algorithm that takes into account historical conversion data, such as geography of user, time of day, etc.

Looking forward to future chats.

-David

15. David, thanks for stopping by and for your kind words.

The challenge with eCPC comes from factoring in certain effects twice. We think the bulk of eCPC changes are a function of time-of-day/day-of-week valuation differences and syndication partner value differences. We already bake in day-parting effects and syndication network effects (the syndication network in aggregate, not individually, though we’d very much like the ability to do this granularly). So the problem is we don’t want eCPC amplifying bids that have already been amplified or decremented to account for the same effects. We’d love to have a browser-only eCPC to play with :-)

Extrapolation beyond the data in bid simulator is inherently risky as it assumes the curve we see predicts the pieces of the curve we can’t see. So for example, guessing at the incremental CPC of a bid of \$1.40 or a bid of \$0.45 from the table above makes some very dangerous assumptions about the continuity of the landscape. Those assumptions may turn out to be okay, but they won’t always.