# Evaluating Incremental Opportunity to Generate Leads More Efficiently

In theory, determining the right bid for a high-volume keyword in a lead-gen paid search program, given a specific cost-per-lead (CPL) constraint, is a pretty basic mathematical exercise.  Our bid is simply our expected leads per click multiplied by our CPL target.  For example, if we aim to hit a CPL of \$100 and we are confident that a given keyword generates leads from 5% of clicks, our bid would be \$5.  Given a thousand clicks, we would expect to spend \$5,000 and generate 50 leads at a CPL of \$100.  Unfortunately, the realities of the ad auction complicate this idealized picture, necessitating more sophisticated bidding methods to ensure optimal performance.
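The arithmetic above can be sketched in a few lines, using the 5% lead rate and \$100 CPL target from the example:

```python
# Basic lead-gen bid math: bid = expected leads per click * CPL target.

def target_bid(leads_per_click: float, cpl_target: float) -> float:
    """Maximum CPC bid that hits the CPL target, given a lead rate."""
    return leads_per_click * cpl_target

bid = target_bid(0.05, 100.0)   # $5.00 per click
clicks = 1000
spend = bid * clicks            # $5,000
leads = 0.05 * clicks           # 50 leads
cpl = spend / leads             # $100 CPL, by construction
print(bid, spend, leads, cpl)
```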

To start, the advertiser bidding \$5 per click will almost never actually pay that full amount.  In the AdWords auction, our CPC is determined by the competitor with the next-highest Ad Rank, which is a function of bid and Quality Score.  As Google states, we pay “the lowest amount possible for the highest position you can get given your Quality Score and CPC bid.”  If there is a significant gap between our Ad Rank and that of the nearest competitor, our CPC can fall well below our bid.  Looking at some top terms for a typical lead-generating RKG client, we find that our actual CPC comes in, on average, at about half our bid.

Now, if we really can afford to spend X per click, but our CPCs are coming in at just X/2, our observed CPL will be much lower than our target, suggesting that we’re leaving a lot of opportunity on the table.  Our first instinct might be to simply double our bids across the board and hope for the best.  That move would be flawed for a couple of reasons.  First, the gap between our bid and CPC is not likely to be consistent from keyword to keyword.  Here’s the actual breakdown from the example client:

Our observed CPC ranges from 41% of our bid for one keyword to 83% of our bid for another keyword.  If our goal is simply to get CPCs up to the level we believe we can afford, we wouldn’t want to treat these two cases equally.

More importantly, even for terms with very similar gaps in bids and CPCs, we can’t assume that their performance will be impacted equally by the same stimulus.  Imagine a term on exact match that appears almost exclusively in top position.  Increasing its bid will not bring it many additional impressions or clicks, but it may significantly increase click costs due to the nuances of the ad auction.  Another term, say one on broad match that typically appears in the middle of the ad listings, should have greater potential to drive additional traffic.  It’s clear that we need to start considering the marginal returns of our actions.

To anticipate the impact of a bidding change on a keyword’s performance, we need to have an accurate model of the competitive landscape that surrounds it in the auction.  Google’s Bid Simulator data is useful for just such a purpose and, in combination with our own historical performance data, we can start making much more informed bidding decisions.

Here’s an example of the type of data Google offers through its Bid Simulator:

The data in the simulation is typically a week old when it becomes available and it is more robust for high traffic terms.  For a lead gen site that is heavily dependent on head terms, actionable Bid Simulator data should be available for keywords generating roughly 30-50% of overall paid search traffic.  RKG automatically pulls Bid Simulator data via Google’s API on a daily basis and stores it in a database for historical analysis.

What you don’t see in this table is probably the most important piece: marginal cost per click (mCPC).  It is easily calculated, though; it is effectively the slope of the line you see in the graph.  Like so many other aspects of business and marketing, paid search is subject to diminishing returns: the last dollar we spend on advertising delivers less of a return than the first.  The ability to assess our marginal cost per click is a critical step in moving away from a model under which aggregate performance can mask inefficient returns on the edges.
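Marginal CPC is just the slope between adjacent points on the simulator curve: the extra cost divided by the extra clicks. A minimal sketch, using hypothetical simulator rows (bid, clicks, cost) rather than Google's actual numbers:

```python
# Marginal CPC between adjacent Bid Simulator points:
# mCPC = (cost_hi - cost_lo) / (clicks_hi - clicks_lo).

def marginal_cpcs(points):
    """points: list of (bid, clicks, cost) tuples, sorted by bid ascending.
    Returns (bid, mCPC) for each step up from the previous bid level."""
    out = []
    for (b0, k0, c0), (b1, k1, c1) in zip(points, points[1:]):
        mcpc = (c1 - c0) / (k1 - k0)
        out.append((b1, mcpc))
    return out

# Hypothetical simulator data for illustration only:
sim = [(2.00, 400, 600.0), (3.00, 550, 1050.0), (4.00, 650, 1500.0)]
print(marginal_cpcs(sim))   # mCPC rises from $3.00 to $4.50 as bids climb
```

Note that even at the \$4 bid level the *average* CPC is only about \$2.31 (1500/650), while the last 100 clicks cost \$4.50 apiece, which is exactly the kind of inefficiency aggregate numbers can mask.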

Consider the Bid Simulator data above.  If we assume a 5% conversion rate at all levels, we get a plot of our overall and marginal CPL that looks like this:

If we believe that our conversion rate will suffer as we generate additional traffic, our marginal CPL will fare even worse.  Gaining this view into performance may be a big enough eye opener to cause us to reconsider our efficiency targets and how we set them.
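With a flat conversion rate, overall CPL is cost divided by expected leads, and marginal CPL is the incremental cost over the incremental leads. A sketch on hypothetical simulator rows, assuming the 5% conversion rate used in the text:

```python
# Overall vs. marginal CPL at each simulated bid level, flat 5% conversion rate.

CVR = 0.05

def cpl_curves(points, cvr=CVR):
    """points: (bid, clicks, cost) tuples sorted by bid ascending.
    Returns (bid, overall_cpl, marginal_cpl) rows; marginal is None
    for the lowest bid level, since there is no prior step to compare."""
    rows = []
    prev = None
    for bid, clicks, cost in points:
        overall = cost / (clicks * cvr)
        marginal = None
        if prev is not None:
            k0, c0 = prev
            marginal = (cost - c0) / ((clicks - k0) * cvr)
        rows.append((bid, overall, marginal))
        prev = (clicks, cost)
    return rows

# Hypothetical simulator data for illustration only:
sim = [(2.00, 400, 600.0), (3.00, 550, 1050.0), (4.00, 650, 1500.0)]
for bid, overall, marginal in cpl_curves(sim):
    print(bid, round(overall, 2), marginal and round(marginal, 2))
```

In this toy data, overall CPL climbs from \$30 to about \$46 as bids rise, but marginal CPL hits \$90 at the top step, which illustrates how quickly the edges can turn unprofitable even while the aggregate still looks healthy.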

Going back to our example keyword set, let’s take a look at how our marginal CPCs stack up to our bids for the individual terms.

While there seemed to be a pattern in the percentage of our bids we actually paid in the graph above, comparing our marginal CPC to our bid yields results that look random.  In all but a few cases, the mCPC is greater than our keyword bid (those points above 100%).  This is expected, but depending on the intent of our CPL target, it could mean we are actually losing money on some of the leads we are generating.  If our CPL is meant to represent the maximum amount we could afford to spend to generate any single lead profitably, and our bid accurately captures that in aggregate, then a marginal CPC above the bid means some individual leads are costing more than they are worth.

If that’s confusing, it may be more intuitive to start thinking of targets a little differently.  Basic economics tells us that we maximize our profits when our marginal costs equal our marginal revenue.  Our marginal cost in paid search is simply our marginal CPC, but for a lead generation program we need to develop a clearer notion of the revenue value of each lead.  When we started this piece, our hypothetical scenario was a program aiming for a \$100 CPL.  Let’s say that the \$100 level was chosen because each lead was determined to ultimately generate \$200 in revenue on average.

That’s a healthy profit margin baked in, but it still doesn’t guarantee that some leads aren’t being generated unprofitably or that we are generating the most profitable leads we can for a given marketing spend.  With a keyword that generates a lead on 5% of clicks, and leads worth \$200 in revenue, our revenue per click is \$10.  To maximize the profitability of our lead gen program, we should bid to the point that our marginal CPC hits that \$10 amount.  We may find that our average CPL varies from keyword to keyword, but if our models of the competitive landscape are accurate, we’ll have made significant gains over the one-target-fits-all approach.
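Under these assumptions (\$200 revenue per lead and a 5% conversion rate, so \$10 of revenue per click), picking the profit-maximizing bid amounts to walking up the simulator curve until marginal CPC crosses value per click. A hedged sketch with hypothetical simulator data:

```python
# Choose the highest simulated bid whose marginal CPC is still at or below
# the revenue value of a click (revenue per lead * conversion rate).

def profit_max_bid(points, value_per_click):
    """points: (bid, clicks, cost) tuples sorted by bid ascending.
    Assumes the lowest bid level is itself profitable."""
    best = points[0][0]
    for (b0, k0, c0), (b1, k1, c1) in zip(points, points[1:]):
        mcpc = (c1 - c0) / (k1 - k0)
        if mcpc <= value_per_click:
            best = b1        # this step up still pays for itself
        else:
            break            # further steps only get more expensive
    return best

value_per_click = 200.0 * 0.05   # $10 revenue per click
# Hypothetical simulator data for illustration only:
sim = [(2.00, 400, 600.0), (3.00, 550, 1050.0), (4.00, 650, 1500.0),
       (6.00, 720, 2400.0)]
print(profit_max_bid(sim, value_per_click))
# -> 4.0: the $6 step's mCPC (~$12.86) exceeds the $10 value per click
```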

Finally, while profit maximization is a lofty goal, it’s not necessarily the one that makes the most business sense for everyone.  Branding, offline support, and top-line growth and volume are all considerations that could drive us to invest some of our profits back into our online marketing.  In these cases, we may want to choose a marginal target that pushes past the profit max point.  Or, we could choose to essentially have two targets:  an aggregate CPL goal, but with a maximum marginal CPL limitation — “I want to aim for a \$100 CPL in aggregate, but I’m not willing to generate individual leads that cost me more than \$X.”  At RKG, we’re happy to say that our bidding systems are not only sophisticated, but exceptionally flexible as well.

Want to learn more about driving profitable leads through paid search?  Visit RKG at booth #238 at LeadsCon.

Mark Ballard is Director of Research at Merkle | RKG.

##### Comments
9 Responses to “Evaluating Incremental Opportunity to Generate Leads More Efficiently”
1. Chris says:

Looks complicated. Why not just find a new, untapped channel that rivals the size and predictability of search but costs way less?

2. Terry Whalen says:

Chris, LOL – nice one! That, to me, is clearly the “right” answer here! I will email you – I want to know what this new, untapped channel is!

3. Mark Ballard says:

Chris, I’m intrigued, but I would say if you’re generating business profitably, it’s not a zero-sum game and you should be able to invest in a number of marketing channels. Best of luck with the new venture!

4. Terry Whalen says:

…yeah, what Mark said – it’s a win-win – everyone is a winner!

5. Terry Whalen says:

RKG + Mintigo ==> Everyone gets rich!!!

6. Marcelo says:

Good article! What profit maximization model do you recommend to follow for keywords where Bid Simulator data is not available? (due to low volume of clicks or simply for leads in the display network)

7. Mark Ballard says:

Great question, Marcelo. Even for high-traffic terms, Bid Simulator data doesn’t provide an omniscient view of the ad auction, for some of the reasons we’ve mentioned. So, we still need a layer of smart models and algorithms that can take and use Bid Simulator data when it is available and appropriate, but don’t require it to set sound profit-maximizing bids. How firms bid on the tail is a key differentiator, so I’ll avoid specifics, but with proper data aggregation across terms and even a simple model of diminishing returns for increased marketing spend, such as the square-root model, an advertiser can go a long way towards improving their profitability.
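The square-root model mentioned above can be sketched simply: assume clicks grow with the square root of spend, so doubling spend buys only about 41% more traffic. A hypothetical illustration (the calibration point is invented):

```python
import math

# Square-root model of diminishing returns: clicks ~ k * sqrt(spend).
# Calibrate k from one observed (spend, clicks) point, then project.

def fit_k(spend, clicks):
    return clicks / math.sqrt(spend)

def projected_clicks(k, spend):
    return k * math.sqrt(spend)

k = fit_k(1000.0, 500)              # hypothetical: $1,000 bought 500 clicks
print(projected_clicks(k, 2000.0))  # doubling spend: ~707 clicks, not 1,000
```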

8. Marcelo says:

Thanks, Mark, for your answer; that is very useful.
One last question regarding the use of Bid Simulator data. Would you recommend using available Bid Simulator data for keywords on broad match? I ask because there might be big variations in the auctions we participate in for one broad keyword, and the estimated value per click for those matching searches might be very different (due to variation in conversion rate across the different matching search queries). The question also applies to phrase match, but there I would feel more in control than with broad keywords.

9. Mark Ballard says:

An important issue. Yes, Bid Simulator data is useful for broad and phrase match keywords, but to your point, we would expect the value of the traffic to respond differently to bid changes for terms using different match types. An exact match term on Google.com only isn’t going to see much variance in conversion rate, but a broad match term on the search network very well might. Bid Simulator will still be useful for determining the marginal cost side of broad terms, but you will need to be able to estimate the marginal conversion side as well.