A classic mistake in paid search management is turning off the ads that "don't work."
I made the case previously that almost any keyword "works" if you pay the right amount for the traffic, acknowledging that the right amount might be pretty close to zero if the words are poorly targeted.
Nevertheless, many firms use the machete approach to management. A major retail client we just started working with this summer had all of their non-brand/competitive search terms turned off by their agency because "they didn't work"! That kind of thing makes me wish there were some sort of PPC licensing board with the authority to strip licenses from those who don't know what they're doing.
But the mistakes can be less obvious and egregious than that example.
TWO COMMON ERRORS TO AVOID:
- The "Oracle of Delphi" error: This mistake begins by sorting a phrase report by sales descending and adding up all the money spent on ads that didn't generate any sales. This is something you should never, never do. There is nothing inherently wrong with data -- it's just data -- but this "analysis" will lead to frustration 100% of the time, which can in turn lead to harmful reactions.
This is guaranteed to lead to frustration because, even if you're managing search brilliantly, for a big program with a long tail the amount of money "wasted" on "unproductive" keywords may be quite large over any given period, and no one wants to see that. The thing is, these low traffic ads are not unproductive and the money is not wasted.
It is very much analogous to sorting a cataloger's mail file by sales descending and noting to the Circulation Director that "98% of the catalogs you mailed didn't generate a sale!!! We can save all kinds of money if we just mail the buyers!!!"
Because of our friend, statistical noise, we don't know in advance which low-traffic terms will generate a sale on very few clicks and which won't. Indeed, it's not uncommon for us to see something like 30% of the ads that generated a sale in a given month having generated no sales the previous month. Cutting off the tail, as Alan demonstrated well a couple years ago, leads inexorably to "the death spiral".
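To see how much of this month-to-month churn is pure noise, here is a minimal simulation under invented assumptions (10 clicks per month per keyword, an identical 3% true conversion rate for every keyword) -- no keyword is actually "better" than any other, yet many that convert in one month show zero sales the month before:

```python
import random

random.seed(42)

CLICKS_PER_MONTH = 10   # hypothetical low-traffic tail keyword
TRUE_CONV_RATE = 0.03   # every keyword converts at the SAME true rate
N_KEYWORDS = 10_000

def sales_in_month():
    """Simulate one month of sales for one keyword."""
    return sum(1 for _ in range(CLICKS_PER_MONTH) if random.random() < TRUE_CONV_RATE)

last_month = [sales_in_month() for _ in range(N_KEYWORDS)]
this_month = [sales_in_month() for _ in range(N_KEYWORDS)]

converters = [i for i in range(N_KEYWORDS) if this_month[i] > 0]
looked_dead = [i for i in converters if last_month[i] == 0]

print(f"{len(converters)} keywords converted this month")
print(f"{len(looked_dead) / len(converters):.0%} of them had ZERO sales last month")
```

With these made-up numbers, roughly three quarters of this month's converters looked "dead" last month, even though every keyword is identical by construction. A machete pass over last month's zero-sale rows would have cut most of this month's producers.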
- The "Yank the Rudder" error: Another related mistake involves sorting that same report by cost descending and pulling back on higher traffic "head" terms that are above your efficiency limit.
On the surface, this seems like absolutely the right thing to do -- and it might be. But a couple things to assess before you do this:
- Do you have enough data on that ad to make a good judgment?
- Would an order here or there change your view of that ad's performance materially?
- If you lengthen the data collection window, does the performance of the ad look better? For example: the "Steinway piano" ad that generates 3 orders per year will look like an absolute dog in the periods between orders, but will look like it's been under-bid once that $20K order takes place. Ads with low conversion rates but high average order sizes can be particularly deceptive this way.
- If you shorten the data collection window, does the performance look better? It could well be that the ad was inefficient at the beginning of the period in question but that the bids have already been adjusted to address that problem. Further pullback would simply cut off sales that are now being generated efficiently.
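The window effect above is easy to demonstrate with a toy example. Assume (invented numbers) a "Steinway piano"-style keyword with steady spend and a single $20K order that landed six months ago -- the trailing 3-month view and the trailing 12-month view tell opposite stories:

```python
# Hypothetical monthly data for a low-frequency, high-AOV keyword:
# steady spend of $150/month, one $20K order landing in month 6.
monthly_spend = [150.0] * 12
monthly_sales = [0.0] * 12
monthly_sales[5] = 20_000.0   # the big order, month 6

def roas(spend, sales, window):
    """Sales per dollar of spend over the trailing `window` months."""
    s = sum(spend[-window:])
    return sum(sales[-window:]) / s if s else 0.0

for w in (3, 12):
    print(f"trailing {w:2d}-month ROAS: {roas(monthly_spend, monthly_sales, w):.1f}")
```

The trailing 3-month ROAS is 0.0 -- an absolute dog -- while the trailing 12-month ROAS is over 11x. Same keyword, same bids; only the window changed.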
A top-flight bid management system will handle these nuances correctly, but many folks don't have the luxury of working with such a system. As such, it's important not to overreact to any one spreadsheet.
That's not to say agencies and internal PPC managers should get a pass when the KW performance looks out-of-whack. Far from it. It's probably never okay to spend $1,000 on the KW "pencils" to generate $30 in sales on 6 orders over any period of time -- somebody dropped the ball in a big way there. Just recognize that one data pull doesn't tell the whole story, and what looks like missed opportunity or wasteful spending could conceivably be a non-issue on further review. Those things that do look problematic should be investigated more fully before remedies are applied.
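One quick sanity check before applying a remedy is the "would an order here or there change your view?" question from the list above. A sketch, using the "pencils" numbers from the text plus an invented thin-data keyword for contrast:

```python
def roas_sensitivity(spend, orders, avg_order_value):
    """Return (observed ROAS, ROAS if ONE more order had landed)."""
    observed = orders * avg_order_value / spend
    with_one_more = (orders + 1) * avg_order_value / spend
    return observed, with_one_more

# The "pencils" keyword from the text: $1,000 spend, 6 orders, $5 AOV.
obs, plus_one = roas_sensitivity(1_000, 6, 5.0)
print(f"pencils: observed ROAS {obs:.3f}, with one more order {plus_one:.3f}")
# One extra order barely moves the needle -- the bad verdict is robust.

# A hypothetical thin-data keyword: $40 spend, 0 orders, $100 AOV.
obs, plus_one = roas_sensitivity(40, 0, 100.0)
print(f"thin-data: observed ROAS {obs:.2f}, with one more order {plus_one:.2f}")
# A single order would swing ROAS from 0 to 2.5 -- too little data to judge.
```

Where one hypothetical order swings the answer, you don't have a performance problem yet; you have a sample-size problem.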