It’s Not a Bug, It’s a Feature
My monthly Paid Search column at Search Engine Land, in case you missed it:
Last year about this time we identified what we thought was a bug in Google’s ad serving algorithm.
We noticed that as we lowered bids on high traffic general terms that didn’t convert well, much more specific keyword ads started being served in their place. This had three annoying consequences:
- The more specific KW has a higher bid, so we end up paying more for the traffic than it’s worth to us;
- The landing page is less targeted, so we’re taking bad traffic and landing it on the wrong page, making it even less valuable; and
- Because of the poor quality traffic pouring in on what had been a high quality term, we bid that term down, meaning we also get less of the high quality traffic the term normally draws.
Our reps at Google at the time told us that this couldn’t happen, that the exact matched KW would always get precedence over the broad mis-match so what we were seeing…er…wasn’t happening…
We knew we were right about the phenomenon, and given their protestations that exact matches always won, we suspected it was a mistake on Google’s part. We asked them if there was logic that makes exceptions to the exact match precedence if the ad is paused. They said “yes”. We then suggested that back in the day when there was a minimum bid, an ad bid below that minimum might also be considered paused. They concurred. Then we suggested that when the minimum bid was replaced by the first page minimum bid, perhaps the code wasn’t updated, and any ad that didn’t meet that minimum would be treated as “paused”. They said at the time: “That shouldn’t be the case; that isn’t what we intended; if it’s a mistake we’ll fix it.”
Googlers with good authority now tell me that the bug doesn’t exist — there’s no reference to the first page minimum in the code that drives the rankings. Instead the explanation is simply that the more specific KW must have a higher QS than the exact matched, more general KW…or that its combination of bid and QS is higher than the exact matched term’s. We’re told this shouldn’t really happen if the QS of the generic ad is good, but our data suggests otherwise. Just a cursory look at our data showed plenty of instances where an exact match ad with a QS of 10 was passed over for a broad matched ad with a higher bid.
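To see how a broad match with a perfect-QS exact match sitting right there can still win, consider a simplified ad-rank model — bid times Quality Score, which tracks Google’s published description of ad rank but is certainly not their actual serving code. The keywords, bids, and QS numbers here are made up for illustration:

```python
# Hypothetical sketch: simplified ad rank as bid x Quality Score.
# This is an assumption based on Google's public description of
# ad rank, not their real algorithm; all numbers are invented.

def ad_rank(bid, quality_score):
    """Simplified ad rank: max CPC bid times Quality Score."""
    return bid * quality_score

# Exact-matched generic term: perfect QS of 10, modest bid.
exact = {"kw": "[foo bar]", "bid": 0.40, "qs": 10}
# Broad-matched specific term: lower QS but a much higher bid.
broad = {"kw": "left-handed steel foo bar", "bid": 0.75, "qs": 7}

exact_rank = ad_rank(exact["bid"], exact["qs"])  # 0.40 * 10 = 4.0
broad_rank = ad_rank(broad["bid"], broad["qs"])  # 0.75 * 7  = 5.25

# The broad match outranks the exact match despite its QS of 10.
winner = exact if exact_rank >= broad_rank else broad
print(winner["kw"])  # left-handed steel foo bar
```

Under this model, any sufficiently large bid gap on the specific term overwhelms even a perfect QS on the exact match — which is consistent with what we saw in the account data.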
Managing this self-competition with the current tool set is not just cumbersome, it’s impossible to do well. Adding every keyword as an exact matched negative for every other keyword in the account is unworkable. Bombing in all the general keywords as exact match negatives for all the more specific keywords is doable but time intensive and therefore costly.
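The “doable but time intensive” option above — bombing the general terms in as exact match negatives for every more specific ad group — can at least be scripted. This is a hypothetical sketch with made-up keyword lists and ad group names, and the `-[term]` notation is just shorthand for an exact-match negative, not any particular tool’s upload format:

```python
# Hypothetical sketch of generating exact-match negatives for the
# general terms across all the more-specific ad groups. Keyword
# lists, ad group names, and the "-[term]" notation are invented
# for illustration.

general_terms = ["foo bar", "foo", "bar widgets"]
specific_ad_groups = {
    "left-handed-foo-bars": ["left-handed steel foo bar"],
    "foo-bar-cases": ["waterproof foo bar case"],
}

# Every general term becomes an exact-match negative in every
# more-specific ad group, so the specific ads can't be served
# against the general queries.
negatives = {
    group: ["-[" + term + "]" for term in general_terms]
    for group in specific_ad_groups
}

print(negatives["foo-bar-cases"])
# ['-[foo bar]', '-[foo]', '-[bar widgets]']
```

The cost is the cross product: the negative list in every specific ad group grows with the number of general terms, which is exactly why doing this by hand across a large account is so expensive.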
Many advertisers simply give up on broad match to prevent the shenanigans. We’d say that’s throwing out the baby with the bathwater, but we understand the frustration. We do what we can, but anyone claiming to have this problem solved is either delusional or deceptive.
As I noted a couple of years back, one way Google could effectively self-destruct would be to go too far down the path of letting higher bid/lower CTR KWs take precedence over the right KWs. As described above, spraying the traffic around unpredictably makes all bidding systems less efficient, and hence spend less money to reach the same efficiency target; the bigger danger, though, is alienating the shoppers who use the ads.
Advertisers bid ads down for good reason, like when inventory is thin. If other ads take their place and keep drawing in traffic that won’t convert, it’s a disservice to the user as well as the advertiser.
Sometimes I rant around the office saying things like: “Advertisers should SUE! Here we’ve given Google instructions as to how much we’re willing to pay for people who type in “Foo Bar”, but when someone types in “Foo Bar” Google decides to serve my ad for “Left-handed steel foo bar” which has a much higher bid!!! That should be illegal!!!”
Google’s engineers genuinely believe they can algorithmically pick better ads to serve than the advertisers can. This may be true for badly managed accounts, but is not true for well-managed programs. If this notion that sometimes humans are smarter than the machines is offensive to engineers, perhaps it could be framed in the language of “crowd sourcing.”
If the engineering team is willing to acknowledge that some folks might actually choose ads, landing pages, and bids rationally, there may be a profit maximization angle as well. Google is not “evil”; it is a publicly traded company looking to grow its top and bottom line, just like us. My argument here isn’t that they can’t do this legally, nor is it that they shouldn’t do it ethically. The argument is that this isn’t a good business decision on their part.
Bing’s path to victory lies not in stealing Google’s organic traffic, but in taking Google’s shopping traffic. That’s what Cashback is about, and if Google places short term revenue maximization over long-term ad relevance they’re opening the door for Bing to step through.
If average users decide that “Google is great for research, but go to Bing for shopping” Microsoft’s big investment might just pay off.