Bidding Technology: Requirement #1
As search marketing becomes increasingly sophisticated, I thought it might be useful to throw out some “requirements” for any system meaningfully vying for supremacy in bidding technology. These will come out roughly weekly until I run out of ideas :-)
I’m speaking on this topic at SMX Advanced in Seattle next week, but I can’t go into much detail on that panel, so I’ll take the opportunity to flesh each point out here.
Bidding is second in importance only to an exhaustive, well-thought-out keyword list for generating results in search. Contrary to what some would say, not all bidding systems are created equal, and the performance differences are material. If you’re in the market for an SEM firm, let these “requirements” help guide your process.
Requirement #1: Proprietary System with Strong Statistical Foundation
You can’t be the leader in the space if you’re using someone else’s tool. Even if the tool you’re using is pretty good, the fact that you’re not elbow deep in the bowels of the thing means your flexibility is limited, your understanding of how it works is shallow, and you’re beholden to someone else for important new innovations.
Consider, for example, a tale of two keywords: Keyword 1 has a conversion rate of 5.0% and an AOV of $30, for a sales-per-click value of $1.50. Keyword 2 has a conversion rate of 0.2% and an AOV of $750, for the very same $1.50 per click. The values are identical, yet most bidding systems will have a hard time reaching that conclusion for keyword 2 consistently: conversions arrive so rarely that any reasonable sample of clicks produces a noisy estimate. The calculated value will likely fluctuate wildly, and its bids along with it.
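To see why keyword 2 is so much harder, here is a minimal simulation sketch (the function name and sample sizes are my own, not from any real bidding system; it assumes a fixed order value per conversion, so AOV variance would make real life even noisier). It repeatedly estimates sales-per-click from finite 1,000-click samples for each keyword:

```python
import random
import statistics

def simulate_spc(conv_rate, aov, clicks, trials, rng):
    """Repeatedly estimate sales-per-click from a finite sample of clicks.

    Each trial draws `clicks` independent Bernoulli conversions at
    `conv_rate`, values every conversion at a fixed `aov`, and returns
    the resulting per-click estimates across all trials.
    """
    estimates = []
    for _ in range(trials):
        conversions = sum(1 for _ in range(clicks) if rng.random() < conv_rate)
        estimates.append(conversions * aov / clicks)
    return estimates

rng = random.Random(42)  # seeded so the run is reproducible
kw1 = simulate_spc(0.05, 30.0, clicks=1000, trials=500, rng=rng)   # 5.0% conv, $30 AOV
kw2 = simulate_spc(0.002, 750.0, clicks=1000, trials=500, rng=rng)  # 0.2% conv, $750 AOV

print(f"kw1: mean ${statistics.mean(kw1):.2f}, stdev ${statistics.stdev(kw1):.2f}")
print(f"kw2: mean ${statistics.mean(kw2):.2f}, stdev ${statistics.stdev(kw2):.2f}")
```

Both estimates average out to $1.50, but keyword 2's estimate swings roughly five times as widely: with a fixed order value, the standard error of the sales-per-click estimate is AOV · sqrt(p(1 − p)/n), and a huge AOV multiplied by a tiny, noisy conversion rate is a recipe for wild bid swings unless the system accounts for sample size.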
We’re constantly making adjustments/improvements to how our system functions. This is a complicated game, the rules change, the objectives change, and we learn more and more as we go. If those changes don’t get folded into the system then you get trampled by the competition.
Some companies present their bidding system as “The Perfect Machine”, give it some cute trademarked name and tout it as the ultimate. I’m deeply suspicious of any firm that wouldn’t describe their system as a work in progress. Sure, we play the marketing game of throwing Alan’s PhD in Stats from MIT at folks and suggesting that our smart math guy is smarter than our competitor’s smart math guy, but the reality is that human behavior is hard to model, and what we’re talking about here is figuring out patterns in human behavior.
We’re still learning and innovating, but that’s what having a proprietary system is all about.
Stay tuned for more in the series, and swing by my panel discussion in Seattle if you get a chance!