A Few Questions for James Zolman
James Zolman’s “exposé” on bidding automation has gotten a lot of attention. Congrats to him! I have to admit, the first time I read through it I was sputtering with anger, thinking he didn’t know what he was talking about. On subsequent review I get it: he’s not really talking about the proprietary tools RKG and other paid search agencies have; he’s talking about the bid management software currently available for license. But it seems like he’s only talking about the worst systems out there, so I’m left with some questions for James:
- You make the point that many tools claiming to provide bid management automation actually require a human to pull the levers to make it work reasonably well, which, you argue, isn’t really automation at all. The last time I looked around I certainly saw many like you describe which did the “heavy” lifting of physically setting bids, but didn’t do much to calculate smart bids for the advertiser. However, have you checked out any of the better platforms? Certainly the better systems allow advertisers to set targets and walk away confident that the efficiency targets will be hit.
- You then make the case that automated bid management isn’t even conceptually possible for anyone except the engines. That whole thread didn’t make any sense to me. You seemed to say that for bid management to work it has to test performance at each conceivable bid to figure out what works best for the advertiser. Why is that? A good bid management system measures (through fancy stats modeling) a best-estimate for the value of traffic from a given ad and sets the bid to the fraction of that value an advertiser is willing to spend. Where is the need for testing? Indeed, it is precisely because the value of traffic doesn’t depend on position that testing is a waste of money.
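To make that concrete, here’s a toy sketch of the value-based approach described above. The function name, the $2.00 value estimate, and the 25% efficiency target are all made-up illustrations, not any vendor’s actual model:

```python
# Illustrative only: bid a fraction of the estimated value of a click,
# rather than testing every possible bid position.

def value_based_bid(revenue_per_click: float, target_efficiency: float) -> float:
    """Set the bid to the fraction of estimated click value the advertiser
    is willing to spend.

    target_efficiency is a cost-to-sales target, e.g. 0.25 means
    "spend at most 25 cents per dollar of revenue".
    """
    return revenue_per_click * target_efficiency

# Suppose the stats model estimates a keyword's traffic is worth $2.00/click
# and the advertiser targets a 25% cost-to-sales ratio:
bid = value_based_bid(2.00, 0.25)
print(round(bid, 2))  # 0.5
```

No bid landscape testing appears anywhere in the calculation: the bid follows directly from the value estimate and the efficiency target.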
- The portfolio theory folks will say you have to determine the bid landscape around each ad to find the right combination of bids to maximize ROI for a particular budget, and that is likely impossible to do at scale across hundreds of thousands of keywords. However, bid simulator data is most certainly useful for this purpose, and again, takes away much of the need for guessing at the landscape or testing to measure it. Are you familiar with Bid Simulator data?
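For readers unfamiliar with how simulator data gets used, here’s a rough sketch of the portfolio idea: given landscape points per keyword, greedily raise the bid wherever the next increment buys conversions most cheaply, until the budget runs out. The keywords, numbers, and the greedy heuristic itself are my illustration, not any particular platform’s algorithm (and greedy allocation is a heuristic, not a guaranteed optimum):

```python
# Illustrative portfolio-style allocation from bid-simulator-like data.
# All landscape points below are invented for the example.

def allocate(landscapes, budget):
    """Greedily raise bids where marginal conversions per dollar are highest.

    landscapes: {keyword: [(bid, cost, conversions), ...]} sorted by bid.
    Returns {keyword: chosen_bid} within the budget.
    """
    choice = {kw: 0 for kw in landscapes}  # current landscape index per keyword
    spend = sum(points[0][1] for points in landscapes.values())
    while True:
        best, best_rate = None, 0.0
        for kw, points in landscapes.items():
            i = choice[kw]
            if i + 1 >= len(points):
                continue  # no higher bid level left for this keyword
            d_cost = points[i + 1][1] - points[i][1]
            d_conv = points[i + 1][2] - points[i][2]
            if d_cost > 0 and spend + d_cost <= budget:
                rate = d_conv / d_cost  # marginal conversions per dollar
                if rate > best_rate:
                    best, best_rate = kw, rate
        if best is None:
            break  # budget exhausted or no worthwhile upgrade remains
        i = choice[best]
        spend += landscapes[best][i + 1][1] - landscapes[best][i][1]
        choice[best] = i + 1
    return {kw: landscapes[kw][choice[kw]][0] for kw in landscapes}

landscapes = {
    "sweaters": [(0.50, 10.0, 4), (0.75, 18.0, 6), (1.00, 30.0, 7)],
    "cardigans": [(0.40, 8.0, 2), (0.60, 14.0, 5)],
}
print(allocate(landscapes, 40.0))  # {'sweaters': 0.75, 'cardigans': 0.6}
```

The point is that the landscape comes from the engine’s simulator data rather than from live testing, so nothing here requires burning money probing each bid level.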
- You argue that bid management systems should react on a dime to changes in performance due to seasons, promotions, etc. Surely you’re familiar with statistical noise. Do you not agree that a system that pays undue attention to what happened yesterday is going to offer whipsaw bid management given the spiky nature of paid search at the keyword level or below?
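Here’s a quick sketch of why reacting to yesterday alone whipsaws. The daily conversion rates and the smoothing constant are invented for illustration; the point is just that a weighted average damps keyword-level noise that a “yesterday only” system would chase:

```python
# Illustrative only: spiky daily data vs. an exponentially weighted average.

def ewma(values, alpha=0.2):
    """Exponentially weighted moving average; alpha=0.2 is an assumed choice."""
    avg = values[0]
    for v in values[1:]:
        avg = alpha * v + (1 - alpha) * avg
    return avg

daily_cr = [0.02, 0.00, 0.05, 0.01, 0.03, 0.00, 0.04]  # spiky keyword-level data

print(round(ewma(daily_cr), 4))  # stable estimate near the long-run rate
print(daily_cr[-2], daily_cr[-1])  # "yesterday only" swings from 0.00 to 0.04
```

A system bidding off the raw last-day number would double bids one day and zero them the next; the smoothed estimate barely moves.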
- You argue that full, hands-off automation is the goal. Why is that? We’d argue that knowledgeable, attentive analysts who realize that sweaters are on sale next week have a significant advantage over machines. The analyst can anticipate conversion rate changes and increase the bids calculated from past performance by the appropriate fraction. Machines react late, missing opportunity at the beginning and wasting money after the sale or the season is over. Moreover, Google and the other engines have no clue what inventory positions look like, or how those positions are likely to impact conversion rates. Certainly, analysts shouldn’t have to continually manipulate rules to try to hit an efficiency objective; that’s basic functionality. But letting the machine do everything is pretty far short of the goal.
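The analyst-override idea is simple enough to show in two lines. The 30% anticipated lift for the sweater sale is a made-up figure, and the function is mine, not any tool’s feature:

```python
# Illustrative only: an analyst who knows a sale starts next week scales the
# model-calculated bid by the anticipated conversion-rate lift, rather than
# waiting for the machine to notice the change after the fact.

def adjusted_bid(model_bid: float, anticipated_cr_lift: float) -> float:
    """Scale the historical-data bid by the expected conversion-rate change."""
    return model_bid * (1 + anticipated_cr_lift)

print(round(adjusted_bid(0.50, 0.30), 2))  # 0.65 ahead of the sweater sale
```

The machine supplies the $0.50 baseline from past performance; the human supplies the foresight.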
The analogy we’d make is that a car is a useful tool. It isn’t fully automated, but that’s actually a good thing. Those folks who’ve tried to build cars that drive themselves haven’t had a ton of success, and I suspect Danica Patrick would drive circles around the models that are fully automated. Powerful tools + smart analysts is the way to go.
- You seem to argue that only the engines have access to PhD statisticians. Are you aware that RKG, EF, TSA, Kenshoo, Marin and certainly others all have PhDs on staff?
I don’t mean to disparage your piece. Many of your criticisms of the worst systems out there are spot-on. However, it does seem like a bit of a straw man argument. By trashing the worst systems you’re sort of implying that all systems fall victim to the same problems, which simply isn’t so.
Happy to post your comments!