THE RKGBLOG

A Few Questions for James Zolman

James Zolman’s “exposé” on bidding automation has gotten a lot of attention. Congrats to him! I have to admit, the first time I read through it I was sputtering with anger, thinking he didn’t know what he was talking about. On subsequent review I get it: he’s not really talking about the proprietary tools RKG and other paid search agencies have; he’s talking about the bid management software currently available for license. But it seems like he’s only talking about the worst systems out there, so I’m left with some questions for James:

  • You make the point that many tools claiming to provide bid management automation actually require a human to pull the levers to make them work reasonably well, which, you argue, isn’t really automation at all. The last time I looked around I certainly saw many tools like the ones you describe, which did the “heavy” lifting of physically setting bids but didn’t do much to calculate smart bids for the advertiser. However, have you checked out any of the better platforms? Certainly the better systems allow advertisers to set targets and walk away confident that the efficiency targets will be hit.
  • You then make the case that automated bid management isn’t even conceptually possible for anyone except the engines. That whole thread didn’t make any sense to me. You seemed to say that for bid management to work, it has to test performance at each conceivable bid to figure out what works best for the advertiser. Why is that? A good bid management system measures (through fancy stats modeling) a best estimate of the value of traffic from a given ad and sets the bid to the fraction of that value the advertiser is willing to spend (a minimal sketch of this arithmetic appears just after this list). Where is the need for testing? Indeed, it is precisely because the value of traffic doesn’t depend on position that testing is a waste of money.
  • The portfolio theory folks will say you have to determine the bid landscape around each ad to find the right combination of bids to maximize ROI for a particular budget, and that is likely impossible to do at scale across hundreds of thousands of keywords. However, Bid Simulator data is most certainly useful for this purpose, and again it takes away much of the need for guessing at the landscape or testing to measure it. Are you familiar with Bid Simulator data?
  • You argue that bid management systems should react on a dime to changes in performance due to seasons, promotions, etc. Surely you’re familiar with statistical noise. Do you not agree that a system that pays undue attention to what happened yesterday is going to offer whipsaw bid management, given the spiky nature of paid search at the keyword level or below? The second sketch after this list illustrates the problem.
  • You argue that full, hands-off automation is the goal. Why is that? We’d argue that knowledgeable, attentive analysts who realize that sweaters are on sale next week have a significant advantage over machines. The analyst can anticipate conversion rate changes and increase the bids calculated from past performance by the appropriate fraction. Machines react late, missing opportunity at the beginning and wasting money after the sale or the season is over. Moreover, Google and the engines have no clue what inventory positions look like, or how those positions are likely to impact conversion rates. Certainly, analysts shouldn’t have to continually manipulate rules to try to hit an efficiency objective; that’s basic functionality. But letting the machine do everything falls pretty far short of the goal.

    The analogy we’d make is that a car is a useful tool. It isn’t fully automated, but that’s actually a good thing. Those folks who’ve tried to build cars that drive themselves haven’t had a ton of success, and I suspect Danica Patrick would drive circles around the models that are fully automated. Powerful tools + smart analysts is the way to go.

  • You seem to argue that only the engines have access to PhD statisticians. Are you aware that RKG, EF, TSA, Kenshoo, Marin and certainly others all have PhDs on staff?
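To make the bidding arithmetic from the second bullet concrete, here is a minimal sketch. The function names and numbers are hypothetical and simplified far beyond what any production system (ours included) does, but the core logic really is this direct:

    # Minimal sketch of value-based bidding (hypothetical; not RKG's actual system).
    # Idea: estimate what a click is worth from trailing performance data, then
    # bid the fraction of that value the advertiser is willing to spend.
    # No bid-by-bid "testing" is required.

    def estimate_value_per_click(clicks, revenue):
        # Best estimate of revenue per click over a trailing window.
        # A real system uses richer statistical modeling and pools
        # sparse keywords with related ones to tame the noise.
        if clicks == 0:
            return 0.0
        return revenue / clicks

    def calculate_bid(value_per_click, efficiency_target):
        # efficiency_target is the fraction of traffic value the advertiser
        # will spend on ads, e.g. 0.25 for a 25% cost-to-sales ratio.
        return value_per_click * efficiency_target

    # Example: 400 clicks drove $2,000 in sales over the trailing window.
    value = estimate_value_per_click(clicks=400, revenue=2000.00)  # $5.00 per click
    bid = calculate_bid(value, efficiency_target=0.25)             # $1.25 max CPC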
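The whipsaw problem and the analyst adjustment from the last two bullets fit the same toy form. Again, the numbers are made up:

    # Why a one-day lookback whipsaws bids on spiky keyword-level data,
    # and how an analyst adjustment layers on top (hypothetical numbers).
    daily_sales_per_click = [4.80, 0.00, 9.10, 5.20, 0.00, 6.30, 4.60]

    # Naive system: react to yesterday alone. The value estimate swings
    # from $0.00 after a zero-sale day to $9.10 after a lucky one,
    # and the bid whipsaws right along with it.
    naive_value = daily_sales_per_click[-1]

    # Smarter: smooth over a longer window so the noise cancels out.
    smoothed_value = sum(daily_sales_per_click) / len(daily_sales_per_click)

    # Analyst layer: sweaters go on sale next week and conversion rates
    # should rise ~20%, so scale the machine's bid up in advance instead
    # of letting it react late and miss the start of the promotion.
    anticipated_lift = 1.20
    bid = smoothed_value * 0.25 * anticipated_lift  # 25% efficiency target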

I don’t mean to disparage your piece. Many of your criticisms of the worst systems out there are spot-on. However, it does seem like a bit of a straw man argument. By trashing the worst systems you’re sort of implying that all systems fall victim to the same problems, which simply isn’t so.

Happy to post your comments!

Comments
6 Responses to “A Few Questions for James Zolman”
  1. James Zolman says:

    Hi George, Excellent questions! I barely popped in at the end of the day – I’m already working on a response. I just wanted to let you & your readers know I’m here and will be back w/ answers soon! Thanks! -james

  2. Thank you, James, for taking the time to respond, and for taking up the “challenge” in the spirit intended. We eagerly await your response!

  3. James Zolman says:

    Hi George,

    Thank you again for your questions!! I have fallen behind on my blogging by a couple weeks…I’m confident I have some of your questions addressed in a draft post I have just not settled on publishing yet. Also, I don’t know if you had a chance to read the first post in the series, but it might clear up some of your other questions too…especially the one regarding trends, potential daily optimization problems, etc.

    For the record, you are spot on with your assessment of my post. ;) The message is intended to educate and perhaps “call out”, in a way, those license-based companies that advertise automation when there are much more advanced, truly automated bid management solutions that can and should use that “automatic bid optimization” language…

    I would argue that the post was entirely FOR RKG, EF and others…my post, in my mind, was supposed to heavily advocate for solutions like RKG. At the same time, I was trying to be somewhat even-handed by not mentioning any specific software in that post and that left it open to interpretation.

    +Have you checked out any of the better platforms?

    Yes – in fact, that’s what triggered my post. I personally was unaware of the type of technology RKG (and others) are developing or currently have until earlier this year…and what added to my confusion was everyone claiming ‘automation’ when most are not automated compared to their competitors.

    +Where is the need for testing?

    My answer does not do this question justice…but first, I agree with you that position does not matter. At the same time – I don’t agree that testing at every penny bid (within reason/expected outcome range) does not matter. One can increase their unique visitor value through ad copy (and on-site optimization) and each ad can have varying influence based on the keyword searched when that ad is displayed. Quality scores, click through rates and competitors enter the picture too and one must test at every possible bid to win out against the competitors who might/might not have the same quality score or similar predictive bidding technology. There are a lot of variables to consider…if I can consistently increase my unique visitor value, then I want to ensure I am receiving the lowest possible cost per click at the highest possible volume as long as I don’t exceed that unique visitor value. I need a solution that automatically grows with my consistent increase in unique visitor value.

    +Are you familiar with Bid Simulator data?

    I’ll admit that I am not familiar with the data/math behind bid simulation. In my opinion, bid simulation models are much more complex than the general paid search director/manager (I probably fit in this group…) should try tackling with manually created bid rules that generally operate on finite date-based rolling averages… :) I would rather let statisticians/software handle and adjust based on bid simulation models…I think that’s what I was trying to cover when I mention predictive modeling in my post(s).

    +Do you not agree that a system that pays undue attention to what happened yesterday is going to offer whipsaw bid management given the spiky nature of paid search at the keyword level or below?

    Yes, I agree.

    +Are you aware that RKG, EF, TSA, Kenshoo, Marin and certainly others all have PhDs on staff?

    Yes, I’m very aware of that. :) Perhaps I need to wrap up and publish my software post…because I certainly think the search engines offer valuable tools ONLY for those who want to tackle their management manually/themselves and have less than $100k monthly ad spend in general. I am actually fully in favor of RKG, EF and others, and include them on my list of truly automated solutions with teams of PhDs on staff…I definitely did not intend my post(s) to be engine-centric. Quite the contrary, given their bias! :)

    Thoughts? Questions?

    Thanks again for this dialog – I quite enjoy it!

  4. James, thanks so much for the kind words and the detailed response.

    It sounds like our only point of minor disagreement is the testing piece. We’d argue that landing page/copy testing, coupled with match types, syndication settings, etc., is independent of bidding. To the extent that these measurably impact value per click, the bidding will adjust accordingly and automatically :-).

    Bid Simulator is Google’s tool for revealing the volume/cost trade-off for higher-traffic keywords, thus giving a picture of the bidding landscape. It is “only” a snapshot of the landscape over the previous 7 days, so it doesn’t tell us what’s happening as of right now, but it’s still quite useful for gaming the “lower margin percentage but higher volume = more money” calculations associated with portfolio theory.
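    For readers who want the flavor of it, here’s a toy example of putting landscape data to work. The numbers and the data format are made up and much simpler than Google’s actual export:

        # Toy use of Bid Simulator-style landscape data (made-up numbers;
        # the real export has a different format). Each tuple is
        # (bid, estimated_clicks, estimated_cost) over the trailing window.
        landscape = [
            (0.50, 120, 40.00),
            (0.75, 200, 95.00),
            (1.00, 260, 170.00),
            (1.50, 300, 290.00),
        ]
        value_per_click = 1.10  # best-estimate sales per click for this keyword

        def expected_profit(point):
            bid, clicks, cost = point
            return clicks * value_per_click - cost

        # The "lower margin percentage but higher volume = more money" trade-off:
        # the winning point here is the $0.75 bid, not the one buying the most clicks.
        best_bid, best_clicks, best_cost = max(landscape, key=expected_profit)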

    Good stuff, sir, thanks for clarifying!
