
Enhanced Campaigns: The New Bidding Challenges and Opportunities

The implications of Google’s new enhanced campaigns are far-reaching. This is the most significant structural change to the AdWords platform since “Premium Placements” went away in early 2004.

Version 1.0 of Enhanced Campaigns should rightly be viewed as a step towards simplification for SMBs and less sophisticated paid search marketers, and a step backwards — blunting the available controls — for more sophisticated players.

The initial range of modifications (time of day/day of week, geography, and device type) doesn’t give us any levers we don’t currently have through campaign replication, and in fact takes away some of the controls (tablet/desktop split, carriers, etc.) that we currently have.

However, this architectural change makes it possible for Google to add more and more levers (screen size of device, demographics, browser, connection type, velocity of mobile device, user’s past behavior, etc.); we think that direction is both interesting and promising.

It also forces companies with proprietary technology, like RKG, to restructure their bid management systems in four fundamental ways.

1) Different data

Up to now, good bid management systems pulled in information from Google about ads that only Google could know precisely (numbers of clicks, impressions, costs, average position, etc.) and combined that with other information known about the ad: the keyword, the adgroup, the campaign, all the relevant campaign settings and, of course, the associated conversion metrics.

RKG’s system was designed to allow our analysts to attach any number of other attributes an ad might possess, shattering the rigid hierarchical data structure. For example, hierarchically structured systems don’t understand the connections between campaigns that might be closely related, or even duplicates that are targeted differently. Moreover, we can create thematic linkages connecting ads (being “a gift for a guy,” say, which links drills and pizza ovens) that other systems can’t catch.

Our algorithms are designed to study all these attributes of an ad to get a more complete picture of what impacts traffic value in what ways, a critical element of addressing the problem of sparse data.
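To make the contrast concrete, here’s a toy sketch of attribute-tagged ads. This is purely illustrative (the schema and attribute names are invented for this post), not our actual data model:

```python
# Toy sketch: ads carry arbitrary attribute tags instead of living only
# at one node of an account > campaign > adgroup tree. (Invented schema.)
ads = [
    {"keyword": "cordless drill", "campaign": "tools",
     "attributes": {"gift for a guy", "hardlines"}},
    {"keyword": "outdoor pizza oven", "campaign": "kitchen",
     "attributes": {"gift for a guy", "hardlines"}},
]

def ads_with(attribute, ads):
    """Pool ads across any shared attribute, regardless of hierarchy."""
    return [ad["keyword"] for ad in ads if attribute in ad["attributes"]]

# Drills and pizza ovens surface together despite different campaigns.
print(ads_with("gift for a guy", ads))
```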

In the new world of Enhanced Campaigns, bid management platforms will also need data from Google about the context of each click. We’ve been able to glean some of this from click streams already (the user’s search, referring domain, device type, operating system, browser, IP address), and we’ve been able to use it to inform campaign structures to the extent that Google gave us the ability to target.

However, there are elements that Google knows that we can’t know, which could (and hopefully will) be passed along as well: geography beyond simple IP matching; screen size, since the distinction between a tablet and a phone is blurring; connection type; velocity of device, since users within half a mile of a store moving at 60 mph might be through traffic, whereas someone moving at 2 mph is more likely to walk in; past user behavior…the possibilities are endless and really interesting.

We’ll need to measure the impact each of these variables has on the value of traffic to the advertiser so that we can set the modifier for each exposed variable appropriately.
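In the simplest case, the measurement reduces to comparing a segment’s value per click against the keyword’s average. A toy sketch, with the function and numbers invented for illustration:

```python
# Toy sketch: turn an observed segment-vs-average value ratio into a
# bid modifier percentage. (Invented numbers, not a production model.)
def modifier_for_segment(segment_value_per_click, average_value_per_click):
    """Percentage adjustment implied by a segment's relative value."""
    return (segment_value_per_click / average_value_per_click - 1) * 100

# e.g. tablet clicks worth $3.90 against a $3.00 keyword average:
print(round(modifier_for_segment(3.90, 3.00)))  # 30 -> set the slider to +30%
```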

RKG’s system of flexible attributes lines up well with the new world order. We know all kinds of attributes associated with the keyword, and marrying that with information about how the context of the click impacts performance is right in our wheelhouse, since we already have applicable data modeling algorithms. Platforms wedded to account-campaign-adgroup hierarchy may have to effectively rebuild from scratch to do this well.

There are some important changes that have to be implemented by Google to make this happen. They have to append the Google Click ID to the URL so that we can connect their data about that click to our knowledge of what happened later. Our ability to connect marketing touch paths and our clients’ back-end systems to identify customer types, offline conversion metrics, lead valuations, returns, margins, etc. is a critical piece. Google hasn’t built that piece just yet.
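Conceptually, the join is simple once the click ID (the gclid URL parameter) survives the round trip. A toy sketch, with invented field names and values:

```python
# Toy sketch of the join we need, keyed on the Google Click ID.
# All field names and values here are made up for illustration.
google_clicks = {
    # gclid -> context only Google knows at click time
    "CjwKabc123": {"device": "tablet", "geo": "Charlottesville, VA"},
}

backend_outcomes = {
    # gclid captured from the landing URL -> what happened later
    "CjwKabc123": {"revenue": 120.00, "returned": False, "customer": "new"},
}

for gclid, context in google_clicks.items():
    outcome = backend_outcomes.get(gclid)  # None if the click never converted
    if outcome:
        print(gclid, context, outcome)
```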

Moreover, Google will have to make all the necessary data AND controls available through the API so that sophisticated players can play the game well. Asking advertisers to trust Google with conversion metrics and also to trust it to spend their money in the optimal fashion would seem like a big leap, not to mention a potentially anti-competitive practice in the eyes of the DOJ. We’re pretty confident that it will all play out the right way, but it’s important to note that where it is and where it needs to go are quite different at the moment.

2) Two levers instead of one

There is a second, quite interesting challenge here, though. We have to make bid adjustments in two places instead of one.

Let’s look at this in a greatly simplified system.

Google gives us a new lever to differentiate between left-handed and right-handed people (I’m hoping this is an absurd example, but it might not be). Simultaneously, they start identifying, through the Click ID data, which users are thought to be left-handed vs. right-handed, so that we can figure out how that impacts both website conversion and the post-transaction behavior of those groups.

One group might convert better on site, but have a much higher return rate or lifetime value for some reason, and we’d need to be able to study all of that to make good bidding decisions.

We measure the average value of traffic for the keyword “foo bar” to be $3 and the advertiser is willing to spend 1/3 of that value on marketing. That gets us a base bid of $1.

We know that for the campaign associated with “foo bar,” traffic from left-handed people is 30% more valuable than average, while traffic from right-handed people is 30% less valuable than average.

We set our bid to $1, our left-handed adjustment slider to +30% and our right-handed adjustment to -30%.

Smarter bidding means we get more of the higher-value traffic and less of the lower-value traffic, which is exactly what we wanted.

HOWEVER, changing the mixture means the average value of the traffic from that keyword has increased, resulting in a higher base bid. That in turn means the modifier for left-handed users is set too high, and the right-handed modifier isn’t low enough.

If you don’t also adjust the modifiers continuously, the bids will increase beyond the threshold tolerated by the advertiser. Instead of setting a bid for an ad, we have to set the bid and reset the modifiers at the same time to deal with changes in the traffic value caused by the modifiers themselves.
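Here’s that feedback loop in miniature. This is a toy sketch of the arithmetic from the example above, not our production logic; every name and number is invented:

```python
# Toy sketch of the two-lever recalibration for the example above.
SPEND_FRACTION = 1 / 3  # advertiser spends 1/3 of traffic value on marketing

def base_bid_and_modifiers(segment_values, traffic_mix):
    """From value-per-click and click share per segment, compute the
    base bid plus the modifier (a +/- percentage) for each segment."""
    avg_value = sum(segment_values[s] * traffic_mix[s] for s in segment_values)
    base_bid = avg_value * SPEND_FRACTION
    modifiers = {s: round((segment_values[s] / avg_value - 1) * 100, 1)
                 for s in segment_values}
    return round(base_bid, 2), modifiers

values = {"left": 3.90, "right": 2.10}  # a $3.00 average at a 50/50 mix

print(base_bid_and_modifiers(values, {"left": 0.5, "right": 0.5}))
# (1.0, {'left': 30.0, 'right': -30.0})

# The modifiers themselves shift the mix toward the higher-value segment,
# so the base bid rises and BOTH modifiers must be reset at the same time:
print(base_bid_and_modifiers(values, {"left": 0.6, "right": 0.4}))
# (1.06, {'left': 22.6, 'right': -34.0})
```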

Modifiers act both individually and in combination, yet can only be set independently. This creates some awkwardness: certain combinations may show dependencies that the independent multiplier model can’t address.
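Assuming the modifiers stack multiplicatively, which is how the announced design appears to work, a toy calculation shows the problem: two independent sliders always produce a combined adjustment, whether or not the combination reflects reality.

```python
# Toy sketch: independent sliders combine into one effective bid.
def effective_bid(base_bid, modifier_percentages):
    """Apply each +/- percentage modifier to the base bid in turn."""
    bid = base_bid
    for pct in modifier_percentages:
        bid *= 1 + pct / 100
    return round(bid, 2)

# A mobile slider of +20% and a geo slider of +10% yield +32% combined,
# even if mobile traffic from that geo actually performs no better.
print(effective_bid(1.00, [20, 10]))  # 1.32
```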

The fact that the modifiers live at the campaign level adds other challenges. The modifiers will have a different impact on traffic volumes for different keywords, making the blended average impact different for the campaign than it is for the individual adgroups and keywords within it, and making targeting less precise than ideal.
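A toy illustration of that blending problem, with the keywords and numbers invented:

```python
# Two keywords share one campaign-level mobile modifier, but their
# ideal settings differ. A click-weighted average is one plausible
# compromise; either way, each keyword ends up mistargeted.
keywords = {
    # keyword: (mobile clicks, ideal mobile modifier %)
    "foo bar": (900, 30),
    "baz qux": (100, -20),
}

total_clicks = sum(clicks for clicks, _ in keywords.values())
blended = sum(clicks * mod for clicks, mod in keywords.values()) / total_clicks
print(blended)  # 25.0 -- too low for "foo bar", far too high for "baz qux"
```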

3) Rethinking campaign structures

Paid search managers currently configure campaigns based on thematic similarities (category, subcategory, destination, whatever) and shared settings (network targeting, geography, device, yadda yadda). As data comes in, we might need to further refine this to include similarities in the way modifiers need to be set.

Keywords that used to sit in the same campaign because of their thematic and settings needs might now be separated because the traffic triggering those ads behaves very differently depending on, let’s say, screen size: one converts badly on small tablets, the other converts well. We won’t know until we see the data, but I can see this requiring some real rethinking of how we create campaigns.
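One way to think about it, sketched with invented numbers: keywords would share a campaign only when their ideal modifier profiles are close enough to tolerate one shared setting.

```python
# Toy sketch: group keywords by how similar their ideal modifiers are.
ideal_modifiers = {
    "kw_one": {"small_tablet": -40, "phone": 10},
    "kw_two": {"small_tablet": 25, "phone": 10},
}

def can_share_campaign(a, b, tolerance=15):
    """True if every ideal modifier differs by no more than `tolerance`."""
    return all(abs(a[k] - b[k]) <= tolerance for k in a)

print(can_share_campaign(ideal_modifiers["kw_one"],
                         ideal_modifiers["kw_two"]))  # False: split them
```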

4) Bing by Itself

We will need to apply a completely different bidding algorithm to Bing data than we do to Google’s. Not only is the data different, the whole bidding mechanism is different, forcing different types of data modeling.

Bing has followed Google architecturally and may do so here, too, but it may take a while for them to pivot unless they were already moving in the same direction. In the meantime, porting campaigns from Google to Bing will become problematic.

One could argue that the frictional cost of advertising on both Google and Bing will increase for SMBs, and may cause less sophisticated folks to shave time from Bing management. Reduced competition could temporarily thwart some of Bing’s recent gains in market share.

What we need from Google:

  1. Connective material. Sophisticated advertisers need to be able to connect Google’s info about the click to their internal information about that user, and we need to have that info through the API from day 1.
  2. All the sliders. Geo, mobile vs. desktop/tablet, time-of-day, and day-of-week adjustment mechanisms are a start, but they give us fewer controls than we currently have. Let’s not step backwards in targeting capabilities. By the mandatory cut-over date we want to see more, to avoid losing hard-won efficiencies and thereby reducing the amount of money we can spend efficiently.

Conclusion

Whether or not we like this change, it is the new reality. The changes will significantly impact the way platform providers do what they do, the way paid search managers do their jobs, and the way performance optimization happens…and don’t even get me started on other changes to how smart advertisers should be thinking about traffic value to begin with; that’s a topic for another day!

Exciting times!

Comments
20 Responses to “Enhanced Campaigns: The New Bidding Challenges and Opportunities”
  1. Terry Whalen says:

    George – I haven’t finished reading the post yet, but this is very well put(!)

    “Version 1.0 of Enhanced Campaigns should rightly be viewed as a step towards simplification for SMBs and less sophisticated paid search marketers, and a step backwards — blunting the available controls — for more sophisticated players.

    “The initial range of modifications (time of day/day of week, geography, and device type) doesn’t give us any levers we don’t currently have through campaign replication, and in fact takes away some of the controls (tablet/desktop split, carriers, etc.) that we currently have.”

  2. Thanks Terry, I look forward to your thoughts. We do believe that in the long term this change was both necessary and a step in an exciting direction. The potential for more sophisticated targeting is huge and the campaign replication approach wasn’t a scalable answer to all the permutations possible. We’re hopeful for the future…and we’re hoping in particular that some of that future is realized before the mandatory cut-over!

  3. Thanks for sharing your unique perspective, George. I hadn’t really thought through the part about the two levers…

    To be honest, I have mixed feelings about this new model. I believe that for most smaller advertisers, enhanced campaigns are great and offer new opportunities. Even though the separate bid multipliers force us into a world of independent layers, I think this kind of thinking could improve the end result for many advertisers – not because this is a better model but because it works better with the tools that most advertisers have at their disposal.

    Many advertisers have followed Google’s advice to split out their campaigns into desktop and mobile. But bid management solutions available on the market treat the new mobile campaigns like fresh campaigns with no historic data to go on. So they start from scratch – even if there is a great deal of historic mobile data in the old campaigns. Over time, if there is enough data, this can work out, but in the beginning there is an investment to make.

    With enhanced campaigns, those systems will have to learn to look at segments and deal with bid adjustments. It’s not the perfect way to bid, but in many cases it should produce better results with sparse data.

    The way I see it, duplicated campaigns lead to a focus on individuality with no regard for commonalities while enhanced campaigns focus on commonalities and don’t allow for individuality. For most advertisers the question is, which model comes closer to reality.

    For advertisers who used to have the best of both worlds due to the ability to make connections and deal with commonalities on a large scale, this is of course a setback. On the other hand, from what you wrote about RKG’s abilities it sounds as if you will be in a unique position to rearrange things to make the best use of individual characteristics. I guess the game just goes on :)

  4. Martin, thank you for your thoughtful commentary; it’s always great to get your perspective. Your notion of commonality vs individuality is really well put. Sophisticated software can get the best of both worlds but only if we have controls that are sufficiently flexible. We can calculate what the targets should be, but Google has to give us the ability to address that granularity.

    Game on!

  5. Brian Bien says:

    Because Google has much to gain from maximizing the number of players in the AdWords game, simplifications that “level the playing field” will be in line with their goal. In Google’s ideal world, nobody has a competitive data advantage; Google is the omniscient provider of knowledge with no room for any middleman other than itself. Advanced features like click data provided via the API do not help to level the playing field. On the other hand, Google also has a conflicting interest in seeing that we understand the full value of our clicks to maximize our bids. Ultimately, I believe the interest in a level playing field trumps the latter.

    The ideal bidding tool would track dimensions not just across campaigns, but even across ad networks. It should look at both themes and settings as attributes (dimensions) that are independent of campaign structure, in order to make observations about data that would otherwise be too sparse when spread across multiple networks, further segmented by multiple campaigns. When campaigns are changed or merged due to Google’s changes, bid factors can be used to come up with the new expected values of keywords for the new campaign, based on previous knowns… even though the validity of this data will be imperfect in the context of the new campaign.

  6. George,
    I had an idea for how to address the granularity problem in bidding mobile keywords. It’s maybe a bit crazy and certainly impractical for most advertisers, but maybe not for RKG.

    In theory you could build a system to circumvent mobile adjustments and implement separate mobile bids again. The idea is based on Google’s preference for the keyword with the highest bid (or rather, ad rank) if there is more than one eligible keyword. It goes like this:
    - one desktop campaign with mobile modifier -100%
    - one “mobile” campaign with mobile modifier +/-0%

    For the “mobile” campaign not to get any desktop traffic, its bids would have to be lower than the desktop campaign’s bids.

    To bid higher than in the desktop campaign, you could use a second mobile campaign with a bid adjustment of +300%. In that campaign, desktop bids couldn’t go any higher than in the original desktop campaign, but mobile bids could be up to four times as high.

    A downside here is that this would mean bidding in steps of $0.04. To further refine this you could add more mobile campaigns with adjustments for +100% and +200%, and maybe some more in between.

    Depending on what you need the mobile bid to be, the system could select the campaign with the most suitable bid modifier, set the bid and pause (or bid down, whichever is faster) the same keyword in the other mobile campaigns.

    Now I know that this isn’t an exact science and that Google doesn’t always go for the highest bidding keyword, even if ads are (and therefore QS should be) the same. So you probably couldn’t use the first mobile campaign to bid to 98% of desktop without collateral damage. But maybe up to 80% or 90%, or cover as much as possible with the other mobile campaigns (with adjustments from +100% upward). I guess you could never eliminate the overlap entirely, but in essence the system wouldn’t have to be perfect, just better than any other solution.

    Anyway, just an idea I wanted to share…

    Brian, thanks for chiming in. I really think Google benefits from a “both-and” view. More competitors in the mix is a piece of the equation, in that a densely packed auction is more profitable. That said, allowing Lands End to pay a premium to reach a user who is ready to become an LE customer while letting JCP pay a premium for someone more inclined to their brand should give Google the best of both worlds. Online pure plays will pay more for online shoppers…brick-and-mortar chains might pay more for people disposed not to buy online.

    You’re right on target with respect to the power and use of dimensional analysis in bidding.

  8. Martin, you’re a sick, sick man. Why aren’t you working for RKG…yet?

    Yes we’ve been postulating ways to essentially beat the system in case Google doesn’t give us what we need. Your approach is quite ingenious. Ultimately, it would be a shame to have to resort to this type of complexity, and I’m hopeful that Google will see the benefits of giving us the info and controls we need to do this through their system rather than in spite of it.

  9. This is tempting, George. I’m working at a great agency right now, but if that ever changes RKG is probably the first place where I send my résumé :)

    By the way, are you going to SMX Seattle this year? My goal is to speak there (don’t know if that’s realistic..). Would be great to meet you in person!

  10. Look forward to seeing your resume :-)!

    I’m not currently planning to attend the Seattle show, but it isn’t out of the question. I’ll put in a good word for you with the speaker selection folks at SMX, you’d be great.

  11. Thank you George! This means a lot…

  12. Martin-
    I admire your evil genius ways. If you speak at SMX Adv, I’ll be in the second row. (I’m not nerdy enough for the first row.)

  13. Adam Audette says:

    Elizabeth, if you’re following this thread, you’re definitely plenty nerdy. :)
