Enhanced Campaigns: The New Bidding Challenges and Opportunities
The implications of Google’s new enhanced campaigns are far-reaching. It is the most significant structural change to the AdWords platform since “Premium Placements” went away in early 2004.
Version 1.0 of Enhanced Campaigns should rightly be viewed as a step towards simplification for SMBs and less sophisticated paid search marketers, and a step backwards — blunting the available controls — for more sophisticated players.
The initial range of modifications (time of day/day of week, geography, and device type) doesn’t give us any levers we don’t already have through campaign replication, and in fact takes away some of the controls (tablet/desktop split, carriers, etc.) that we currently have.
However, this architectural change makes it possible for Google to add more and more levers (screen size of device, demographics, browser, connection type, velocity of mobile device, user’s past behavior, etc.); we think that direction is both interesting and promising.
It also forces companies with proprietary technology, like RKG, to restructure their bid management systems in four fundamental ways.
1) Different data
Up to now, good bid management systems pulled in information from Google that only Google could know precisely (numbers of clicks, impressions, costs, average position, etc.) and combined it with other information known about the ad: the keyword, the adgroup, the campaign, all the relevant campaign settings and, of course, associated conversion metrics.
RKG’s system was designed to allow our analysts to attach any number of other attributes an ad might possess, shattering the rigid hierarchical data structure. For example: hierarchically structured systems don’t understand the connections between campaigns that might be closely related or even duplicates targeted differently. Moreover, we can create thematic linkages connecting ads, like “gift for a guy,” finding commonalities between drills and pizza ovens that other systems can’t catch.
Our algorithms are designed to study all these attributes of an ad to get a more complete picture of what impacts traffic value in what ways, a critical element of addressing the problem of sparse data.
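A minimal sketch of that flexible-attribute idea, with entirely hypothetical data and attribute names: each ad keeps its hierarchy fields but also carries an open-ended set of tags, so thematically linked ads can be found across campaigns.

```python
# Sketch of a flexible-attribute ad store (hypothetical data).
# Each ad keeps its hierarchy fields plus an open-ended tag set,
# so related ads can be queried across campaign boundaries.

ads = [
    {"keyword": "cordless drill", "campaign": "Tools",
     "tags": {"gift-for-guy", "hardware"}},
    {"keyword": "pizza oven", "campaign": "Kitchen",
     "tags": {"gift-for-guy", "cooking"}},
    {"keyword": "oven mitt", "campaign": "Kitchen",
     "tags": {"cooking"}},
]

def ads_with_tag(tag):
    """Find thematically linked ads regardless of campaign."""
    return [ad["keyword"] for ad in ads if tag in ad["tags"]]

print(ads_with_tag("gift-for-guy"))  # drills and pizza ovens together
```

The point of the sketch is that “gift-for-guy” cuts across the Tools and Kitchen campaigns, a linkage a strict account-campaign-adgroup hierarchy cannot express.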
In the new world of Enhanced Campaigns, bid management platforms will also need data from Google about the context of each click. We’ve already been able to glean some of it from click streams: the user’s search, referring domain, device type, operating system, browser, and IP address, and we’ve used that to inform campaign structures to the extent that Google gave us the ability to target.
However, there are elements Google knows that we can’t, which could (and hopefully will) be passed as well: geography beyond simple IP matching; screen size, since the distinction between a tablet and a phone is blurring; connection type; velocity of the device, since users within half a mile of a store going 60 mph are probably through traffic, while someone moving 2 mph is more likely to walk in; past user behavior…the possibilities are endless and really interesting.
We’ll need to measure the impact each of these variables has on the value of traffic to the advertiser so that we can set each exposed modifier appropriately.
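One way to frame that measurement, as a sketch with made-up numbers: a modifier for a context variable is just the percentage by which that segment’s observed value per click deviates from the keyword’s overall average.

```python
# Sketch: derive a bid modifier for one context variable from
# observed per-click values (illustrative numbers, not real data).

def modifier(segment_value, overall_value):
    """Percent adjustment so the segment's bid tracks its value."""
    return segment_value / overall_value - 1.0

overall = 3.00   # average value per click for the keyword
tablet = 3.60    # average value per click observed on tablets

print(f"tablet modifier: {modifier(tablet, overall):+.0%}")  # +20%
```

In practice sparse data means these segment values need modeling rather than naive averaging, but the target quantity is the same ratio.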
RKG’s system of flexible attributes lines up well with the new world order. We know all kinds of attributes associated with the keyword, and marrying that with information about how the context of the click impacts performance is right in our wheelhouse, since we already have applicable data modeling algorithms. Platforms wedded to account-campaign-adgroup hierarchy may have to effectively rebuild from scratch to do this well.
There are some important changes Google has to implement to make this happen. They have to append the Google Click ID to the URL so that we can connect their data about that click to our knowledge of what happened later. Our ability to connect to marketing touch paths and our clients’ back-end systems to identify customer types, offline conversion metrics, lead valuations, returns, margins, etc. is a critical piece. Google hasn’t built that piece just yet.
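The mechanics of that join are simple once the Click ID lands in the URL. A sketch, assuming Google’s auto-tagging `gclid` parameter and a hypothetical back-end order store keyed by it:

```python
# Sketch: joining Google's click ID (the "gclid" parameter appended
# under auto-tagging) to back-end outcomes. The order store and its
# fields are hypothetical.
from urllib.parse import urlparse, parse_qs

def extract_gclid(landing_url):
    """Pull the gclid parameter off a landing-page URL, if present."""
    params = parse_qs(urlparse(landing_url).query)
    return params.get("gclid", [None])[0]

# Hypothetical back-end data keyed by click ID: margin, returns, etc.
backend_orders = {"Cj0abc123": {"revenue": 80.0, "returned": False}}

url = "https://example.com/product?gclid=Cj0abc123"
order = backend_orders.get(extract_gclid(url))
# Now Google's click context can be tied to what happened later.
```

The missing piece the post describes is the other direction: getting Google’s per-click context (screen size, connection type, etc.) keyed by that same ID.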
Moreover, Google will have to make all the necessary data AND controls available through the API so that the sophisticated players can play the game well. Google asking advertisers to trust them with conversion metrics and also trust them to spend the advertiser’s money in the optimal fashion would seem like a big leap, in addition to a potential anti-competitive practice in the eyes of the DOJ. We’re pretty confident that it will all play out the right way, but it’s important to note that where it is and where it needs to go are quite different at the moment.
2) Two levers instead of one
There is a second, quite interesting challenge here, though. We have to make bid adjustments in two places instead of one.
Let’s look at this in a greatly simplified system.
Suppose Google gives us a new lever to differentiate between left-handed and right-handed people (I’m hoping this is an absurd example, but it might not be). Simultaneously, they start identifying, through the Click ID data, which users are thought to be left-handed vs right-handed, so that we can figure out how that impacts both website conversion and post-transaction behavior of those groups.
One group might convert better on site, but have a much higher return rate or lifetime value for some reason, and we’d need to be able to study all of that to make good bidding decisions.
We measure the average value of traffic for the keyword “foo bar” to be $3 and the advertiser is willing to spend 1/3 of that value on marketing. That gets us a base bid of $1.
We know that for the campaign associated with “foo bar,” traffic from left-handed people is 30% more valuable than average, while traffic from right-handed people converts 30% worse than average.
We set our bid to $1, our left-handed adjustment slider to +30% and our right-handed adjustment to -30%.
Smarter bidding means we get more of the higher value traffic and less of the lower value traffic which is exactly what we wanted.
HOWEVER, changing the mixture means the average value of the traffic from that keyword has increased, which pushes the base bid higher. That in turn means the modifier for left-handed users is now set too high, and the right-handed modifier isn’t low enough.
If you don’t also adjust the modifiers continuously, the bids will increase beyond the threshold tolerated by the advertiser. Instead of setting a bid for an ad, we have to set the bid and reset the modifiers at the same time to deal with changes in the traffic value caused by the modifiers themselves.
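The handedness example above can be worked through numerically; the numbers here are illustrative, but the feedback is the point: the modifiers shift the traffic mix, the mix shifts the average value, and the average value is what both the base bid and the modifiers are defined against.

```python
# Worked version of the left/right-handed example (illustrative
# numbers): base bid and modifiers must be solved together, because
# the modifiers change the traffic mix and hence the average value.

def blended_value(values, shares):
    """Traffic-weighted average value per click."""
    return sum(v * s for v, s in zip(values, shares))

left_val, right_val = 3.90, 2.10   # +30% / -30% around a $3 average

# Before modifiers: 50/50 mix -> $3.00 average, base bid $1.00
avg0 = blended_value([left_val, right_val], [0.5, 0.5])
bid0 = avg0 / 3                    # advertiser spends 1/3 of value

# The +30%/-30% modifiers shift the mix toward left-handed traffic,
# say to 65/35. The average value rises to $3.27...
avg1 = blended_value([left_val, right_val], [0.65, 0.35])
bid1 = avg1 / 3                    # ...and the base bid drifts up

# Against the new $3.27 average, +/-30% is no longer right: the left
# modifier should fall to about +19%, the right to about -36%.
left_mod = left_val / avg1 - 1
right_mod = right_val / avg1 - 1
```

Left uncorrected, the higher base bid stacked with the stale +30% modifier overbids on exactly the traffic the advertiser most wants, which is why bid and modifiers have to be reset in the same pass.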
Modifiers act both individually and in combination, yet can only be set independently. This is awkward: some combinations may show dependencies that an independent multiplier model can’t address.
The fact that the modifiers live at the campaign level adds other challenges. The modifiers will have a different impact on traffic volume for different keywords, so the blended average impact for the campaign differs from that of the individual adgroups and keywords within it, making targeting less precise than ideal.
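A small sketch of that blending problem, with made-up numbers: two keywords in one campaign have very different ideal mobile modifiers, but the campaign can only carry one, which is effectively a traffic-weighted compromise.

```python
# Sketch: one campaign-level modifier vs per-keyword ideals
# (illustrative numbers).

keywords = {
    # keyword: (clicks, ideal mobile modifier)
    "foo bar": (900, +0.25),
    "baz qux": (100, -0.40),
}

total_clicks = sum(clicks for clicks, _ in keywords.values())
campaign_mod = sum(clicks * mod for clicks, mod in keywords.values()) / total_clicks
# campaign_mod works out to about +18.5%: close to right for the
# high-volume "foo bar", badly wrong for "baz qux".
```

The low-volume keyword gets dragged along by the high-volume one, which is exactly the precision loss the campaign-level placement of modifiers creates.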
3) Rethinking campaign structures
Paid search managers currently configure campaigns based on thematic similarities (category, subcategory, destination, whatever) and shared settings (network targeting, geography, device, yadda yadda). As data comes in, we might need to further refine this to include similarities in the way modifiers need to be set.
Keywords that used to be in the same campaign because of thematic and setting needs might now be separated because the traffic firing those ads behaves very differently depending on, say, screen size: one converts badly on small tablets, the other converts well on them. We won’t know until we see the data, but I can see this requiring some real rethinking of how we create campaigns.
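One crude way to picture that restructuring, as a sketch over hypothetical data: group keywords by the direction of their ideal modifier rather than by theme alone.

```python
# Sketch: splitting keywords into campaigns by modifier similarity
# rather than theme (hypothetical data; crude sign-based grouping).

profiles = {
    # keyword: ideal small-tablet modifier
    "drill bits": -0.30,   # converts badly on small tablets
    "drill press": -0.25,
    "pizza stone": +0.20,  # converts well on small tablets
    "pizza peel": +0.15,
}

campaigns = {}
for kw, mod in profiles.items():
    bucket = "tablet-down" if mod < 0 else "tablet-up"
    campaigns.setdefault(bucket, []).append(kw)
# Keywords land with others that need similar settings, theme or not.
```

A real system would cluster over the full vector of modifiers, but even this toy version shows thematic neighbors ending up in different campaigns.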
4) Bing by Itself
We will need to apply a completely different bidding algorithm to Bing data than we do to Google. Not only is the data different, the whole bidding mechanism is different forcing different types of data modeling.
Bing has followed Google architecturally and may do so here, too, but it may take a while for them to pivot unless they were already moving in the same direction. In the meantime, porting campaigns from Google to Bing will become problematic.
One could argue that the frictional cost of advertising on both Google and Bing will increase for SMBs and may cause less sophisticated folks to shave time from Bing management. Reduced competition could temporarily thwart some of Bing’s recent gains in market share.
What we need from Google:
- Connective material. Sophisticated advertisers need to be able to connect Google’s info about the click to their internal information about that user, and we need to have that info through the API from day 1.
- All the sliders. Geo, mobile vs desktop/tablet, time-of-day and day-of-week adjustment mechanisms are a start, but they give us fewer controls than we currently have. Let’s not step backwards in targeting capabilities. By the mandatory cut-over date we want to see more, to avoid losing hard-won efficiencies and thereby reducing the amount of money we can spend efficiently.
Whether or not we like this change, it is the new reality. It will significantly impact the way platform providers do what they do, the way paid search managers do their jobs and the way performance optimization happens…and don’t even get me started on other changes to how smart advertisers should be thinking about traffic value to begin with; that’s a topic for another day!