
Smart Bidding Requires Smart Clusters

My monthly column for Search Engine Land, in case you missed it.

Effective bidding systems set bids based on expected revenue, and do so at the most atomic level possible. We set bids at the ad level when there are multiple versions of a keyword running — representing different geographies, match-types, syndication settings, ad copy, landing pages, whatever — if we have enough data to distinguish between them. When the system doesn’t have statistically significant data at the most granular level it has to cluster ads together, aggregating data to predict the revenue generated from a click on any of those ads.

Since a typical campaign has few ads that generate statistically significant traffic within a reasonable window of time, the clustering mechanisms turn out to be enormously important to the overall performance of a PPC program.

The goal is to set a bid for each cluster that best predicts the value of traffic from each ad in the cluster, so the more closely related the ads are, the better the model works.
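To make the idea concrete, here is a minimal sketch of that fallback logic in Python. The click threshold, the cost-to-revenue target, and the function names are illustrative assumptions, not a description of any particular bidding system.

```python
# Sketch: bid from the most granular level that has enough data, and
# fall back to the ad's cluster when the ad itself is too thin.
# MIN_CLICKS and TARGET_COST_TO_REVENUE are assumed values for illustration.

MIN_CLICKS = 200                 # assumed significance threshold
TARGET_COST_TO_REVENUE = 0.10    # assumed efficiency goal: spend 10% of revenue

def value_per_click(clicks, revenue):
    """Observed revenue per click for an ad or a cluster of ads."""
    return revenue / clicks if clicks else 0.0

def set_bid(ad, cluster):
    """ad and cluster are dicts holding 'clicks' and 'revenue' totals."""
    if ad["clicks"] >= MIN_CLICKS:
        vpc = value_per_click(ad["clicks"], ad["revenue"])            # ad-level data
    else:
        vpc = value_per_click(cluster["clicks"], cluster["revenue"])  # borrow from the cluster
    return round(vpc * TARGET_COST_TO_REVENUE, 2)

# A thin ad inherits its cluster's expected value per click.
ad = {"clicks": 12, "revenue": 40.0}
cluster = {"clicks": 3500, "revenue": 9100.0}
print(set_bid(ad, cluster))   # 0.26
```

The arithmetic is the easy part; the hard part is deciding which ads belong in the cluster, which is what the rest of this piece is about.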

So how do we know that keywords are related to each other? Furthermore, how do we know which relationships are actually predictive and which aren’t?

There are two ways to think about keyword relationships:

  1. The first and most commonly used is the taxonomy approach. Keywords are related to certain product categories, sub-categories, manufacturer brands, etc. The most closely related keywords are in the same adgroup, the most closely related adgroups share a campaign, and some go the extra mile to have multiple engine accounts, with each account holding campaigns that relate to one another. There are a number of problems with this approach:
    • First, by tying the analysis to the account structure you're forced to pay a great deal of attention to which keywords go in which adgroups and how those adgroups are laid out (e.g., is an adgroup defined by sub-category and manufacturer brand, or by sub-category, manufacturer brand and gender?).
    • Creating new keywords becomes incredibly laborious because finding the right spot to put them takes time — it’s kind of like taking a random stack of books and having to put them in the right place on the library shelves (dating myself, here!).
    • Finally, because the engine hierarchy offers at most three levels (account, campaign, adgroup), the clustering mechanism can only aggregate data at two or three levels of granularity. This is kind of like having dresses that only come in three sizes; they won't fit like a tailor-made dress, and the result is overspending on poor performers and missed opportunity on winners.
    • There is a more fundamental problem with the taxonomy approach, however: it's rigid. Even if you had infinitely many layers, the connections between clusters are set in stone by the hierarchy. Say, for example, you sell Consumer Electronics. You might have a separate campaign for each product category and a separate adgroup within the Televisions campaign for each manufacturer brand, but what do you do with the different types of televisions (flat screen, LCD, plasma, projection, etc.)? What about the different sizes?

      Since the adgroups define the ad copy, the right approach is to split the adgroups into small clusters, like "Sony flat screen - big", "Sony flat screen - medium", etc. But now you have a different problem when you start aggregating data. If the adgroups are tightly defined, the next level up encompasses too much (all TVs lumped together), and you lose the ability to cluster data by just manufacturer and sub-category. You can't track the performance of large flat-screen TVs across all brands, which might all get a bump near the Super Bowl, or of a particular manufacturer brand across all product categories.

  2. The right way to classify terms is not with a taxonomy, but with flexible attributes. Keywords in reality can fit into an infinite number of groups, and while the ad copy clusters might need to be one way, the analytics might suggest that other clusters are more predictive for bidding purposes. For an apparel retailer, keyword attributes might reflect the type of clothing, gender, material, color, designer/manufacturer, style, discount intent ("cheap", "discount", "outlet"), etc. For an electronics retailer, in addition to the obvious attributes, tagging keywords as SKU-specific or not can prove enormously informative for bidding.

    Splitting the account structure into micro adgroups doesn’t solve the problem. You need to be able to analyze performance across any dimension or combination of dimensions, and rigid hierarchies simply don’t permit this.

    Because the account structure is inadequate for capturing all the attributes of keywords, that information must be databased separately by the PPC agency or your in-house team, in such a way that keywords can have any number of attributes, and attributes can be defined and created after the fact: "Oh, shoot, I'd like to tag seasonal offerings as a separate set of attributes so I can see at a glance how all my Valentine's Day related terms are performing, or how the subset of V-day discount terms are doing." A minimal sketch of what that might look like appears just below.
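Here is that sketch: keyword attributes databased outside the engine hierarchy, queryable along any combination of dimensions. The attribute names and the in-memory dictionary are assumptions for illustration; a real system would keep this in a proper database.

```python
# Each keyword carries an open-ended set of attribute key/value pairs,
# independent of which campaign or adgroup holds it at the engine.
keywords = {
    "sony 42 inch flat screen tv": {"category": "tv", "brand": "sony",
                                    "type": "flat screen", "size": "large"},
    "cheap plasma tv":             {"category": "tv", "type": "plasma",
                                    "discount": True},
    "valentines day ruby pendant": {"category": "jewelry", "stone": "ruby",
                                    "seasonal": "valentines"},
}

def select(**criteria):
    """Return the keywords matching every attribute given, across any dimensions."""
    return [kw for kw, attrs in keywords.items()
            if all(attrs.get(k) == v for k, v in criteria.items())]

# Cluster across dimensions the adgroup hierarchy can't express.
print(select(category="tv", type="flat screen", size="large"))
print(select(seasonal="valentines"))

# Attributes can be added after the fact without touching the account structure.
keywords["cheap plasma tv"]["seasonal"] = "superbowl"
```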

With this detailed information about keywords, an advanced bidding system can essentially learn which combination of attributes best defines a close relationship, and how to set bids correctly on the middle- to low-traffic ads. The structure also allows smart analysts to easily tune bids up or down in anticipation of performance changes that the algorithm doesn't know are coming: promotions, stock positions, co-op advertising dollars on certain brands, seasonality (birthstone months across different types of jewelry), etc.
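One way a system might "learn" which combination of attributes defines a close relationship is to test candidate groupings against a held-out period and keep the one whose clusters predict value per click best. The sketch below is a simplified illustration under assumed data shapes, not a description of any vendor's actual algorithm.

```python
from collections import defaultdict

def cluster_stats(rows, attrs):
    """Aggregate clicks and revenue by the chosen combination of attributes."""
    totals = defaultdict(lambda: {"clicks": 0, "revenue": 0.0})
    for r in rows:
        key = tuple(r["attrs"].get(a) for a in attrs)
        totals[key]["clicks"] += r["clicks"]
        totals[key]["revenue"] += r["revenue"]
    return totals

def prediction_error(train, test, attrs):
    """Mean absolute error of cluster-level value-per-click predictions on held-out data."""
    stats = cluster_stats(train, attrs)
    errors = []
    for r in test:
        s = stats.get(tuple(r["attrs"].get(a) for a in attrs))
        if s and s["clicks"] and r["clicks"]:
            errors.append(abs(s["revenue"] / s["clicks"] - r["revenue"] / r["clicks"]))
    return sum(errors) / len(errors) if errors else float("inf")

# Toy history: which grouping predicts next period's value per click best?
train = [
    {"attrs": {"category": "tv", "brand": "sony"},     "clicks": 400, "revenue": 1200.0},
    {"attrs": {"category": "tv", "brand": "lg"},       "clicks": 300, "revenue": 450.0},
    {"attrs": {"category": "camera", "brand": "sony"}, "clicks": 200, "revenue": 800.0},
]
test = [
    {"attrs": {"category": "tv", "brand": "sony"},     "clicks": 100, "revenue": 310.0},
    {"attrs": {"category": "tv", "brand": "lg"},       "clicks":  90, "revenue": 140.0},
]
candidates = [("category",), ("category", "brand")]
print(min(candidates, key=lambda c: prediction_error(train, test, c)))  # ('category', 'brand')
```

Finer groupings fit the data better but leave each cluster with less traffic, so in practice there is a tradeoff between predictiveness and sample size.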

Without attribute tagging, analytic power is limited, bidding algorithms have less information to use for clustering, and analysts have a harder time fine-tuning bids to anticipate conversion rate changes. The details matter. Make sure your team has the right data structures in place to maximize campaign performance.


Comments
9 Responses to “Smart Bidding Requires Smart Clusters”
  1. Tom Demers says:

    Outstanding post. One of my major frustrations with a lot of bidding solutions I’ve touched is their inability to cull a significant sample size before making decisions. The idea of needing totally separate databases for Group structure/ad text and bidding is really interesting. You guys always produce really useful/thought provoking content, thanks!

  2. Thanks Tom!

    I agree, the tours I've taken of the available bid tools have left me scratching my head. One system talks about setting bids on KW that have at least 30 clicks in the last week. Really?!? One week's worth of data?!? 30 clicks?!? Most of our clients have a ~2% conversion rate, so 30 clicks means well under one expected order; at that sample size the observed value per click is mostly noise. Using a 30-click standard will result in a ton of thrash bidding. Totally irrational.

    The tools are certainly an improvement over manual bidding, but I don’t see how they can be used effectively as an enterprise solution.

  3. derek.newman says:

    Hi George,

    Thanks for more tantalizing details about the RKG bid methodology and system. I grok about 20% of it.

    Do you have QS issues as you move keywords/adgroups around? Don't you lose history?

    The stats you use are way beyond me but I think I get why you store all of your decision making data in your in-house database.

    You wrote this some time back:
    “More importantly, from an analysis perspective, campaigns and adgroups play no critical role in our system. We database everything on our side, so analyzing data by product category, manufacturer, color, gender or some combination thereof is a breeze.”

    You also wrote about the 100/1000 click bucket system you use. And the brand keyword audit stuff was useful as well.

    Do you think a small advertiser can use elements of your approach – without the API and without rocket-science statistics?

    Thanks

    PS I'd love to see all your PPC management tips collated into a guidebook for new players. I know your clients are Top 500 companies, but your approach IMO is sufficiently different to appeal to a broad audience of PPC practitioners.

  4. Derek, thanks so much for the kind words!

    We’ve kicked around the notion of a book, but haven’t found the time to make it happen.

    We definitely believe small players using spreadsheets benefit by applying these principles as best they can. Ryan Gibson wrote one of our most popular posts ever on Do-it-yourself-PPC.

    With respect to your QS question, you're right, it can be an issue. We tend to find that tightly clustered adgroups do fairly well, and for important KW additions, stuffing them into the right adgroup can be worth the extra time so that the adgroup's QS history can compensate for the lack of KW QS until the keyword establishes its own history. This is more important for low-traffic KWs than high-traffic ones, as the QS for high-traffic terms rebounds very quickly.

  5. Bree Nguyen says:

    Great post – It’s hard to find good articles on Advanced SEM Tactics, but you definitely have ‘em. Thanks.

  6. Thanks Bree, we try! :-)

  7. Will says:

    Very interesting ideas presented here.

    I would love to hear how you guys work within the constraints of the engines’ structure. Do you just toss every keyword into one adgroup in one campaign, or do you go the opposite route and create an adgroup for every keyword?

  8. Thanks Will.

    As much as we'd love to blow off the engines' structure (we'd love it if they adopted a flat hierarchy where each ad had its own ad text), unfortunately we can't.

    In the first place the adgroup structure defines the ad copy, so you have to parse the groups wisely to create well-targeted ad copy. This helps Quality Score by improving the actual CTR and the estimated CTR of new terms added to the groups.

    We used to just create new campaigns with each new launch of KWs, since that really doesn't impact performance and saves some time. But we've started to follow the engines' guidelines more, just to head off time-wasting discussions in which someone tells us our account is a mess and we have to explain that, because we database everything on our side, it doesn't matter. Ultimately it doesn't take much more time to keep everything neat and orderly, and it reassures our clients that we're following "best practices" even when those practices are a bit silly.

Trackbacks
Check out what others are saying...
  1. [...] believe making the portfolio model work at the highest level requires infinitely many well considered, thoughtfully assigned attributes. Simply aggregating data across a collection of attributes — eg: all the low traffic terms [...]