Sometime soon Forrester will release the latest version of their SEM agency rankings, and once again the Rimm-Kaufman Group will fail to make the cut. The reason? Forrester only ranks agencies that offer both paid search and organic search optimization. They exclude specialists like RKG, didit and Efficient Frontier on the paid search side, and Net Concepts, Audette Media and other fine specialty shops on the SEO side.
But even if they did include the specialists, on what basis would they make their judgments? Do they look at ad-level data from each agency's clients to find out whether the programs are well-managed? Do they determine whether, within that granular data, the efficiency targets set by the clients have been reached? Do they aggregate the low-traffic granular data to determine whether the "tail" is being managed properly? Do they evaluate the depth and coverage of KW lists, or the match between KW, landing page, and ad text across the portfolio? Do they find out whether under each agency's management competitive (aka non-brand) search has grown as a percentage of total site revenue? Do they look at any data at all? Would they even know how to evaluate that data if they did see it?
Client satisfaction surveys might say something, but we've certainly run into advertisers who are delighted with their current agency even though their program is demonstrably broken.
We've sometimes griped that when we take over management of an account the engines should wipe the account and domain Quality Score history clean and start with an assumption that the QS will be much better going forward, given our agency's track record.
But do the engines even know which agencies are good or bad? The engines could certainly speak to which agencies churn through clients. They probably have a sense of which agencies' analysts are consistently knowledgeable and stick around for a good while. And, I suppose there might be some sort of average QS metric that could be measured across an agency's accounts that would provide some insight into how well they write copy -- not a great metric, as an agency writing "Free puppies with every order" may generate terrific Quality Scores while doing a great disservice to their clients who don't offer free puppies. The engines could also track the number or percentage of an agency's client-facing staff that have passed a certification exam.
None of these are really adequate measures of excellence, though.
The engines suffer the same basic problem that Forrester suffers: absent granular conversion data, it's hard to evaluate a program or the agency that put it together.
As an agency we get to see all the gory details when we take over programs built by other agencies. We get to see the sometimes awful KW lists, ad copy and landing page choices. We see the inartful use of match-types and negatives. We often get to see the granular performance data showing incompetent bid management -- recently, we took over an account where all the bids were in increments of 25 cents! :-) As a result we have a very good sense of which agencies are awful (some of them are rated highly every year by Forrester!). We see many of their former clients because they churn through them, and the programs are in consistently lousy shape. We wish there were a way to "out" them as a service to advertisers and other competent agencies, but haven't figured out a way to do so that wouldn't get us sued.
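As an aside, a tell like the 25-cent bids is easy to spot programmatically. Here's a minimal sketch -- purely illustrative, with a made-up function and made-up bid data, not any agency's actual audit tooling -- of flagging accounts whose bids cluster on coarse increments:

```python
# Illustrative only: measure what fraction of an account's bids fall on
# coarse increments (e.g., exact multiples of 25 cents). A share near 1.0
# across thousands of keywords suggests bids set by hand in lazy steps
# rather than tuned to the penny from conversion data.

def coarse_increment_share(bids_cents, increment=25):
    """Fraction of bids that are exact multiples of `increment` cents."""
    if not bids_cents:
        return 0.0
    hits = sum(1 for b in bids_cents if b % increment == 0)
    return hits / len(bids_cents)

# Hypothetical "taken-over" account: every bid a multiple of 25 cents.
suspect = [25, 50, 75, 100, 125, 150]
# Hypothetical granularly managed account: bids tuned to the penny.
tuned = [37, 52, 118, 41, 263, 99]

print(coarse_increment_share(suspect))  # 1.0
print(coarse_increment_share(tuned))    # 0.0
```

In practice you'd want a larger sample and a baseline (some bids land on round numbers by chance), but the point stands: the evidence of sloppy management is sitting right there in the granular data.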
By the same token, we occasionally see programs that are in quite good shape, and form positive impressions of those agencies as a result. We don't see this often for the simple reason that advertisers whose programs are in good shape rarely shop around. We have favorable impressions of the agencies mentioned above in the paid search category, but not many others. There are sizable agencies whose programs we've never gotten to see. Likely, this means they're very good, but we just don't know for sure.
So, while we could put together a list of agencies to avoid at all costs, we'd be hard pressed to rank the best of our competitors simply because we don't get to look under the hood of their programs very often. I suspect the SEO firms would say the same thing.
If we can't rank each other, and the engines can't rank the agencies very well, certainly no third party can do the job adequately. The level of depth that needs to be studied and the degree of expertise needed to reach the right conclusions simply aren't there.
It all comes back to reputation among clients, and the depth of knowledge of those clients. A reference who thinks their agency is great is one thing; references who know their stuff and think their agency is great mean quite a bit more.