THE RKGBLOG

Interview with Lance Loveday

It was a real treat for me to meet Lance Loveday at Shop.org Annual last fall. He’s been a favorite columnist of mine for some time, and getting to know him since then has been a pleasure. Closed Loop Marketing specializes in conversion optimization services, and Lance’s team is on the leading edge of fine-tuning the shopping experience.

Lance shares our “No Bull” approach to marketing his services and I’m glad to share this Q & A with our readers.

George: How do you know when your site isn’t converting as well as it could?

Lance: One potential sign that your conversion rate is putting you at a competitive disadvantage is if competitors are always showing up in the first 2-3 spots in the paid search results, but your ROI goals require you to show up lower on the page. While they could be spending blindly, it could also be a sign that competitors are seeing higher conversion rates. Because higher conversion = lower CPA = higher margin = higher CPA tolerance = higher bids = you get outranked.
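
To make that chain concrete, here’s a hypothetical back-of-the-envelope sketch in Python. The margin and conversion rates are invented for illustration, not figures from the interview:

```python
# Hypothetical numbers, for illustration only.
PROFIT_PER_SALE = 50.0  # margin earned on each conversion ($)

def max_cpc_bid(conversion_rate, target_roi=1.0):
    """Highest cost-per-click that still hits the ROI goal.

    CPA = CPC / conversion_rate, so keeping CPA at or below the
    affordable level means bidding at most conversion_rate * CPA.
    """
    affordable_cpa = PROFIT_PER_SALE / target_roi
    return conversion_rate * affordable_cpa

print(max_cpc_bid(0.02))  # 2% conversion -> $1.00 max bid
print(max_cpc_bid(0.04))  # 4% conversion -> $2.00 max bid
```

Double the conversion rate and the affordable bid doubles too, which is exactly how a better-converting competitor can outrank you profitably.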

But regardless of where you’re starting from, our approach is to never be satisfied, and always seek incremental improvement. Why settle for an average conversion rate when you could have an above average one – or an above average conversion rate when you could have an industry-leading one? The beauty of conversion optimization is that it’s a one-time cost with an ongoing benefit, so you don’t have to move the needle much on a conversion engagement to have a positive ROI when you have the benefit of time on your side.
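
As a quick sketch of that “one-time cost, ongoing benefit” math (all numbers are assumptions, not client figures):

```python
# Hypothetical payback math for a one-time optimization project.
PROJECT_COST = 40_000.0     # one-time engagement fee ($), assumed
MONTHLY_PROFIT = 100_000.0  # baseline monthly profit ($), assumed
LIFT = 0.05                 # a modest 5% conversion lift

extra_profit_per_month = MONTHLY_PROFIT * LIFT
payback_months = PROJECT_COST / extra_profit_per_month
print(f"Break-even after {payback_months:.0f} months")  # 8 months
# Every month after break-even, the lift is pure incremental return.
```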

We try to realistically model the projected outcomes of projects for clients to set expectations on the range of results they could expect to see. Although we try to be conservative with our estimates (better to under-promise and over-deliver), it can be a bit scary for us to put those numbers out there, because that’s usually what becomes the benchmark for success for the project. But we like a challenge, and there’s nothing like the concreteness of a defined target to help bring focus to a project.

George: In assessing a site, how should we think about the roles of KPIs vs. competitive benchmarks vs. user studies?

Lance: We like to use a combination of metrics to gauge success, as relying on too few metrics can lead to unintended consequences. Enron’s earnings, for example, looked great right up until they went under. So an effective dashboard should contain a balance of business metrics (revenue, transactions, margins), site metrics (conversion rate, bounce rate – by source), and user metrics (usability testing results, long-term survey response trends). We don’t put much stock in competitive conversion rate benchmarks, as variances in business models, traffic mix, and site strategy almost always result in an apples-to-oranges comparison. A low-converting site can be very profitable, which is ultimately the most important competitive metric.
We also like to take a funnel view of the site as a whole and measure throughput at various levels. Not all sites lend themselves to that kind of analysis, but it’s an incredibly powerful way of looking at your site if you have the capability. Taking that view allows us to model out the impact that small changes to a given page template can have on overall results.
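
To illustrate the funnel math Lance describes, here’s a simplified throughput model with made-up stage rates; real funnels and numbers will differ per site:

```python
# Simplified funnel model; each value is the share of visitors
# who survive that step (made-up rates for illustration).
funnel = {
    "home -> category": 0.60,
    "category -> product": 0.50,
    "product -> cart": 0.20,
    "cart -> checkout": 0.50,
    "checkout -> order": 0.60,
}

def overall_conversion(rates):
    """Overall conversion is the product of every step's rate."""
    result = 1.0
    for rate in rates.values():
        result *= rate
    return result

base = overall_conversion(funnel)       # 1.80%
funnel["product -> cart"] = 0.24        # +20% on one page template
improved = overall_conversion(funnel)   # 2.16%
print(f"{base:.2%} -> {improved:.2%}")
```

A 20% improvement at a single template step flows straight through to a 20% lift in overall conversion, which is why this view is so useful for modeling page-level changes.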

George: If my website isn’t converting well, am I better served by doing a complete redesign, or by taking an incremental test-driven approach to improve each template?

Lance: In a majority of cases it makes sense to take the more incremental approach. There’s almost always some upside to be found in every site. But we have worked with a few sites where the problems have been so structural that we’ve recommended a full-scale redesign.

George: How much can one raise the bar by moving the pixels around on the page? I know when we were in the design consulting business we often found it a challenge to measurably improve conversion rates unless the old design was really bad.

Lance: You’re touching on the core stumbling block we face as conversion consultants: Most people don’t believe that small interface tweaks (pushing pixels) can have a material impact on business results (more money). But a whole host of studies, including this one, and our own experience have shown otherwise.

For example, we just doubled conversion for a long-time lead generation client of ours, for whom we’d previously tripled conversion. So ultimately they increased lead volume by 6X with no loss of lead quality. Granted, that didn’t happen overnight. But it’s a good example of the kind of game-changing impact you can have with a well-executed conversion optimization initiative. The average e-commerce site may not have that kind of upside potential, but it’s not uncommon for us to increase conversion by 30-50% on e-commerce sites.

George: Wow! 30% is a big lift!

Lance: One problem we’re seeing is that people are focusing so much on the sexiness of the new testing technologies that they are rushing to conduct tests without ensuring they’re doing it properly – and then, when they get mediocre results, concluding that either a) there’s no room for improvement, or b) testing doesn’t work. But like anything, tests can be done poorly or done well. Doing it right means clearly identifying objectives, developing good testing plans, deploying the right kind of test method, testing user experiences (not just design tweaks) that have a high probability of success, ensuring statistically significant results, and so on. It takes a lot of different skill sets all working together to do testing well – strategic, technical, creative, analytic and more. Sorry, I’ll get off the soapbox now.
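
On the “ensuring statistically significant results” point, here’s a minimal two-proportion z-test sketch (standard textbook statistics, not any particular testing platform’s method):

```python
import math

def ab_test_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test for an A/B conversion test.

    Returns the p-value; below ~0.05 is the usual significance bar.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

# 200 conversions on 10,000 control visits vs. 250 on 10,000 variant visits:
print(ab_test_p_value(200, 10_000, 250, 10_000))  # ~0.017 -> significant
```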

Also, I prefer to be called a Site Whisperer. Pixel Pusher is just so demeaning.

George: Got it, “Site Whisperer” it is! :-)

George: Is there a particular piece of a website that is most often the stumbling block for conversion? Search results? Category pages? Product pages? Navigation? Shopping tools? The checkout process?

Lance: It can vary by site, but the checkout process is a common place to find very high dropoff rates. Depending on which stats you want to believe, the average shopping cart abandonment rate (across all industries) is 50-60%. My eyes get big any time I see a cart abandonment rate of 50% or higher, as that’s usually an indicator that we’re going to be able to make a huge difference for that client.
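
For reference, cart abandonment is typically computed along these lines (illustrative numbers; exact definitions vary across analytics tools):

```python
# Illustrative numbers; definitions vary by analytics package.
carts_created = 1_000
orders_completed = 420

abandonment_rate = 1 - orders_completed / carts_created
print(f"{abandonment_rate:.0%}")  # 58%, inside the 50-60% range cited above
```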

But we’ve seen plenty of examples where the category pages and/or product pages were the primary bottleneck as well.

George: What design flaw do you run across most often?

Lance: This is more of a strategy flaw than a design flaw, but I’ll go with requiring a separate login/registration step. It’s a real conversion killer from a user experience standpoint, and it usually has limited value to the site owner, since users generally provide their email address later in the checkout/registration process anyway.

George: How often do you find that platform constraints are ultimately the root of design problems?

Lance: More often than I’d like. But I’m really encouraged by some of the new testing platforms that enable us to bypass platform constraints. There are now some really elegant ways to run tests on your site and increase conversion regardless of what platform you’re on. We’re running a test on a client’s shopping cart now that has required zero IT support or back-end changes.

George: How important is speed these days, and in your view is this mostly a “solved problem?”

Lance: NO WAY is this a solved problem. Load time is still a major conversion inhibitor for a number of sites – and its impact on conversion is growing as user expectations continue to evolve. This release from Akamai is about the link between site performance and customer satisfaction for financial services firms, but the lessons apply to all types of sites.

George: Any other general advice you can give?

Lance: Here’s my attempt at a Top 10 list:

  1. Start small. But start.
  2. Don’t test for the sake of testing. Test for impact and learning.
  3. Test hypotheses, not opinions.
  4. Don’t underestimate the power of persuasive design.
  5. Spend 90% of your time in planning/strategy/test design/crafting great designs, and then 10% of your time running the test.
  6. Test among good options.
  7. Don’t settle for the 5% gain if a 50% gain is possible.
  8. Be willing to be wrong.
  9. Get out of your comfort zone.
  10. Make testing an everyday thing.

George: Thanks so much for taking the time, Lance! Lance’s very well-received book, Web Design for ROI, is available at Amazon and other fine bookstores!

Comments
4 Responses to “Interview with Lance Loveday”
  1. Jake Minturn says:

    Great interview!

    One thing I am curious about, and I’d love to get Lance’s take on this, is whether these boosts in conversion tend to hold up after back-testing.

    One thing I’ve seen when conducting MVT tests is that we’ll get a statistically significant winner that blows the doors off the original. But then we’d test our winning combination against the control to verify results, only to see our winner not perform nearly as well.

    Thanks.

  2. I liked what he said: “our approach is to never be satisfied, and always seek incremental improvement.” This is absolutely true, because once you get too comfortable, that’s when your competitors will overrun you.

  3. Lance says:

    George – Thanks so much for the interview and the kind words.

    Jake – We have seen the gains from our tests hold up. But I am sensitive to the issue you bring up around initial gains tapering off after the test. I think that goes to my points about testing different user experiences and not just design tweaks, and ensuring you’re testing among good options.

    I don’t know enough to comment on your unique situation. But many of the MVT tests we’ve reviewed consisted of playing “mix and match” with minor variations of a few common elements. The problem is that minor changes to headlines, images, and buttons are not usually going to be sufficient to dramatically impact the user experience. It reminds me of a great quote from Google’s Marissa Mayer talking about testing: “This isn’t about shades of blue.”

    Certainly there are cases where minor design changes can yield a big impact, but that’s increasingly becoming the exception. The norm is that you have to test big changes to make a big impact. That’s why we like to start with A/B tests to find a page design template that works well before refining further with MVT. A lot of companies start with MVT, which is backwards IMO.
