
What Happens When SEO and CRO Conflict?

Will Critchlow

The author's views are entirely their own (excluding the unlikely event of hypnosis) and may not always reflect the views of Moz.


Much has been written and spoken about the interplay of SEO and CRO, and there are a lot of reasons why, in theory, both ought to be working towards a shared goal. Whether it's the simple pragmatism of the business benefit of increasing the total number of conversions, or higher-minded pursuits such as the ideal of Google seeking to reward the best user experiences, there is plenty that should bring us together.

In practice, though, it’s rarely that simple or that unified. How much effort do the practitioners of each put in to ensure that they are working towards the true shared common goal of the greatest number of conversions?

In asking around, I've found that many SEOs do worry about their changes hurting conversion rates, but few actively mitigate that risk. Interestingly, my conversations with CRO experts show that they also often worry about SEOs' work negatively impacting conversion rates.

Neither side weighs as heavily the risk that conversion-oriented changes could hurt organic search performance, but our experience shows that both risks are real.

So how should we mitigate these risks? How should we work together?

But first, some evidence

There are certainly some SEO-centric changes that have a very low risk of having a negative impact on conversion rates for visitors from other channels. If you think about changing meta information, for example, much of that is invisible to users on the page, so maybe that is pure SEO:

And then on the flip side, there are clearly CRO changes that don’t have any impact on your organic search performance. Anything you do on non-indexed pages, for example, can’t change your rankings. Think about work done within a checkout process or within a login area. Google simply isn’t seeing those changes:

But everything else has a potential impact on both, and our experience has shown us that the theoretical risk is absolutely real. We have definitely seen SEO changes that have changed conversion rates, and we have seen major CRO-centered changes have dramatic impacts on search performance (but more on that later). The point is, there's a ton of stuff in the intersection of both SEO and CRO:

Throughout this post, I'll talk about our experiences and the work we have done that has shown impacts in both directions, from conversion rate-centric changes that move search performance and vice versa. How are we seeing all this?

Well, testing has been a central part of conversion rate work essentially since the field began, and we've been doing a lot of work in recent years on SEO A/B testing as well. At our recent London conference, we announced that we have been building out new features in our testing platform to enable what we are calling full funnel testing, which looks simultaneously at the impact of a single change on conversion rates and on search performance:

If you’re interested in the technical details of how we do the testing, you can read more about the setup of a full funnel test here. (Thanks to my colleagues Craig Bradford and Tom Anthony for concepts and diagrams that appear throughout this post).
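To make the two layers of such a test concrete, here is a minimal, hypothetical sketch (not the actual SearchPilot implementation): the SEO side of a full funnel test splits similar pages into control and variant groups, because search engines need to crawl both versions at stable URLs, while the CRO side splits visitors, as in a classic A/B test. All the URLs, IDs, and function names below are made up for illustration.

```python
import hashlib

def _bucket(key: str) -> str:
    """Deterministic 50/50 split based on a hash of the key."""
    digest = int(hashlib.md5(key.encode("utf-8")).hexdigest(), 16)
    return "variant" if digest % 2 == 0 else "control"

def seo_bucket(url: str) -> str:
    """SEO layer: assign a *page* to control or variant, so rankings and
    organic traffic can be compared between the two groups of URLs."""
    return _bucket("seo:" + url)

def cro_bucket(visitor_id: str) -> str:
    """CRO layer: assign a *visitor* to control or variant, so conversion
    rates can be compared between the two groups of users."""
    return _bucket("cro:" + visitor_id)

# Hypothetical example: the same template change is evaluated on both axes.
for url in [f"https://www.example.com/products/widget-{i}" for i in range(4)]:
    print(url, "->", seo_bucket(url))

print("visitor-1234 ->", cro_bucket("visitor-1234"))
```

In practice the page-level split is usually stratified so the two groups have similar baseline traffic, but the basic idea of "pages on one axis, users on the other" is what lets a single change be read against both conversion rate and organic performance.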

But what I really want to talk about today is the mixed objectives of CRO and SEO, and what happens if you fail to look closely at the impact of both together. First: some pure CRO.

An example CRO scenario: The business impact of conversion rate testing

In the example that follows, we look at the impact on an example business of a series of conversion rate tests run over the course of a year, and at the revenue uplift we might expect from rolling out winning tests and turning off null and negative ones. We compare the revenue we might achieve with the revenue we would have expected without testing. The example is a little simplified, but it serves to make the point.

We start on a high with a winning test in our first month:

After starting on a high, our example continues through a bad stretch: a null test (no confident result in either direction) followed by three losers. We turn off each of these four, so none of them has any impact on future months' revenue:

Let's continue in a similar vein through the end of the year. Over the course of this example year, we see three months with winning tests, and of course we only roll out the ones that come with uplifts:

By the end of this year, even though more tests have failed than have succeeded, you have proved some serious value to this small business, and have moved monthly revenue up significantly, taking annual revenue for the year up to over £1.1m (from a £900k starting point):
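To see how that sort of number comes about, here is a minimal sketch of the compounding arithmetic. The monthly uplift figures are invented for illustration and won't reproduce the chart exactly; the point is simply that each winning test, once rolled out, lifts every subsequent month's revenue on top of the £75k/month (£900k/year) baseline.

```python
# Hypothetical illustration of the CRO-only view. The per-test uplifts below
# are invented; the post's example uses its own figures.
baseline_monthly_revenue = 900_000 / 12  # £75k per month

# One entry per month: the measured conversion rate uplift of that month's
# winning test, or None for a null/losing test that gets turned off.
monthly_test_results = [0.08, None, None, None, None, 0.10,
                        None, None, 0.07, None, None, None]

annual_revenue = 0.0
cumulative_uplift = 1.0
for uplift in monthly_test_results:
    if uplift is not None:
        # A winner is rolled out, so it lifts this month and every later month.
        cumulative_uplift *= 1 + uplift
    annual_revenue += baseline_monthly_revenue * cumulative_uplift

print(f"With CRO-only rollouts: £{annual_revenue:,.0f}")   # roughly £1.05m here
print(f"Without testing:        £{12 * baseline_monthly_revenue:,.0f}")
```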

Is this the full picture, though?

What happens, though, when we add in the impact on organic search performance of these changes we are rolling out? Well, let's look at the same example financials with a couple more lines showing the SEO impact. That first positive CRO test? It was negative for search performance:

If you weren’t testing the SEO impact, and only focused on the conversion uplift, you’d have rolled this one out. Carrying on, we see that the next (null) conversion rate test should have been rolled out because it was a win for search performance:

Continuing on through the rest of the year, we see that the actual picture (if we make decisions about whether or not to roll out changes based on the CRO testing alone) looks like this when we add in all the impacts:

So you remember how we thought we had turned an expected £900k of revenue into over £1.1m? Well, it turns out we've added less than £18k in reality and the revenue chart looks like the red line:

Let’s make some more sensible decisions, considering the SEO impact

Back to the beginning of the year once more, but this time, imagine that we actually tested both the conversion rate and search performance impact and rolled out our tests only when they were net winners. Here we see that while a conversion-focused team would have rolled out the first test:

We would not:

Conversely, we would have rolled out the second test, because it was a net positive even though the pure CRO view had it as neutral/inconclusive:

When we zoom out on that approach to the full year, we see a very different picture to either of the previous views. By rolling out only the changes that are net positive considering their impact on both search and conversion rate, we avoid some significant drops in performance, and we get the chance to roll out a couple of additional uplifts that a conversion rate-only view would have missed:

The upshot is a +45% uplift for the year, ending it with monthly revenue up 73%, avoiding the false hope of the pure conversion-centric view, and delivering real business impact:
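A hypothetical sketch of that decision rule in code: estimate the revenue impact of each test on both channels, and only roll it out when the combined effect is positive. The figures below are invented, and a real implementation would also account for confidence intervals on each channel's estimate, but they mirror the shape of the two tests above (a CRO winner that loses organic traffic, and a CRO null that wins it).

```python
from dataclasses import dataclass

@dataclass
class FullFunnelResult:
    """Hypothetical summary of one full funnel test (monthly revenue deltas)."""
    name: str
    cro_revenue_delta: float  # estimated change from the conversion rate impact
    seo_revenue_delta: float  # estimated change from the organic traffic impact

def should_roll_out(result: FullFunnelResult) -> bool:
    """Roll out only when the combined, cross-channel impact is positive."""
    return result.cro_revenue_delta + result.seo_revenue_delta > 0

tests = [
    FullFunnelResult("test 1: CRO winner, SEO loser",
                     cro_revenue_delta=6_000, seo_revenue_delta=-9_000),
    FullFunnelResult("test 2: CRO null, SEO winner",
                     cro_revenue_delta=0, seo_revenue_delta=4_500),
]

for t in tests:
    decision = "roll out" if should_roll_out(t) else "revert"
    net = t.cro_revenue_delta + t.seo_revenue_delta
    print(f"{t.name}: net {net:+,.0f} -> {decision}")
```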

Now, of course, these are simplified examples. In the real world, we would need to look at impacts per channel, and we might consider rolling out tests that appeared not to be negative rather than waiting for them to reach statistical significance as positives. I asked CRO expert Stephen Pavlovich from conversion.com for his view on this, and he said:

Most of the time, we want to see if making a change will improve performance. If we change our product page layout, will the order conversion rate increase? If we show more relevant product recommendations, will the Average Order Value go up?

But it's also possible that we will run an AB test not to improve performance, but instead to minimize risk. Before we launch our website redesign, will it lower the order conversion rate? Before we put our prices up, what will the impact be on sales?

In either case, there may be a desire to deploy the new variation — even if the AB test wasn't significant.

If the business supports the website redesign, it can still be launched even without a significant impact on orders — it may have had significant financial and emotional investment from the business, be a better fit for the brand, or get better traction with partners (even if it doesn't move the needle in on-site conversion rate). Likewise, if the price increase didn't have a positive/negative effect on sales, it can still be launched.

Most importantly, we wouldn't just throw away a winning SEO test that reduced conversion rate, or a winning conversion rate test that negatively impacted search performance. Both of these tests would have come from underlying hypotheses, and by reaching significance, would have taught us something. We would take that knowledge and feed it back as input into the next test, in order to try to capture the good part without the associated downside.

All of those details, though, don’t change the underlying calculus that this is an important process, and one that I believe we are going to need to do more and more.

The future for effective, accountable SEO

There are two big reasons that I believe that the kind of approach I have outlined above is going to be increasingly important for the future of effective, accountable SEO:

1. We’re going to need to do more testing generally

I talked in a recent Whiteboard Friday about the surprising results we are seeing from testing, and the increasing need to test against the Google black box:

I don’t see this trend reversing any time soon. The more ML there is in the algorithm, and the more non-linear it all becomes, the less effective best practices will be, and the more common it will be to see surprising effects. My colleague Dom Woodman talked about this at our recent SearchLove London conference in his talk A Year of SEO Split Testing Changed How I Thought SEO Worked:

2. User signals are going to grow in importance

The trend towards Google using more and more real and implied user satisfaction and task completion metrics means that conversion-centric tests and hypotheses are going to have an increasing impact on search performance (if you haven’t yet read this fascinating CNBC article that goes behind the scenes on the search quality process at Google, I highly recommend it). Hopefully there will be an additional opportunity in the fact that theoretically the winning tests will sync up more and more — what’s good for users will actually be what’s good for search — but the methodology I’ve outlined above is the only way I can come up with to tell for sure.

I love talking about all of this, so if you have any questions, feel free to drop into the comments.

Will Critchlow

Will Critchlow is CEO of SearchPilot, a company that spun out of his previous business Distilled, which was acquired by Brainlabs in early 2020. SearchPilot is an enterprise SEO A/B testing platform that proves the value of SEO for the world’s biggest websites by empowering them to make agile changes and test their impact.
