
The Importance of Strategic Focus in an Evolving Algorithmic Environment

Lee Allen

This YouMoz entry was submitted by one of our community members. The author’s views are entirely their own (excluding an unlikely case of hypnosis) and may not reflect the views of Moz.


A quick-ish trip down memory lane

Remember when the life of an SEO was so much easier? Even a little bit of knowledge could yield extremely powerful results. The most influential signals were public knowledge and, in order to rank, you simply needed to understand what the top-ranking domains were doing and calculate the numbers you required to be on an even keel.

It was so simple because only a limited number of factors were being measured, those factors were easy to manipulate and, short of doing something totally stupid, the risk of triggering a penalty was minimal. In the unlikely event that a penalty did occur, it was typically much simpler to reverse.


Figure 1 - A top-level representation of how Google’s algorithmic focus has shifted

After layers of enhancements and the introduction of ranking signals beyond technical, content and links, Google has gradually begun to blur the lines. The most recent shifts have left a lot of brands taking a scattergun approach or, in more cases than you’d think, simply sticking with the tactics they relied upon in previous years. Unfortunately, neither "strategy" is likely to gain them much traction and could, in fact, result in some form of penalty.

How tangible are your KPIs?

A statement I constantly hear thrown around casually is “SEO is now about doing the right things.” This is an extremely open and subjective statement influenced by Google's propaganda machine and happily relayed by naive marketing agencies. 

Ask yourself, in this rapidly evolving and penalty-prone algorithmic environment, do you feel your SEO strategy has both tangible and safe KPIs, targeted to the areas which offer the greatest gains? If so, are you confident in what you’re basing these decisions on? If you’re controlling a digital marketing function, with the pressure of defining and executing your brand’s SEO strategy, spending your budget on “the right things” isn’t a strong enough justification.

My questions are: Has Google’s algorithm really evolved to a point where it is too complex to understand which signals are important? How much of each do you need? And, primarily in today's environment, what are the suitable tolerance levels to avoid penalties?

Using terabytes of data gathered by Stickyeyes' proprietary algorithmic signal trending engine, Roadmap—which helps us identify strong correlations, benchmark performance against competitors or an entire industry and monitor algorithmic changes—we’re hoping to provide some clarity.

Do 'important' ranking signals still exist?

Over the last decade, correlation studies have become increasingly common. These studies have earned a tepid response from the SEO community, mainly due to the ambiguity surrounding cause and effect. I’m a believer that, used correctly, this data can provide extremely valuable insights.

Correlation data can be used to measure the strength of the relationship between a specific signal and ranking higher in Google. For instance, as shown in another Stickyeyes Moz article, there is no denying a strong relationship exists between ranking higher in Google and having a greater amount of time spent on your site.


Figure 2 - Evidencing the strong correlation between Average Time Spent On Site (seconds) and Google Ranking Position

This is not an isolated case. Over the past six months, we’ve seen the number of signals with a "strong" correlation (a coefficient above +0.7 or below -0.7) gradually increase. In layman’s terms, a heck of a lot of signals had a relatively strong correlation with ranking higher, albeit some far stronger than others.


Figure 3 - The number of signals correlating "strongly" has gradually increased over the past six months
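If you want to sanity-check something similar against your own data, the sketch below (Python, using pandas and SciPy) shows one way to flag signals that clear that +/-0.7 threshold. The data frame, column names and CSV in the usage note are illustrative assumptions, not our Roadmap schema.

```python
# Minimal sketch (not Stickyeyes' Roadmap engine): flag signals whose
# Spearman correlation with ranking higher exceeds the +/-0.7 threshold.
# Column names are illustrative assumptions.
import pandas as pd
from scipy.stats import spearmanr

def strong_signals(df: pd.DataFrame, threshold: float = 0.7) -> pd.DataFrame:
    """Correlate each signal with ranking higher. Position is negated so a
    positive coefficient means 'more of this signal, better ranking'."""
    rows = []
    for signal in df.columns.drop("position"):
        rho, p_value = spearmanr(-df["position"], df[signal])
        rows.append({"signal": signal, "rho": rho, "p": p_value,
                     "strong": abs(rho) >= threshold})
    return pd.DataFrame(rows).sort_values("rho", ascending=False)

# Usage with a hypothetical snapshot:
# df = pd.read_csv("ranking_snapshot.csv")  # position, time_on_site, h1_count, ...
# print(strong_signals(df))
```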

This indicates Google’s algorithm is increasing its consideration of existing signals, as well as potentially introducing new signals, maybe with the aim of forming an equilibrium across all indicators and further limiting the manipulability of the overall algorithm.

We could draw the conclusion that having higher or lower levels of these strongly correlating signals potentially influences ranking movement. But allowing any signal to heavily influence ranking change in isolation—much like "exact match anchor text" could be manipulated back in the day—would be an oversight the architects are extremely unlikely to repeat. The more likely outcome is that influence is attributed based on the outcome of many signals, rather than on an independent basis.

But, does this mean weaker signals are irrelevant?

Assuming we can identify the strongest correlating signals, is it fair to assume that weak and uncorrelated signals are of little or no importance? Well, based on the following data, I’d say overlooking these could result in an underachieving strategy.

Let’s take a really basic signal, so trivial we could debate if it’s even on Google’s radar: the number of H1 tags on the ranking URL. Over the past six months it has always had a relatively low correlation coefficient and a sporadic trend.


Figure 4 - The number of H1 tags has always had a relatively sporadic and low correlation coefficient (under 0.27)

Digging into the data shows this is because the vast majority of URLs, regardless of how high or low they rank, have a single H1 tag, including at least 74.9% of the sites ranking in first place and 75.8% across the top three positions. Would it really be a wise decision to overlook such basic signals when the majority of sites, especially high ranking ones, don’t?


Figure 5 - Our data set shows the majority of ranking URLs have exactly one H1 tag, regardless of ranking position
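Reproducing that kind of breakdown is straightforward; here's a minimal sketch, again assuming an illustrative data frame with "position" and "h1_count" columns rather than our actual schema.

```python
# Sketch: the share of URLs with exactly one H1 tag, per ranking position.
# Column names are illustrative assumptions.
import pandas as pd

def share_with_single_h1(df: pd.DataFrame) -> pd.Series:
    """Per ranking position, the fraction of URLs with exactly one H1 tag."""
    return (df.assign(single_h1=df["h1_count"].eq(1))
              .groupby("position")["single_h1"]
              .mean())

# On a data set matching the figures quoted above,
# share_with_single_h1(df).loc[1] would come out around 0.749.
```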

Furthermore, the most consistently uncorrelated signal we’ve measured is the number of backlinks with a Moz Page Authority greater than 90 (pointing to the individual ranking URL). This might come as a surprise to most, especially with all the current SEO chatter focusing on higher quality links. Ultimately, because such backlinks are a rarity, only 0.3% of ranking URLs have one or more, resulting in a zero correlation.


Figure 6 - The sites which obtain backlinks from pages with a Page Authority > 90 are typically reputable domains

Again, would you really turn down a backlink of this quality? I certainly don’t see one hampering a website’s performance. Some may even argue that it adds a protective layer by establishing trust. The lack of correlation simply highlights that sites don’t necessarily require a vast quantity, if any, of the links in order to rank highly.
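If the zero correlation seems counterintuitive, a quick toy simulation (not our data set) shows why: when only around 0.3% of URLs have the signal at all, there is essentially nothing for the coefficient to latch onto.

```python
# Toy illustration only: a signal present on ~0.3% of URLs correlates with
# position at roughly zero, regardless of how valuable each instance may be.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)
positions = np.tile(np.arange(1, 21), 500)                      # 10,000 simulated rankings
pa90_links = (rng.random(positions.size) < 0.003).astype(int)   # ~0.3% have such a link

rho, _ = spearmanr(positions, pa90_links)
print(f"Spearman rho: {rho:.3f}")  # hovers around 0; the signal is simply too rare
```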

Should we be looking at specific types of signals?

If our data indicates Google is gearing their algorithm towards measuring a greater number of indicators, is there a possibility it still favors a specific set of signals, much like in the old days, making our strategic focus narrower and more manageable?

By categorizing our broad range of algorithmic signals and taking the average correlation coefficient for each signal category over the past six months, we can gain insight into how strongly, on average, each group of signals correlates with higher rankings.


Figure 7 - An overview of the month-on-month average correlation coefficient, split by signal category
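For anyone replicating this, the aggregation itself is simple. A sketch follows, assuming an illustrative long-format table of monthly coefficients (month, signal, category, rho) rather than our internal structure.

```python
# Sketch: month-on-month average correlation coefficient per signal category.
# Assumes a long-format table with 'month', 'category' and 'rho' columns
# (illustrative names).
import pandas as pd

def category_trend(coeffs: pd.DataFrame) -> pd.DataFrame:
    """One row per month, one column per signal category, holding the
    average correlation coefficient for that category in that month."""
    return (coeffs.groupby(["month", "category"])["rho"]
                  .mean()
                  .unstack("category"))

# e.g. category_trend(coeffs) would yield columns such as
# 'Links & Link Quality', 'Content & Engagement' and 'Other'.
```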

Notably, the "other" signals (i.e., Traffic, Page Load Speeds, etc.) are currently the most dominant, in terms of correlation. We have also seen a gradual decline in correlation of "Links & Link Quality" signals, which has gone from being the strongest correlating to the third-strongest over the past 11 months. At the same time, "Content and Engagement" signals have made a gradual climb, becoming the second-strongest group of signals for the last three months.

The high volatility and relatively narrow range between signals could be a sign that Google is attempting to maintain a more equal distribution across all types of signals. Additionally, the gradual shifts reflect Google’s modern update approach—iterative updates to existing algorithmic components (i.e., new versions of Panda and Penguin) rather than a massive refocusing of the entire algorithm (i.e., Vince).

What is the data actually telling us?

There will always be signals that have a stronger relationship with sites that rank higher and potentially have a positive influence on rankings. In my opinion, even without definitive proof of cause and effect, all strongly correlating signals are worth your constant attention.

That being said, our analysis also highlights that you shouldn’t ignore any potential signal, no matter how small. I like to think of these as prerequisites you need to satisfy to justify your right to rank.

Additionally, the growth in the number of strongly correlating signals indicates Google is considering more signals, and it is unlikely that attention to any one signal in isolation will have a meaningful impact.

Defining your strategic focus

The key to any strategy is having tangible goals to aim for. Otherwise, what are you basing your strategic decisions on? Over the years, I’ve seen businesses set strategies on the basis that someone else is doing it, so they should do it too. We’re always hearing the aforementioned conjecture about doing “the right things,” the things that “Google wants to see.” Unfortunately, Google isn’t measuring right and wrong in the purest sense; the company is quantifying right and wrong based on cold hard statistics. Being mathematically minded, I prefer to measure and quantify everything using data and reduce the level of gut feeling.

With so many variables in play, what brands actually require is strong, strategic steering, some form of benchmarks, and KPIs to safely dictate the direction of their overall digital marketing strategy while ensuring SEO campaigns reap maximum benefits.

Based on what we've learned, we’re pretty confident the overall strategic focus needs to be broad, covering a wide range of signals. Ultimately, this means there is a lot of data crunching required. But at the end of the day, this is the same signal data Google is gathering, albeit from its own data set rather than publicly available sources, and in far vaster quantities.

With that in mind, we’re going to outline how data can be used to set extremely tangible, yet safe benchmarks. I don’t believe for a minute that we’ll ever identify a number of precise values that magically pop a site into a high ranking position. Those days are long gone. But, how much of each signal do you need to rank, to be ultra-competitive, to have a shot at that No. 1 position for your most lucrative terms?

First, don’t be an anomaly!

With so many competitors in play, how do you know that the ones you’re battling it out with are actually doing it in a sustainable manner? Well, ultimately, you don’t. If the top players are sitting on the tipping point of penalization, following them could quickly lead you to the same fate. Nobody should follow an individual competitor’s strategy; it's a high-risk and antiquated approach at best. This is why we use a data set covering a wide spread of keywords, domains and positions, ensuring we’re looking at the bigger picture and not an isolated battle for a single keyword.

The first challenge is understanding the algorithm’s range. By reviewing the range of data (the minimum and maximum values per position) in contrasting positions (No. 15 vs. No. 1) for a handful of potential ranking signals, we can clearly see these ranges are huge. (They are so extensive that many sites will comfortably sit within the No. 1 position range.) So why don’t we all rank No. 1?


Figure 8 - The logarithmic range of values for ranking position No. 15 vs. No. 1 across a wider array of signals. Note: Alexa Rank is used as an indicative measure of a site’s overall traffic levels.
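Computing those raw ranges is the easy part. Here's a minimal sketch, assuming the same kind of illustrative observation table as before (one row per ranking URL, with its position and signal values).

```python
# Sketch: the raw (untrimmed) range of a signal per ranking position.
# Assumes an observation table with a 'position' column (illustrative names).
import pandas as pd

def raw_ranges(obs: pd.DataFrame, signal: str) -> pd.DataFrame:
    """Minimum and maximum observed value of a signal at each position."""
    return obs.groupby("position")[signal].agg(["min", "max"])

# Comparing raw_ranges(obs, "time_on_site").loc[[1, 15]] reproduces the
# position No. 1 vs. No. 15 comparison shown in Figure 8.
```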

Understanding outliers

Ultimately, these huge ranges are skewed by what statisticians refer to as outliers. Statistically, these are data points that sit far from the majority of the data in their respective data set and can be misleading. Using the normal distribution, we can identify where the majority of data sits within each range and how far each data point, per ranking position, deviates from the average in order to detect these outliers.


Figure 9 - The majority of data, per signal, falls within one standard deviation of the mean

Typically, within normally distributed data sets, roughly 95% of the data falls within two standard deviations of the mean and 68% falls within one, while the handful of data points sitting outside ±2 deviations are most likely anomalies.

Overlaying deviation

By overlaying deviation data on our original range chart, we can get a better idea of what values, for a given signal and position, could be classified as abnormal. In this case, we’re going to assume anything beyond one standard deviation is either potentially risky or an insufficient level.


Figure 10 - Logarithmically comparing the range of values for positions No. 1 vs. No. 15 and highlighting value deviation

Trimming these anomalies (typically a handful of excessively large or small values relative to the whole data set) results in a refined target range, which in turn narrows each signal’s ideal range per position. This ensures that you’re sticking with the majority.
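In code, the trimming step might look something like the sketch below; the one-standard-deviation cut-off and the column names are assumptions for illustration.

```python
# Sketch: drop values more than k standard deviations from the per-position
# mean, then recompute the range. Column names are illustrative assumptions.
import pandas as pd

def trimmed_range(obs: pd.DataFrame, signal: str, k: float = 1.0) -> pd.DataFrame:
    """Refined min/max per position after removing values more than
    k standard deviations from that position's mean."""
    def trim(group: pd.Series) -> pd.Series:
        mean, std = group.mean(), group.std()
        kept = group[(group - mean).abs() <= k * std]
        return pd.Series({"min": kept.min(), "max": kept.max()})
    return obs.groupby("position")[signal].apply(trim).unstack()

# trimmed_range(obs, "time_on_site") gives the narrower, "stick with the
# majority" benchmark ranges discussed above.
```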

A friendly warning – practice common sense

These outputs are based purely on the values we’ve captured from ranking websites, so a degree of common sense is required when interpreting the data. For instance, it is extremely unlikely your performance will be impacted by having too much authority or a really low bounce rate, while it is likely that Google may consider the respective opposites.

That being said, let's consider a signal like "Average Time Spent On Site." While it would seem obvious that more is better, Google likely understands that, if it takes a user a long time to consume a piece of content, the page may simply be too complex to read or its key points difficult to discern.

A popular theory, and one I’d agree with, is that Google has different tolerance levels per signal and per positional bracket. My view is that certain signals carry minimal to no risk: they are likely to affect a website’s performance but never reach the threshold of triggering a penalty, while others carry higher risk and more serious repercussions.

If a site accumulates a sizable risk score, its performance is dampened or, in more severe cases, a penalty may be applied. This is potentially why certain sites with many abnormal values still rank exceptionally well (e.g., Wikipedia’s link signals are off the scale), while other sites with far fewer anomalies (e.g., a high level of duplicate content or keyword-focused anchor text) may rank worse and/or suffer a penalty.

This analysis highlights the need to outline a set of boundaries, allowing optimization to a competitive but not overtly excessive degree, giving you an increased chance of ranking well. If you’re already benchmarking outside of what we’d class as a signal’s tolerance range, especially across a broad range of signals, then you’re probably risking penalization and should try to rein it in. At the end of the day, if the majority of sites ranking in the No. 1 position have values within this range, then you can be relatively confident you’re playing it safe.

However, it’s not so bad to be common

While the above analysis presents some interesting insights, the broadness of the target ranges feels a tad unspecific and, for me, leaves unanswered questions.

Recall my earlier example of H1 tags on a page. The majority of sites ranking in first position (74.9%) have only one of these. Granted, this value conforms to our anomaly analysis, which recommends a value between one and four. However, based on the high percentage of sites with a single H1 tag, I’d say we can be more confident that one is nearer to the “sweet spot” than four.

With this in mind, let's examine whether there is a more specific value, per signal, that has a higher association with sites ranking in first position. Let’s segment the original data set, excluding the previously identified outliers (beyond -1/+1 deviation), to include only the most common values that together make up the majority of the data.

Identifying the most common values

For example, 10.72% of all the "Average Time On Site" values, for the No. 1 position, are 264 seconds, while 5.09% are 222 seconds. In this case, these two most common values account for 15.81% of that specific data set. Since we’re looking at the majority percentage, we continue including values, per ranking signal and position, until the total percentage is greater than 50%. In this case, as previously noted, "H1 Tags On The Page" has only one value included (1), as that value alone accounts for more than 50% of its respective data set.
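The selection logic is easier to follow in code. Here's a minimal sketch, assuming a simple series of one signal's values for a single ranking position (again, purely illustrative).

```python
# Sketch: keep the most common values until their cumulative share first
# exceeds the cutoff (the "majority percentage of most common values").
import pandas as pd

def majority_values(values: pd.Series, cutoff: float = 0.5) -> pd.Series:
    """Most common values whose running share first exceeds the cutoff."""
    freq = values.value_counts(normalize=True)           # most common first
    keep = freq.cumsum().shift(fill_value=0) < cutoff    # stop once >50% is reached
    return freq[keep]

# For the Average Time On Site example above, the first values returned would
# be 264s (10.72%) and 222s (5.09%), with inclusion continuing until the
# running total passes 50%. For H1 tags, the value 1 alone clears the cutoff.
```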


Figure 11 - Using this refined data set gives a very different perspective and appears to reduce those extremely broad benchmark ranges


Figure 12 - Logarithmically comparing the range of values using only the majority percentage of most common values

This is more of what I’d expect to see, with sites in the No. 1 position typically having a far less flexible range. However, this data can still be open to various interpretations, and an element of common sense is always required.

But what does all this actually mean?

Using our control case, the "H1 Tags On The Page" example seems to suggest that a single tag should be used, regardless of ranking position, which stacks up with the earlier insights and with what we usually hear from a best-practice site structure perspective.

We can also clearly see that those ranking in the No. 1 position typically retain the user on the site for a longer period of time than those ranking lower.

The minimum level of multi-channel traffic driven to the site is now a far higher requirement, as indicated by Alexa Rank, where a lower rank indicates higher overall traffic. While this does highlight a minimum threshold (an Alexa Rank of 6), having ridiculously high levels of traffic shouldn’t be damaging. At the point this data set was pulled, the top six sites ranked by Alexa—Google.com, Facebook.com, YouTube.com, Yahoo.com, Baidu.com and Qq.com—likely didn’t rank in Google UK across the terms we measured.

Additionally, our data shows it is uncommon that you’ll find a site ranking in the No. 1 position with a bounce rate less than 27%, but it doesn’t mean that you need to be above this value. This, again, is steered by the fact that the majority of sites will struggle to get this rate any lower with the high levels of traffic they are receiving. I’d argue that there is at least a minimum or maximum threshold per signal, but not necessarily both.


Figure 13 - In most cases, I’d estimate only one side of the range would hinder performance. However, in instances where Google is overtly keen to prevent manipulation, you could argue that both too much and too little would have a potential performance impact.

Summarized takeaways

Overall, I appreciate that there is a lot of analysis and data to digest. For those less interested in the science behind the conclusions, here are a number of key takeaways:

  • No potential ranking signal should be ignored. Try to perfect everything no matter how small, across the board, and take note of small differences.
  • Keep an eye on those “strong” correlating signals. You can potentially use these as a priority area indicator rather than as your sole strategic focus.
  • Don’t unknowingly follow a single competitor’s strategy. Instead, look at a broad set of competitors' actions across your entire keyword set.
  • Don’t be an anomaly. Aim to stay within the tolerance levels of the majority of sites for each signal and ranking position.
  • Ensure any deployment is reinforced by data. Define clear targets and delivery KPIs from the outset to avoid staking your marketing spend on theory and conjecture.

Sharing our replication tips

If you’re thinking of performing a similar analysis, here are a few pointers:

  • You need a large set of data. A handful of sites and keywords doesn’t cut it. When you start refining your data set, you need to leave yourself with a sample that is still statistically valid.
  • There are lots of ranking signals. You’ll definitely have a head start if you use these examples as your guide. But as you can imagine, these alone are far from satisfactory. As referenced, there are hundreds of ranking signals, some far more important than these.
  • Industries differ. This analysis is based on terms across a combination of industries. Based on previous analysis, we know there can be huge differences in benchmarks when analyzing a specific industry’s data.
  • Google moves the goalposts. Because Google constantly refines its results, we update our entire data set at least once a month. You need to stay on top of changes and continuously benchmark.
  • Beware of how fast you grow. Using our month-on-month data, we have the ability to determine natural growth rates. Simply taking a snapshot of the data at a single point in time and then rapidly bridging the gaps is likely to land you in trouble.
  • Benchmarks need to be comparable. Nobody has access to the entire web’s analytics accounts. Therefore, our engagement statistics are taken from publicly available sources, and any comparisons to this should be, too. Undoubtedly, your analytics package will have a more accurate representation of your own stats. But to compare yourself against others, you need to compare apples with apples.

If you'd like to keep up to date with our Roadmap data analysis and other intelligence, visit Stickyeyes Insight Hub. If you’re undertaking similar data analysis we’d love to hear your findings in the comments below.

Lee Allen
Lee Allen is Technical Planning Director (formerly Search Director) at international digital marketing agency Stickyeyes. With responsibility for the agency’s product and software development arm, Lee’s core remit involves planning and developing cutting-edge and proprietary technology to satisfy both client and internal needs.
