
SEO Metric Ranking Correlation Study

James Finlayson

May 22, 2018

When we first started link building, we’d talk about the raw number of links. Most would like to think those days are gone and that, since Penguin, quality matters more than quantity. As a result, many have moved to talking about the number of links above a certain metric threshold – typically Domain Authority or TrustFlow. Yet this is only useful, beyond being a raw count of links, if you believe the metric tells you something meaningful and comparable about those links.

Almost every day we get asked how many links a campaign has generated, and when we enter campaigns for awards we know the raw number of links is key. The lack of adoption of these link metrics as the true arbiter of a link’s value is, we think, evidence that they might not be very indicative of quality at all.

The Problem with Most Metrics

If you’re a tool provider, chances are you have your own metric; Moz has Domain Authority, Majestic has TrustFlow, Ahrefs has Domain Rating, etc. Yet, as a tool provider, you face significant limitations, including:

  1. You’re limited to your own database(s), as incorporating a competitor’s implies yours isn’t up to the task.
  2. Your databases are mostly link databases, so any measure of quality has to be weighted heavily towards links.
  3. Your audience is usually concentrated in a handful of countries, so you’re incentivised to test against those countries’ data and make the metric as accurate as possible there. To be fair, anyone who has played with Google outside an English-speaking country will know this bias isn’t limited to tool providers.

In a world where we know Google takes over 200 different factors into account when assessing rankings, and tool providers typically consider less than a handful, it shouldn’t be surprising that the metrics don’t correlate well with rankings. So, we decided to find out.

Earlier this year SEOMonitor was kind enough to send us over 450k SERPs, containing 4.5 million ranking positions’ worth of data. The data covered UK rankings for commercial terms with at least 1k searches a month in the UK. We gave them only a vague idea of what we’d use the data for, so that no funny business could occur (once again, thanks to the SEOMonitor team for putting up with our seemingly random requests).

For the purposes of this first study we randomly selected 1k of those 450k SERPs and, for each ranking page, collected the following metrics (a sketch of the resulting dataset follows the list):

  1. Majestic Domain-Level TrustFlow
  2. Majestic Domain-Level CitationFlow
  3. Majestic Page-Level TrustFlow
  4. Majestic Page-Level CitationFlow
  5. Moz Page Authority
  6. Moz Domain Authority
  7. Moz MozRank
  8. Moz Subdomain-level MozRank
  9. Ahrefs Domain Rating
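
To make that concrete, here’s a rough sketch of how the resulting dataset could be laid out – one row per ranking position per SERP. The column names and the `serp_metrics.csv` file are our illustration, not the actual export format from SEOMonitor or any of the tool providers:

```python
import pandas as pd

# One row per ranking position per SERP; all column names are illustrative.
columns = [
    "serp_id",             # which of the 1k sampled SERPs the row belongs to
    "position",            # the actual Google ranking position (1-10)
    "majestic_tf_domain",  # Majestic domain-level TrustFlow
    "majestic_cf_domain",  # Majestic domain-level CitationFlow
    "majestic_tf_page",    # Majestic page-level TrustFlow
    "majestic_cf_page",    # Majestic page-level CitationFlow
    "moz_pa",              # Moz Page Authority
    "moz_da",              # Moz Domain Authority
    "mozrank",             # Moz MozRank
    "mozrank_subdomain",   # Moz subdomain-level MozRank
    "ahrefs_dr",           # Ahrefs Domain Rating
]

df = pd.read_csv("serp_metrics.csv", usecols=columns)
metric_cols = columns[2:]  # everything except serp_id and position
```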

We then combined and averaged each metric for each ranking position to produce the graph below:

[Chart: average score of each SEO metric by ranking position]
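
As an illustration, the averaging step could look something like this, reusing the hypothetical `serp_metrics.csv` layout sketched above:

```python
import matplotlib.pyplot as plt
import pandas as pd

df = pd.read_csv("serp_metrics.csv")  # hypothetical layout sketched earlier
metric_cols = [c for c in df.columns if c not in ("serp_id", "position")]

# Average every metric across all sampled SERPs, grouped by ranking position.
avg_by_position = df.groupby("position")[metric_cols].mean()

avg_by_position.plot(marker="o", figsize=(10, 6))
plt.xlabel("ranking position")
plt.ylabel("average metric score")
plt.show()
```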

If we’re honest, we were surprised by quite how well each metric correlated. There is a clear pattern of better-ranking sites receiving higher scores from every metric – with MozRank (Subdomain) and CitationFlow just coming out on top. Here are the correlation scores:

[Chart: correlation scores for each metric]
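
The write-up doesn’t say which correlation statistic was used; as an illustration, here’s how Spearman’s rank correlation (a common choice for ordinal data like ranking positions) could be computed per metric:

```python
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("serp_metrics.csv")  # hypothetical layout sketched earlier
metric_cols = [c for c in df.columns if c not in ("serp_id", "position")]

# A useful metric should score higher at better (lower-numbered) positions,
# so we expect a negative rho; its magnitude shows the strength of the pattern.
for col in metric_cols:
    rho, p_value = spearmanr(df[col], df["position"])
    print(f"{col}: rho = {rho:+.3f} (p = {p_value:.2g})")
```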

Yet this is a pretty easy test – we’d likely get the same results if we looked at average:

  1. readership
  2. number of URLs
  3. direct traffic
  4. any number of other factors that clearly aren’t ranking factors but naturally increase as a site’s importance increases

That doesn’t mean any of these metrics are a good indication of the order sites will rank in – or, by extension, of their ability to predict future ranking success. So we asked a harder question: what percentage of ranking positions could each metric accurately predict? The results, it turned out, were not encouraging:

[Chart: percentage of ranking positions each metric predicted correctly]
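
As a sketch of how such a test could work (our reading of it, not necessarily the exact method used): for every SERP, sort the ten results by the metric and count how many land in exactly their actual position:

```python
import pandas as pd

df = pd.read_csv("serp_metrics.csv")  # hypothetical layout sketched earlier
metric_cols = [c for c in df.columns if c not in ("serp_id", "position")]

def fraction_exactly_right(group: pd.DataFrame, metric: str) -> float:
    # Predict the order by sorting on the metric (highest score -> position 1).
    # Ties are broken arbitrarily, which is an assumption of this sketch.
    predicted = group[metric].rank(ascending=False, method="first")
    return (predicted == group["position"]).mean()

for col in metric_cols:
    hit_rate = df.groupby("serp_id").apply(fraction_exactly_right, metric=col).mean()
    print(f"{col}: predicts {hit_rate:.1%} of positions exactly")
```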

We found that:

  1. The majority of metrics struggled to predict more than 15% of the ranking positions. To put it another way: on a random SERP, most of the time each individual metric would place no more than one of the ten results in its correct position.
  2. What’s not shown in the data is that, when a metric did predict a position correctly, it was typically the number one spot – and only where the top result was so dominant that it was incredibly obvious it deserved to be first.
  3. Surprisingly, given Moz’s comparatively small index, Page Authority predicted rankings the best, whilst MozRank (despite its name) predicted them the worst.

Yet there’s something weird going on here – how can MozRank win one test and come dead last in another? The answer, it turns out, lies in what happens when MozRank gets it wrong. Imagine a site ranking in position 1: TrustFlow predicts it should rank in position 2, whilst MozRank predicts it should rank in position 10. Both are wrong, and if you judge purely on how many results each metric gets right, the two are equal – but when a metric gets it wrong, you want it to be as little wrong as possible. It turns out that when MozRank gets it wrong, it gets it far more wrong than most other metrics:

[Chart: how far each metric’s predictions were from the actual ranking order]
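
The same sketch extends naturally to measuring how wrong a prediction is: instead of counting exact hits, average how many positions each prediction is off by:

```python
import pandas as pd

df = pd.read_csv("serp_metrics.csv")  # hypothetical layout sketched earlier
metric_cols = [c for c in df.columns if c not in ("serp_id", "position")]

def mean_positions_off(group: pd.DataFrame, metric: str) -> float:
    # In the example above, TrustFlow would be 1 position off and MozRank 9;
    # this averages that distance across the whole SERP.
    predicted = group[metric].rank(ascending=False, method="first")
    return (predicted - group["position"]).abs().mean()

for col in metric_cols:
    avg_off = df.groupby("serp_id").apply(mean_positions_off, metric=col).mean()
    print(f"{col}: predictions are {avg_off:.2f} positions off on average")
```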

So the trite answer might be ‘use and assess a range of metrics’. <sigh> Or, if we’re being cynical: ‘all SEO metrics are equally bad at assessing a link’s potential value; they’re just bad in different ways’.

This is inevitable and, we think, only going to get worse given the increasing complexity of what ranks where and why.

What Makes a Good Link

Beyond the number of links and the authority a site has, there are a few things that, as humans, we naturally take into account, including:

  1. Where on the page the link is – is it in the content, or in the sidebar or footer?
  2. The language of the linking page versus the page it’s linking to – why is an Estonian site linking to your English product page aimed at people in the UK, and why should anyone trust its opinion on buying products in the UK?
  3. The relevance of the linking page to the page it’s linking to – anyone would much prefer a link in an article all about what they do than in one about something completely unrelated.
  4. How Google’s treating the site you’re getting a link from – if you knew a site had just received a penalty, would you want a link from it?

Each of these is obviously important from a human perspective, but none of them are taken into account, at all, by the tool providers’ metrics. That’s why the metrics can’t work out rankings well.

What can you do about it?

Be smart on what you’re reporting on. You might want to consider reporting on:

  1. Social shares
  2. Coverage views
  3. Rankings and (YoY) Traffic/Revenue changes

We got so frustrated with this that, years ago, we built LinkScore. Taking into account more than a dozen off- and on-page metrics, it provides a score out of 500 for how likely a link is to improve the rankings of the linked page in the target country. This also matters for how you set your outreach team up for success: if all you ask of them is high-DA sites, don’t be surprised to get comment spam and other ‘tactics’ used to hit those KPIs. Their KPIs need to follow your KPIs as an agency, which in turn need to support your clients’ KPIs.
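
As a toy illustration of the general idea only – LinkScore’s actual signals and weights are proprietary, and everything below is invented for the sketch – blending several normalised link-quality signals into a score out of 500 might look like:

```python
def toy_link_score(signals: dict, weights: dict) -> float:
    """Toy weighted blend of 0-1 link-quality signals, scaled to 0-500.

    The signal names and weights are invented for illustration; this is
    not Verve Search's actual LinkScore model.
    """
    total_weight = sum(weights.values())
    weighted_sum = sum(signals[name] * weight for name, weight in weights.items())
    return 500 * weighted_sum / total_weight

score = toy_link_score(
    signals={
        "in_content": 1.0,         # link sits in the body copy, not the footer
        "language_match": 1.0,     # linking page and target share a language
        "topical_relevance": 0.7,  # article is mostly about the target's topic
        "domain_health": 0.9,      # no sign Google has penalised the domain
    },
    weights={
        "in_content": 2.0,
        "language_match": 1.5,
        "topical_relevance": 3.0,
        "domain_health": 2.5,
    },
)
print(f"{score:.0f} / 500")  # -> 436 / 500
```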

Ultimately, we’re not saying don’t use tool provider metrics – they’re a quick and dirty indication – but be aware of their limitations and plan towards the things that actually make a difference to your clients.
