
outREACH Workshop Video 3 – Measuring the Quality of Links

This is the third and final video from our free outREACH workshop. This was a series of workshops teaching actionable tips and techniques to enhance your creative content and link-building strategy.

In this third part of the series, James Finlayson, Head of Strategy, discusses different methods for measuring the quality of your outreach efforts. He also talks about the history of updates and how this has impacted the way we look at links.
 

If you haven’t already seen the first in the series, feel free to follow this link and check out Lisa Myers, the CEO and Founder of Verve Search, as she goes through the concept and ideation process of creative campaigns. You can also see Alex Cassidy’s presentation here to learn more about our team’s outreach strategy. 

Join us for our next event. On 12th June we are hosting the outREACH Online Conference, which is a fantastic opportunity for you, or members of your team, to hear from the best SEOs, link-developers, content creators and marketers in the industry, including marketing wizard Rand Fishkin, Shannon McGuirk (Aira), Carrie Rose (Rise at Seven), our very own Lisa Myers and many, many more. We hope that you’ll be able to join us for this event.


If you have any questions about this content or outREACH Online please contact us at info@vervesearch.com.

Campaign Spotlight – Crep Check

Starting this month, we will be sharing with you an example of a campaign which has delivered amazing results for a client over the course of the previous month.

This month, we produced Crep Check for Farfetch, a luxury fashion retailer. The campaign looks at the most valuable trainers currently on the market, as well as those that have seen their value skyrocket since their initial release.

We teamed up with Stadium Goods to provide a definitive breakdown of the most valuable and fastest-appreciating trainers in modern times, and an explanation as to why so many of these shoes have become worth such vast sums of money.

Costing £22,763, the high-top ‘Jasper’ sneakers from the Kanye West x Louis Vuitton line are the most valuable on the market. Released in 2009, the shoes, designed by West himself, have increased in value by more than 2500%. They were initially available in three colourways, but the pink and grey version is the most coveted.

We found that Nike’s What the Dunk trainers had the biggest value increase. Released in 2007 at £91, they were designed as a patchwork of previous patterns, colours and materials used in old Nike SB models. Today, they are worth £3,793, an increase of over 4000%.

We took inspiration for the campaign from high-profile trainer auctions and one-off celebrity releases, which have been sold for tens of thousands of pounds. The international news interest these stories generate inspired us to create a definitive list using data provided by one of Farfetch’s key brands.

We also liked the idea of showcasing trainers as alternative investments, a concept we previously explored with coins and toy cars. Lifestyle, fashion, and money journalists love to follow the most recent collectable trends and valuations in fashion.

Our designs took inspiration from ‘sneaker walls’, similar to those found in Stadium Goods’ stores in New York and Los Angeles. We also used price tags to show the rank number, making the campaign feel more like a high-end fashion index.  We then researched each shoe using Stadium Goods’ index to provide some context to their value increase.

‘Crep Check’ launched on the 26th June, and to date, our outreach team have delivered 98 links totalling a LinkScore of 3,367 (LinkScore is Verve’s own tool, which uses a combination of metrics to measure the value of links). We were also able to build links in seven different countries.

Our outreach coincided with an auction of the world’s rarest trainers in New York set up by Sotheby’s and Stadium Goods. This included one of the first-ever shoes made by Nike in 1972, which sold for £350,000. The interest in this auction meant journalists were receptive to a report about the subject to inform their story.

It also coincided with a PR storm involving Nike removing a limited-edition shoe from retail following complaints about its use of the ‘Betsy Ross’ flag, an early US flag which has since become associated with white supremacy.

Both stories helped us to get linked coverage for the client in Business Insider, CNBC, GQ, Houston Chronicle, Yahoo Finance, AD (Netherlands’ second-largest newspaper) and CNN’s Style section. In addition, the BBC’s Newsround team wrote an article about the world’s most valuable trainers using the campaign data.

How to write an outreach email that gets links

Subject line

The subject line is the most important part of the outreach email. With some journalists at national publications writing eight to ten articles per day and receiving anywhere upwards of 100 emails a day, catching their eye with a newsworthy and noticeable subject line is essential. The idea is to make it as close as possible to a usable headline for journalists. Here is an example from one of our more successful campaigns – ‘On Location’:

“Look Familiar? These are the most-filmed locations in California”

The headline is short, punchy and likely to entice the journalist to open the email to find the information they are looking for. We often vary our approach depending on the target publication. For example, using simple language and popular tabloid phrases such as ‘revealed’ often helps our open rates with tabloid journalists. The subject line below helped us get national coverage from the Sun and Daily Express for a recent campaign:

“(Data) Revealed: London has UK’s most affordable fuel prices”

It is also important to make it clear what you are offering to a journalist from the outset. If you are pitching content to be used as a listicle, make that clear in the subject line. The sign of a great subject line for our team is when the journalist uses it verbatim as the headline for their article.

Opening lines

It is important to get to the point of your email as quickly as possible, setting out the basis of the article in the first few lines. The opening lines should include the most newsworthy aspect of the story, as well as a concise explanation of the information you are pitching. Below is an example of the opening of the email for ‘On Location’ I highlighted above:

In three lines, this email clearly sets out what the journalist needs to know about our research: that we compiled IMDb film location data to find the most popular locations for movie scenes around the world, and that Venice Beach is the most-filmed location in California. The most important points are listed in bold to help the journalist spot the story quickly.

Our team usually take three to five lines in an email to give the journalist the essence of the story in the campaign. If this appeals to them, they are more likely to read on.

The link

Our outreach team prefer to be direct when asking for a link to a campaign. We make it clear that we are offering something of value to the journalist, and they should link to our asset for the story to make sense.

We use the line, “Please credit via a link to our campaign page”, as it makes our request clear without it being too forceful. It also saves us vast amounts of time having to reclaim links for articles where journalists have used our story and/or assets.

Main body

Now is the time to introduce the substance of your research, and to provide the journalist with all the information they need to write the article. There is no set rule to how much content to include, but you should include any relevant data that you think would be important in an article.

Writing clear headings with colour-coding can also be an efficient way of helping a journalist sift through the body of your email to find relevant information. Here is an example of a recent email we sent out for a campaign around AI Jobs:

 Top 5 outreach tips

  1. Write the subject line to match your target publication, using a tone of voice similar to that of the journalists at that news outlet.
  2. Keep the subject line simple and state the most newsworthy aspects of your pitch.
  3. Get to the point of the story as quickly as possible in the body of your email.
  4. Break up the content of the email so you are not overloading the journalist with information and blocks of text.
  5. Signpost headings and sub-headings within the email so a journalist can clearly see what you are offering.

My First Month in The SEO World

May has been an interesting month in my life. After a few months of job hunting, I was given the cool opportunity to become part of the Verve SEO team. I joined Verve Search at the beginning of May and officially became an SEO newbie.

Before I started here, my only encounter with SEO was a 4-month internship I did with a digital marketing agency, where I was mainly writing content and slowly getting introduced to SEO. For that reason, this opportunity at Verve Search has been a real, eye-opening, and captivating experience. I had the chance to meet amazing people who made the beginning of my career so much easier and smoother. I feel lucky to have such a cool mentor to guide me through this new path, and to be surrounded by this refreshing work environment.

In this article, I will summarise the main things I learned during my first month as an SEO Executive.

Lesson 1: Wasn’t SEO just spamming links around?

Before I started working on SEO, I had this idea that SEO basically meant throwing links around every blog, directory, or web page you could find, and by doing this, you would magically rank better. Well, that is not even close to being true!

The truth is, one of the first things I learned here was that backlink quality is essential for improving your rankings. Ideally, you should focus on ensuring your external links come from relevant, high-authority domains.

A high-quality backlink comes from a site that makes search engines trust your website to be one of the best results they can give the searcher to answer their query.

Lesson 2: White-Hat SEO is the way to go.

In SEO (like in most things in life), there is a proper way to do things, and a sketchy way to do things. White-Hat SEO refers to the cleanest and most honest way to improve a website’s visibility.

Why is this the better way? Well, because when you break the rules Google does not like it and tends to penalise your site, which can be very harmful and can take you a long time to recover from.

White-Hat means following Google’s guidelines to provide the searcher with the best results. On the other hand, Black-Hat SEO can be anything that you think might increase your rankings but does not follow Google’s guidelines: stuffing your content with similar keywords in the hope of ranking better in the SERP, for example. Google is too smart for this, mate! It probably won’t work!

Lesson 3: SEO is a world of constant change.

Google is constantly changing and developing new algorithms, and better ways to provide the best search engine service. They come up with algorithm updates and major changes all the time (and for some reason they name most of them after cute animals). These sudden updates call for SEOs to be on constant alert for news and tools that could harm or help our websites. We need to be ready to inform and act right away, and the best way to do this is to keep track of all the latest SEO updates and news and to carefully monitor your clients’ organic performance.

Lesson 4: SEO and content go together!

Content Marketing plays a huge role in SEO. Content marketing and SEO are partners in crime, they go together like avocados and toast! I see it as simple as this: there is simply no SEO without content, SEO needs content, and content needs SEO.

In my case, content has been pretty important since day one. My first main task when I started was to carry out extensive keyword research. This helped me understand how important the on-site content of a website is, and how much effort actually goes into it.

Lesson 5: Practice, practice, and practice.

These past few weeks I realised how important it is to learn by doing in this industry. There are so many tools to use for different purposes and analyses, and the best way to get your head around them is to jump straight into using them.

At the very beginning, the most frustrating part for me was trying to master every step I was doing without really having a full understanding of the theory behind it. Nonetheless, this is something that came with time, and it was definitely better that way. It was very satisfying to slowly learn what the data meant, while at the same time getting to experiment with the tools and materials available to me.

SEO is something you want to learn less by reading, and more by getting your hands into it. Of course, having the proper reading material at hand is always helpful, and learning the SEO theory from the start is just fine too, but in my experience, it is definitely faster and easier to put the theories to work, play around with the tools at your disposal, and experiment as much as you can.

And that’s a wrap!

4 Things You Never Knew About Sitemap Submissions

Whenever a sitemap changes, it’s important to notify Google and Bing of the change by pinging <searchengine_URL>/ping?sitemap=sitemap_url. Whilst these URLs are meant for bots, they do return an actual HTML page.
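Automating that notification takes only a few lines; here’s a minimal sketch in Python using the requests library, with a placeholder sitemap URL standing in for your own:

```python
import requests
from urllib.parse import quote

# Placeholder sitemap URL used for illustration only.
SITEMAP_URL = "https://www.example.com/sitemap.xml"

# The ping endpoints follow the pattern described above:
# <searchengine_URL>/ping?sitemap=sitemap_url
PING_ENDPOINTS = [
    "https://www.google.com/ping?sitemap=",
    "https://www.bing.com/ping?sitemap=",
]

for endpoint in PING_ENDPOINTS:
    # URL-encode the sitemap address before appending it as a parameter.
    response = requests.get(endpoint + quote(SITEMAP_URL, safe=""))
    print(endpoint, response.status_code)
```

When you look at Google’s responses to those pings, though, you’ll notice four interesting facts: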

1. Google is Tracking Views of the Page

It’s fair to say that 99.9% of page loads for these URLs are by automated systems that do not run JavaScript. It’s interesting, and surprising, then, that Google includes the old GA script within the code:

For reference, that UA-code appears to be a property within the same account as Google’s Search Console, but not part of the actual Google Search Console property (UA-1800-36).

2. Google.com still refers to Webmaster Tools

If you load up google.com/ping?sitemap=example.com you’ll find that the page’s title is:

Yet, if you load up any other English-language TLD for the same page (e.g. google.co.uk/ping?sitemap=example.com) you’ll see this:

3. Google shows a different response for different languages

If you load up non-English Google TLDs you start to see that Google’s taken the time to translate the text into the primary language that TLD targets. For example, here’s the response on google.fr:

and here’s the response on google.de:

Each language gets its own translation of the text… except for google.es:

I guess the google.es sitemap country-manager was out the day they wrote the translations! In any case, it’s surprising that they bothered to create all these translations for a page that, I would imagine, is very rarely seen by a human.
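If you’d like to repeat the comparison yourself, a quick sketch along these lines (Python with the requests library; the TLD list and sitemap URL are just examples) will print the page title each version returns:

```python
import re
import requests

# A handful of Google TLDs to compare; the sitemap URL is a throwaway example.
TLDS = ["com", "co.uk", "fr", "de", "es"]

for tld in TLDS:
    url = f"https://www.google.{tld}/ping?sitemap=http://example.com/sitemap.xml"
    html = requests.get(url).text
    # Pull out the <title> of the HTML page returned to the 'bot'.
    match = re.search(r"<title>(.*?)</title>", html, re.S | re.I)
    title = match.group(1).strip() if match else "no <title> found"
    print(f"google.{tld}: {title}")
```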

4. Google makes the weirdest grammar change

If you load up the .co.uk, .ie, .co.za or any ‘international’ English version of Google’s sitemap ping URL you’ll find this message:

(we’ve added the highlighting)

If, however, you load up the .com you receive this:

The ‘that’ in the second sentence disappears.

Why would Google do any of these things? Maybe it just doesn’t care about updating these. Maybe all of the international English-language versions share a single ‘international English’ text and, when someone last updated it, they forgot to update the .com version. Here’s the more interesting question, though. If the ping URL’s frontend is different for each Google TLD, then, does that mean the backend could be different – maybe feeding into their different indexes? Does which Google you ping make any difference? Should you be pinging your ‘local’ Google rather than just the .com?
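One way to start answering that is to watch your own access logs: if pings to different TLDs were handed to different crawling infrastructure, you might expect Googlebot to turn up from different IP addresses. Here’s a rough sketch for pulling those IPs out of a log, assuming Apache’s default ‘combined’ log format and a made-up log path:

```python
import re
from collections import Counter

# Assumed location of the access log; adjust for your own server.
LOG_PATH = "/var/log/apache2/access.log"

ip_counts = Counter()
with open(LOG_PATH) as log_file:
    for line in log_file:
        # Only look at requests claiming to be Googlebot.
        if "Googlebot" not in line:
            continue
        # In the combined log format the client IP is the first field.
        match = re.match(r"^(\S+)", line)
        if match:
            ip_counts[match.group(1)] += 1

# More than one IP range here would hint that pings to different TLDs
# are handled by different crawling infrastructure.
for ip, count in ip_counts.most_common():
    print(ip, count)
```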

We pinged our test site about 40 times from various TLDs to see, through our log files, whether Google visited from different IP addresses depending on which TLD we pinged. It didn’t. Next, we reached out to John Mueller to see what he had to say:

…and now you know.

What’s The Limit On Google’s Live Inspection Tool?

Last year Google launched the beta of the new Google Search Console, but when it first launched it was pretty empty. By June they had added one of the features I now use most often: the URL Inspection tool. It allows you to quickly see details of how Google is crawling, indexing and serving a page, request that Google pulls a ‘live’ version of the page, and request that it’s (re)indexed:

The Live Inspection Tool will soon replace the ‘fetch as Google’ functionality found in the old Search Console and so it’s worth considering how moving to the new version might limit us.

The old fetch and render used to be limited to 10 fetches a month – and had a clear label on it allowing the user to know exactly how many fetches they had remaining. This label disappeared in February last year, but the actual limit remained:

Since the Live Inspection Tool is far more about understanding and fixing problems with a page than the old ‘fetch as Google’ tool – which I, at least, mostly used to force a page to be indexed/re-crawled – it makes sense for the Live Inspection Tool to have a higher limit. Yet there’s no limit listed within the new tool. We turned to Google’s documentation and, honestly, it could be more helpful:

So, dear readers, we decided to put the Live Inspection Tool to the test with a methodology that can only be described as ‘basic’.

Methodology: We repeatedly mashed this button:

…until Google stopped showing us this:

We quickly sailed past 10 attempts without a problem, on to 20, then 30. At 40 we wondered if there really was no limit, but, just as we were about to lose hope, on the 50th try:

tl;dr: The daily limit for Live URL Inspection is 50 requests.

How is knowing this actually useful?

Basic

If you’re planning a domain migration, you can add to your migration plan a step to pick out your 50 most important URLs and manually request indexing of those pages on the day of the migration.

Intermediate

Taking that a step further, you could take the 100 most important pages and, once the redirects are in place, request indexing of 50 of the old URLs through the old domain’s Search Console property, to pick up the redirects, whilst requesting indexing of the remaining 50 through the new domain’s Search Console property to quickly get those pages into the index.

Advanced

This is the ‘let’s try to break it’ option.  50 URLs is nowhere near Bing’s 10k URLs a day, but what if you could actually end up with more than 10k indexed through this technique?

Remember that you can register multiple properties for the same site. As a result, there’s an interesting solution where you automatically register Search Console properties for each major folder on your site (up to Search Console’s limit of 400 in a single account) and then use the Live Inspection tool for 50 URLs per property – giving you up to 20k URLs a day – double Bing’s allowance! None of this would be particularly difficult using Selenium/Puppeteer; we’ve previously built out scripts to automatically mass-register Search Console properties for a client that was undergoing an HTTPS migration and had a couple of hundred properties they needed to move over, which went without a hitch. We didn’t use that script to mass request indexing, but, if you did, it could allow for a migration to occur extremely quickly. We don’t recommend doing this – I can’t imagine this is how Google wants you to use this tool, though equally I can’t think how they’d actively penalise you for doing it. Something, perhaps, to try out another day at your own risk. If you do, let me know how it works!
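The registration and the ‘request indexing’ clicks themselves would still need Selenium/Puppeteer, but the bookkeeping side is easy to sketch. Here’s a rough illustration (the folder-to-property mapping is deliberately naive and the URL list hypothetical) of how you might bucket URLs into folder-level properties and cap each at the 50-a-day limit:

```python
from collections import defaultdict
from urllib.parse import urlparse

DAILY_LIMIT_PER_PROPERTY = 50   # the limit we found above
MAX_PROPERTIES = 400            # Search Console's per-account property limit

def folder_property(url: str) -> str:
    """Naively map a URL to a folder-level property: scheme + host + first folder."""
    parsed = urlparse(url)
    segments = [s for s in parsed.path.split("/") if s]
    # Treat the first path segment as the folder only if the URL sits inside one.
    folder = segments[0] + "/" if len(segments) > 1 else ""
    return f"{parsed.scheme}://{parsed.netloc}/{folder}"

def todays_batches(urls):
    """Group URLs by folder property, capping each group at the daily limit."""
    buckets = defaultdict(list)
    for url in urls:
        buckets[folder_property(url)].append(url)
    if len(buckets) > MAX_PROPERTIES:
        raise ValueError("More folder properties than a single account allows")
    return {prop: batch[:DAILY_LIMIT_PER_PROPERTY] for prop, batch in buckets.items()}

# 400 properties x 50 inspections each = up to 20,000 URLs per day.
```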

Does Google Crawl With HTTP2 Yet?

HTTP2 is a major revision to the HTTP protocol, agreed in 2015. At this stage it has pretty much universal support across web browsers. As HTTP2 supports multiplexing, server push, binary framing, stream prioritisation and stateful header compression, it is, in almost all instances, faster than HTTP1.1, and so implementing it can provide a relatively ‘free’ speed boost to a site. But how does Google handle HTTP2? We thought we’d find out.
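As an aside, if you just want to check which protocol a given server will negotiate with an HTTP/2-capable client, a couple of lines of Python with the httpx library will tell you (httpx is simply our choice of client for illustration here, not part of the test below):

```python
import httpx  # pip install "httpx[http2]"

# Ask for HTTP/2 and see what the server actually negotiates.
with httpx.Client(http2=True) as client:
    response = client.get("https://www.example.com/")
    # Prints "HTTP/2" when negotiated, otherwise "HTTP/1.1".
    print(response.http_version)
```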

The History

As HTTP2 is backwards compatible – if a browser doesn’t support HTTP2, HTTP1.1 is used instead – Google could read pages on HTTP2 sites from the very first day the specification was published. Yet this backwards compatibility actually makes it difficult to tell whether Google is actually using HTTP2 or not. Mueller confirmed in 2016 that Googlebot wasn’t using HTTP2 yet:

A year later, in 2017, Bartosz asked the same question and found that Googlebot still wasn’t using HTTP2:

Two years later, much has changed: HTTP2 is now used by over a third of websites, and that figure is growing by about 10% year-on-year:

So, we thought we’d revisit the question. The setup was a little time consuming, but simple. We set up an Apache server with HTTPS and then HTTP2 support and made sure that Google could crawl the page normally:

Once we knew that was working, we edited the .htaccess file to block all http1.* traffic:

This time, when we requested Google recrawl the page, we received a crawl anomaly:

…and so, no, Googlebot still, somehow, does not support HTTP2. We wanted to see how Google would render the page as well, though. The assumption was that whilst Googlebot did not support HTTP2, WRS/Caffeine is based on Chrome 41, and Chrome 41 supports HTTP2, so WRS should too. As a result, we changed the .htaccess file to, instead, redirect all HTTP1.1 traffic to another test page:

We then used PageSpeed insights to see how Google would render the page:

You may just about notice that the preview in the bottom left is of a page with the headline ‘This page has been redirected’ – it loaded test.html rather than the homepage! So, for whatever reason, we must presume that Google has hobbled WRS so that it always pulls the HTTP1.1 version of a page, which, in 99.999999% of cases, would be identical to the HTTP2 version of the page.

Why does this matter?

This is interesting for two reasons:

1) It implies a very interesting way to cloak that I’ve not heard people talk about before. Internet Explorer has supported HTTP2 since 2013, whilst Firefox, Safari and Chrome have each had support since 2015. If you set certain content to only show for users with HTTP1.1 connections, as modern browsers all support HTTP2, effectively zero users would see it, but Google would. As with all cloaking, I don’t actually recommend this – and Google could add HTTP2 support at any time – but as ways of cloaking go, it would be difficult to detect as, effectively, nobody’s looking for it right now.

2) Due to HTTP2’s multiplexing abilities, there are several site speed recommendations that differ between HTTP1.1 and HTTP2, including, for example, spriting. Now that we know Google is not using HTTP2 even when your server supports it, optimising your page for speed on the assumption that users are on HTTP2 might actually slow down the speed at which Google (still using HTTP1.1) crawls the page. Whilst Google’s tight-lipped on what mechanism, exactly, they use to measure site speed, the page load speed that Googlebot itself experiences will, of course, go towards determining crawl budget. So if you really care about crawl budget and are about to move to HTTP2, you’ve got an interesting problem. This is simply one of those cases where what’s best for Google is not what’s best for the user. As such, there’s a reasonable case to be made that you shouldn’t prioritise, or potentially even bother implementing, any site speed changes that differ between HTTP1.1 and HTTP2 when moving to HTTP2 – at least until Google starts crawling with HTTP2 support. Alternatively, you could combine both approaches and change the way you’re delivering CSS, icons etc. based on the HTTP version supported by the requesting user, which is a lot of extra work, but technically optimal. That is, as we all know, the best kind of optimal.

Mostly, it’s just interesting though.