Robots Meta Tags | Lesson 9/34 | SEMrush Academy

You’ll gain an understanding of search crawlers and how to optimally budget for them.
Watch the full course for free: https://bit.ly/3gNNZdu

0:08 Robots Meta Tag
1:04 Noindex
1:09 Nofollow
1:22 Having multiple meta tags
3:25 Notranslate
3:32 Summary

✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹
You might find it useful:
Tune up your website’s internal linking with the Site Audit tool:
https://bit.ly/2XVxCmL
Understand how Google bots interact with your website by using the Log File Analyzer:
https://bit.ly/3cs0rfC

Learn how to use SEMrush Site Audit in our free course:
https://bit.ly/2Xsb3XT
✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹ ✹

A robots meta tag is a detailed, page-specific way to control how a particular URL should be indexed and presented to users in search results. Usually, it goes into the “head” section of the page, but the same directives can also be applied via the X-Robots-Tag HTTP response header.
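For illustration, both placements could look like this (the X-Robots-Tag line is a raw HTTP response header, not HTML, so it is shown here in a comment):

```html
<!-- In the <head> of the page: -->
<head>
  <meta name="robots" content="noindex">
</head>

<!-- The same directive sent as an HTTP response header,
     handy for non-HTML files such as PDFs:
     X-Robots-Tag: noindex -->
```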

The robots meta tag can either be applied using a global approach – one directive that is valid for all crawlers – or using a more granular approach, with a directive that is only valid for, say, Bingbot but not for Googlebot.
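A sketch of the two approaches – the crawler name goes into the tag’s name attribute:

```html
<!-- Global: one directive, valid for all crawlers -->
<meta name="robots" content="noindex">

<!-- Granular: valid for Bingbot only; Googlebot ignores this tag -->
<meta name="bingbot" content="noindex">
```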

The most commonly used directive is noindex, which essentially means: “Dear search engine, please do not display this URL in search results”.

It is also possible to combine directives, e.g. noindex and nofollow. Noindex again means that this URL will not show up in search results; nofollow means that search engines are not supposed to pass any link equity through the links going out from this specific URL. Keep in mind, though, that Google can still crawl those outgoing links.
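Combined directives go into a single comma-separated content attribute:

```html
<meta name="robots" content="noindex, nofollow">
```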

Having multiple robots meta tags is also possible, so you can set different directives for different user agents. This is helpful if you want to control the indexation behaviour of Googlebot-News differently from regular Googlebot for web or smartphone results.
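For example, to keep a page out of Google News results while leaving regular web results untouched, the two tags might look like this:

```html
<!-- Applies only to Google's News crawler -->
<meta name="googlebot-news" content="noindex">

<!-- Regular Googlebot (web and smartphone results) may still index -->
<meta name="googlebot" content="index, follow">
```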

From a practical standpoint, the robots meta tag is almost always a better choice than robots.txt for day-to-day usage, as it works far more precisely, at a per-URL level. Also, the robots meta tag does not cause a loss of external linking power: for URLs blocked by robots.txt, in contrast, the link juice pointing at them is essentially lost and not passed on. So robots meta tags do not cause a break in internal and external linking. Generally speaking, a proper internal link juice distribution is very hard to get right if lots of pages or even whole folders are blocked in robots.txt. Ultimately, though, the big benefit of the meta tag is that it reduces the set of indexed pages to only the relevant URLs.

From a more practical standpoint, you would use noindex especially for URLs with minimal content, for direct duplicates, or for low-value, low-quality entry pages that cause a bad user experience: these could be internal search results, category pages with very few items on them, or duplicated content (e.g. the print and regular versions of an article). Overall, we’re talking about low-value pages that shouldn’t serve as entry points for your users from search results.

There are also some less commonly known values: noarchive prevents search engines from showing a cached copy of the URL, and nosnippet prevents a snippet for this URL from showing up in search results. The latter is rarely useful in practice, because for regular websites you always want a snippet for your URL. You can also specify notranslate, which tells Google not to offer a translation of this page in search results.
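A sketch of these less common values as separate tags (they could equally be combined into one comma-separated content attribute):

```html
<meta name="robots" content="noarchive">   <!-- no cached copy link -->
<meta name="robots" content="nosnippet">   <!-- no text snippet in results -->
<meta name="robots" content="notranslate"> <!-- no translation offer -->
```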

In summary: the two most commonly used directives are noindex and nofollow. Noindex is actually the only one you really need, though, because using internal nofollow often causes more problems than it resolves. And if you don’t want to restrict indexing at all, you don’t need to include a robots meta tag: when no directives are present, Google simply treats the page as index, follow, so don’t waste time and resources implementing that explicitly.

To make you aware of pages with valuable content that are mistakenly blocked by a noindex directive, the SEMrush Site Audit offers an appropriate check, which we recommend using.

#TechnicalSEO #TechnicalSEOcourse #MetaRobots #SEMrushAcademy
