The meta robots tag is one of the essential SEO tags to audit on your pages. After all, if your page is tagged with the meta robots directive "noindex," for example, then no matter how valuable, complete, and practical your content is, it won't show up on Google.
The name sounds technical, but believe me: this tag is much simpler to understand and validate than it seems.
In this post, we will tell you a little more about the meta robots tag, how to use it, and the most common directives you can use to gain greater control over the indexing of your pages on Google. At the end, we will give you some tips for avoiding misuse of the tag and explain in which cases each directive applies.
What Is Meta Robots Tag?
Robots directives are simple pieces of code added to your website to tell robots like Googlebot how to crawl and index your content. The meta robots tag is one of these directives: an HTML tag added to the head of your pages, as you will see in the example further below. Besides it, there is another directive, robots.txt, a file placed at the site's root to communicate with search engine robots.
The difference between the two is that the first (the subject of this post) controls the indexing of your content on Google and other search engines, while the second controls crawling, that is, which of your site's content you allow robots to access. The two are often confused, since they have similar names and related purposes, but make no mistake: their functions are different.
The robots.txt file can keep Google from accessing a URL's content, but that does not mean the URL is blocked from appearing in the results: if there is a link to it somewhere on the site, for example, it can still be indexed "by accident." The purpose of the file is to keep Google from wasting time on useless pages of your site, avoid crawling issues, and let Google focus on what matters. The meta robots tag, in turn, directly controls how and whether your page appears in search results.
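To make the contrast concrete, here is a minimal robots.txt sketch (the paths are hypothetical examples):

User-agent: *
Disallow: /admin/
Disallow: /internal-search/

This only asks robots not to crawl those paths; it does not, by itself, guarantee they will stay out of the index.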
“How” vs “If”
What Do You Mean by “How” and “If”? Let’s find out.
How?
Through meta robots directives you can, for example, control the size at which Google displays your images in places like Google Discover. You can also block your pages from appearing in featured snippets, among various other possibilities.
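As a hedged illustration of this "how" (both directives are explained later in this post), a page could allow large image previews, or opt out of text snippets, with tags like these:

<meta name="robots" content="max-image-preview:large">
<meta name="robots" content="nosnippet">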
If
With the index or noindex directive (we will explain each one in more detail below), you can control whether or not Google can show your page on the SERP.
Explaining Meta Robots Names and Directives
Let’s break a standard meta robots code into parts so you can better understand how it works:
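For reference, a typical tag looks like this (the directive values here are just an example):

<meta name="robots" content="noindex, nofollow">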
The "name" field, although it usually appears as "robots," can be changed so that the tag targets specific robots in particular cases, such as:
name="Googlebot-Image"
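Spelled out as a complete tag, a hedged sketch targeting only Google's image crawler could look like this:

<meta name="googlebot-image" content="noindex">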
The "content" field is where the magic of the meta robots tag happens. You need to be familiar with each of the directives that can be added to it. Here are the main directives for controlling Google's robot:
Note: these directives are not mutually exclusive and can be combined by separating them with commas, for example "index, nofollow": in this case, we allow the page to be indexed, but we do not allow Google to follow its links.
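Written as a full tag, that combination looks like this:

<meta name="robots" content="index, nofollow">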
All
It is the default value: index, follow.
In other words, if the meta tag is left empty or absent, robots assume the page can be indexed and its links followed. So there is no need to use this:
<meta name="robots" content="all">
Noindex
The famous meta robots noindex. This is one of the most important directives, as it defines whether your page can be indexed on Google or not. With noindex, you block your page from appearing in the search results of Google and other search engines.
<meta name="robots" content="noindex">
Nofollow
It prevents robots from following (crawling) the links present on the page. So, for example, if we added this directive to this post, none of the links in it would be followed by Google when it crawls the page.
<meta name="robots" content="nofollow">
None
It combines noindex and nofollow (equivalent to adding "noindex, nofollow"). With this directive, Google will neither index the page nor follow its links.
<meta name="robots" content="none">
Noarchive
This directive prevents Google from displaying a cached link to the page in search results.
What is Cache?
It's a repository of information: in Google's case, copies of pages are saved in Google's repository so that users can access them quickly, even if the original site is down.
Caching can be implemented on your site independently of Google, so this directive is useful for sites that prefer to serve their own cached versions of pages.
<meta name="robots" content="noarchive">
Nosnippet
It tells Google not to display a text snippet or video preview for the page in search results. For example, it prevents Google from displaying your pages' meta descriptions in the results.
<meta name="robots" content="nosnippet">
Unavailable_After: [date/time]
Allows you to set a date/time after which the page should no longer be indexed on Google.
After the set date, the page can no longer appear in search results (useful for landing pages of temporary launches or promotions, for example):
<meta name="robots" content="unavailable_after: 2021-06-05">
How to Validate the Meta Robots Tag
Here are a few ways to validate the meta robots tag on a page.
- Navigate to the page: go to the page you want to validate, press CTRL + U to view the source code, then CTRL + F and search for "robots" to see if and how the tag appears on the page.
- Use extensions: you can use an SEO extension for Chrome, such as SEO META in 1 CLICK.
- Validate at scale: to check many pages at once, the ideal is to use a crawler such as Screaming Frog.
How to Implement the Robots Meta Tag?
It is a very simple piece of HTML markup to add to the head of the pages, as in the example:
<head>
<title>Page title</title>
<meta name="description" content="Page description" />
<meta name="robots" content="follow, index, max-snippet:-1, max-video-preview:-1, max-image-preview:large" />
<link rel="canonical" href="url" />
</head>
To implement it, add the tag following the pattern above to all the pages you want indexed in the way Google should present them to users. But how do you do it? Honestly, it depends on your CMS: many already include the tag by default, others do not. It is up to you to check whether it is there and to work out with a developer (or on your own) how to add the markup to the pages' HTML.
What is X-Robots-Tag?
The meta robots tag is not the only way to control indexing. There is another mechanism, the X-Robots-Tag, which has the same function and supports the same directives. The only difference is the form of implementation: while the meta tag is markup in the page's HTML code, the X-Robots-Tag is sent in the HTTP response from the website's server (much like a 301 redirect is).
It is useful in specific cases, mainly to block the indexing of non-HTML files such as PDFs or images.
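For illustration, the directive travels as a response header such as:

X-Robots-Tag: noindex

As a minimal sketch (assuming an nginx server; adjust the pattern to your setup), it could be applied to all PDFs like this:

location ~* \.pdf$ {
  add_header X-Robots-Tag "noindex";
}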
To validate the X-Robots-Tag, the process is slightly different, as you need to look at the page's HTTP response headers.
You can use Screaming Frog to validate it at scale. For a quick check on a single page, you can use a Chrome extension.
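A quick hedged alternative is the command line (assuming curl is installed): fetch only the headers of a URL, the address here being a placeholder, and look for the tag in the output.

curl -I https://example.com/file.pdf

If the directive is set, the response will include a line such as X-Robots-Tag: noindex.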
7 Use Cases of Meta Robots Tag
Learn how to use this tag with examples of the most common cases where it is useful:
- Don't index pages with no value to the user: if it doesn't make much sense for a page to appear to users in search results, you can add the noindex tag.
- Test pages: you can use meta robots noindex to prevent pages of a new site that is still being tested from being indexed in place of your old ones and from Google treating them as duplicate content.
- Login and admin pages: block these from appearing on Google.
- Checkout and thank-you pages: it doesn't make much sense for these to appear on Google, as it would confuse users.
- Launch and promotion pages: when you create pages for a new product or promotion that you don't want users to see yet, the noindex tag is a good solution.
- Internal search pages: this one is VERY important. Internal search result pages are usually not good candidates for indexing on Google, because they tend to be thin on content and considered irrelevant. In addition, you may unintentionally start appearing on Google for unwanted searches if you don't block these pages from being indexed. They are useful for users on your site, but indexing them doesn't make sense.
- To appear on Google Discover: we have already explained this, but it's always good to reinforce. Google Discover is a very interesting source of SEO traffic. To increase your potential there, it is ideal to add the max-image-preview:large directive, allowing large images to be displayed, as in the sketch below.
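A minimal sketch of that Discover-friendly tag (index and follow are just the defaults spelled out):

<meta name="robots" content="index, follow, max-image-preview:large">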
Meta Robots Tag Guide in Short
Understanding the use of the meta robots tag is not that difficult. The description of the directives and their uses above should be enough for you to avoid SEO mistakes that could harm you. Want more info on robots and robots.txt? Take a look at our other article!
Frequently Asked Questions About the Meta Robots Tag
What does the meta robots tag control?
It directly controls how and whether your page appears in search results.
Can you block your pages from appearing in featured snippets?
Yes, you can block your pages from appearing in featured snippets.
What does the noarchive directive do?
It prevents Google from displaying a cached link in search results.
Can caching be implemented independently of Google?
Yes, you can implement caching on your site independently of Google.
Can you block the indexing of images?
Yes, you can block the indexing of images with the X-Robots-Tag.