Everything You Need To Know About The X-Robots-Tag HTTP Header

Search engine optimization, in its most basic sense, relies upon one thing above all others: search engine spiders crawling and indexing your site.

But nearly every website has pages that you don't want included in this exploration.

For example, do you really want your privacy policy or internal search pages showing up in Google results?

In a best-case scenario, these are doing nothing to drive traffic to your site actively, and in a worst-case, they could be diverting traffic away from more important pages.

Fortunately, Google allows webmasters to tell search engine bots what pages and content to crawl and what to ignore. There are several ways to do this, the most common being the use of a robots.txt file or the meta robots tag.

We have an excellent and detailed explanation of the ins and outs of robots.txt, which you should definitely read.

But in high-level terms, it's a plain text file that lives in your website's root and follows the Robots Exclusion Protocol (REP).

Robots.txt provides crawlers with instructions about the site as a whole, while meta robots tags carry directions for specific pages.

Some meta robots tags you might employ include:

  • Index: tells search engines to add the page to their index.
  • Noindex: tells them not to add a page to the index or include it in search results.
  • Follow: instructs a search engine to follow the links on a page.
  • Nofollow: tells it not to follow links.

And there is a whole host of others.
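For reference, these directives live in a meta tag inside the page's <head>. A minimal sketch (the directive combination is just an example):

```html
<head>
  <!-- Keep this page out of the index, but still follow its links -->
  <meta name="robots" content="noindex, follow">
</head>
```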

Both robots.txt and meta robots tags are useful tools to keep in your toolbox, but there's also another way to instruct search engine bots to noindex or nofollow: the X-Robots-Tag.

What Is The X-Robots-Tag?

The X-Robots-Tag is another way for you to control how your webpages are crawled and indexed by spiders. Sent as part of the HTTP header response for a URL, it controls indexing for an entire page, as well as for specific elements on that page.

And whereas using meta robots tags is fairly straightforward, the X-Robots-Tag is a bit more complicated.

But this, of course, raises the question:

When Should You Use The X-Robots-Tag?

According to Google, "Any directive that can be used in a robots meta tag can also be specified as an X-Robots-Tag."

While you can set indexing directives with both the meta robots tag and the X-Robots-Tag, there are certain scenarios where you would want to use the X-Robots-Tag, the two most common being when:

  • You want to control how your non-HTML files are being crawled and indexed.
  • You want to serve directives site-wide instead of on a page level.

For example, if you want to block a particular image or video from being crawled, the HTTP response approach makes this easy.

The X-Robots-Tag header is also useful because it allows you to combine multiple directives within a single HTTP response, specified as a comma-separated list.

Maybe you don't want a certain page to be cached, and you also want it to be unavailable after a certain date. You can use a combination of the "noarchive" and "unavailable_after" directives to instruct search engine bots to follow these instructions.
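On an Apache server, for instance, that combination could be set in .htaccess along these lines (the filename and date here are placeholders, not part of any real configuration):

```apache
# Hypothetical rule: don't cache this page, and drop it from
# results after the given date (RFC 850 date format)
<Files "sale-page.html">
  Header set X-Robots-Tag "noarchive, unavailable_after: 25 Jun 2023 15:00:00 PST"
</Files>
```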

Essentially, the power of the X-Robots-Tag is that it is much more flexible than the meta robots tag.

The advantage of using an X-Robots-Tag with HTTP responses is that it allows you to use regular expressions to apply crawl directives to non-HTML files, as well as apply parameters on a larger, global level.
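As a sketch of that regex flexibility, a single FilesMatch block in an Apache configuration can cover several file extensions at once (the extension list here is just an example):

```apache
# Example only: noindex every Word and Excel file on the site
<FilesMatch "\.(docx?|xlsx?)$">
  Header set X-Robots-Tag "noindex"
</FilesMatch>
```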

To help you understand the difference between these directives, it's helpful to classify them by type. That is, are they crawler directives or indexer directives?

Here's a useful cheat sheet to reference:

Crawler Directives

  • Robots.txt: uses the user agent, allow, disallow, and sitemap directives to specify where on-site search engine bots are allowed to crawl and where they are not.

Indexer Directives

  • Meta robots tag: allows you to specify and prevent search engines from showing particular pages on a site in search results.
  • Nofollow: allows you to specify links that should not pass on authority or PageRank.
  • X-Robots-Tag: allows you to control how specified file types are indexed.

Where Do You Put The X-Robots-Tag?

Let's say you want to block specific file types. An ideal approach would be to add the X-Robots-Tag to an Apache configuration or a .htaccess file.

The X-Robots-Tag can be added to a site's HTTP responses in an Apache server configuration via the .htaccess file.

Real-World Examples And Uses Of The X-Robots-Tag

So that sounds great in theory, but what does it look like in the real world? Let's take a look.

Let's say we wanted search engines not to index .pdf file types. This configuration on Apache servers would look something like the below:

<Files ~ "\.pdf$">
  Header set X-Robots-Tag "noindex, nofollow"
</Files>

In Nginx, it would look like the below:

location ~* \.pdf$ {
  add_header X-Robots-Tag "noindex, nofollow";
}

Now, let's look at a different scenario. Let's say we want to use the X-Robots-Tag to block image files, such as .jpg, .gif, .png, etc., from being indexed. You could do this with an X-Robots-Tag that would look like the below:

<Files ~ "\.(png|jpe?g|gif)$">
  Header set X-Robots-Tag "noindex"
</Files>
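The equivalent on Nginx would be an add_header line inside a regex location block, along these lines:

```nginx
# Example: noindex common image formats on an Nginx server
location ~* \.(png|jpe?g|gif)$ {
  add_header X-Robots-Tag "noindex";
}
```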

Please note that understanding how these directives work, and the impact they have on one another, is crucial.

For example, what happens if both the X-Robots-Tag and a meta robots tag are present when crawler bots discover a URL?

If that URL is blocked in robots.txt, then its indexing and serving directives cannot be discovered and will not be followed.

If directives are to be followed, then the URLs containing them cannot be disallowed from crawling.
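In other words, a robots.txt rule like the following (the path is purely illustrative) would stop crawlers from ever fetching the files in that directory, so any noindex X-Robots-Tag on them would never be seen, and already-indexed URLs could stay in the index:

```
User-agent: *
Disallow: /downloads/
```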

Check For An X-Robots-Tag

There are a few different methods you can use to check for an X-Robots-Tag on a site.

The easiest way to check is to install a browser extension that will show you X-Robots-Tag information about the URL.

Screenshot of Robots Exclusion Checker, December 2022

Another plugin you can use to determine whether an X-Robots-Tag is being used is the Web Developer plugin.

By clicking on the plugin in your browser and navigating to "View Response Headers," you can see the various HTTP headers being used.
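You can also confirm the same thing programmatically by fetching a URL and reading its response headers. A minimal Python sketch (using a throwaway local server in place of a real site, so it runs anywhere):

```python
import http.server
import threading
import urllib.request

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # Serve the page with an X-Robots-Tag, as an Apache or
        # Nginx configuration would.
        self.send_header("X-Robots-Tag", "noindex, nofollow")
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<html><body>demo</body></html>")

    def log_message(self, *args):
        pass  # keep the demo quiet

# Stand-in for a real site; port 0 picks any free port
server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/page.html"
with urllib.request.urlopen(url) as resp:
    tag = resp.headers.get("X-Robots-Tag")

print(tag)  # the directives a crawler would see for this URL
server.shutdown()
```

Against a live site, you would simply point urlopen (or curl) at the real URL and inspect the same header.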

Another method that can be used to scale this check, in order to pinpoint issues on websites with a million pages, is Screaming Frog.

After running a site through Screaming Frog, you can navigate to the "X-Robots-Tag" column.

This will show you which sections of the site are using the tag, along with which specific directives.

Screenshot of Screaming Frog Report, X-Robots-Tag, December 2022

Using X-Robots-Tags On Your Website

Understanding and controlling how search engines interact with your website is the cornerstone of search engine optimization. And the X-Robots-Tag is a powerful tool you can use to do just that.

Just be aware: It's not without its risks. It is very easy to make a mistake and deindex your entire site.

That said, if you're reading this piece, you're probably not an SEO beginner. So long as you use it wisely, take your time, and check your work, you'll find the X-Robots-Tag to be a useful addition to your arsenal.