SEO, in its most basic sense, relies on one thing above all others: search engine spiders crawling and indexing your site.
But almost every site is going to have pages that you don’t want included in this exploration.
In a best-case scenario, these pages simply aren’t doing anything to actively drive traffic to your site, and in a worst-case, they could be diverting traffic away from more important pages.
Fortunately, Google allows webmasters to tell search engine bots which pages and content to crawl and which to ignore. There are several ways to do this, the most common being the use of a robots.txt file or the meta robots tag.
We have an excellent and in-depth explanation of the ins and outs of robots.txt, which you should definitely check out.
But in high-level terms, it’s a plain text file that lives in your site’s root and follows the Robots Exclusion Protocol (REP).
Robots.txt provides crawlers with instructions about the site as a whole, while meta robots tags contain instructions for specific pages.
Some meta robots tags you might use include:

- index – tells search engines to add the page to their index.
- noindex – tells them not to add a page to the index or show it in search results.
- follow – instructs search engines to follow the links on a page.
- nofollow – tells them not to follow the links on a page.

And there’s a whole host of others.
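As an illustration, a meta robots tag combining two of these directives might look like the following (a generic example, not tied to any particular site):

```html
<!-- Placed in the <head> of a page: keep this page out of the index,
     but do follow the links it contains -->
<meta name="robots" content="noindex, follow">
```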
Both robots.txt and meta robots tags are useful tools to keep in your toolbox, but there’s also another way to instruct search engine bots to noindex or nofollow: the X-Robots-Tag.
What Is The X-Robots-Tag?
The X-Robots-Tag is another way for you to control how your webpages are crawled and indexed by spiders. Sent as part of the HTTP header response for a URL, it can control indexing for an entire page, as well as for specific elements on that page.
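For example, a server response carrying the tag might look something like this (the headers and values shown are illustrative):

```http
HTTP/1.1 200 OK
Content-Type: application/pdf
X-Robots-Tag: noindex, nofollow
```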
And whereas using meta robots tags is fairly straightforward, the X-Robots-Tag is a bit more complicated.
But this, of course, raises the question:
When Should You Use The X-Robots-Tag?
According to Google, “Any directive that can be used in a robots meta tag can also be specified as an X-Robots-Tag.”
While you can set robots.txt-related directives in the headers of an HTTP response with both the meta robots tag and the X-Robots-Tag, there are certain situations where you would want to use the X-Robots-Tag, the two most common being when:
- You want to control how your non-HTML files are crawled and indexed.
- You want to serve directives site-wide instead of on a page level.
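To sketch the site-wide case: in Nginx, for instance, a single add_header directive in the server block applies the tag to every response. This is a minimal example, and the hostname is hypothetical:

```nginx
server {
    listen 80;
    server_name staging.example.com;  # hypothetical staging host

    # Keep the entire staging site out of search indexes
    add_header X-Robots-Tag "noindex, nofollow";
}
```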
For example, if you want to block a specific image or video from being crawled, the HTTP response method makes this easy.
The X-Robots-Tag header is also useful because it allows you to combine multiple tags within an HTTP response, or use a comma-separated list of directives to specify instructions.
Maybe you don’t want a certain page to be cached and want it to be unavailable after a specific date. You can use a combination of the “noarchive” and “unavailable_after” tags to instruct search engine bots to follow these directions.
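In an Apache configuration or .htaccess file, that combination might be sketched like this (the file pattern and date are purely illustrative, and mod_headers must be enabled):

```apache
<Files ~ "\.pdf$">
  # Don't cache this file, and drop it from results after the given date
  Header set X-Robots-Tag "noarchive, unavailable_after: 25 Jun 2025 15:00:00 GMT"
</Files>
```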
Essentially, the power of the X-Robots-Tag is that it is much more flexible than the meta robots tag.
The advantage of using an X-Robots-Tag with HTTP responses is that it allows you to use regular expressions to apply crawl directives to non-HTML files, as well as apply parameters on a larger, global level.
To help you understand the difference between these directives, it’s useful to classify them by type. That is, are they crawler directives or indexer directives?
Here’s a handy cheat sheet to explain:
Crawler directives:

- Robots.txt – uses the user-agent, allow, disallow, and sitemap directives to specify where on-site search engine bots are and are not allowed to crawl.

Indexer directives:

- Meta robots tag – allows you to specify and prevent search engines from showing particular pages on a site in search results.
- Nofollow – allows you to specify links that should not pass on authority or PageRank.
- X-Robots-Tag – allows you to control how specified file types are indexed.
Where Do You Put The X-Robots-Tag?
Let’s say you want to block certain file types. An ideal approach would be to add the X-Robots-Tag to an Apache configuration or a .htaccess file.
The X-Robots-Tag can be added to a site’s HTTP responses in an Apache server configuration via the .htaccess file.
Real-World Examples And Uses Of The X-Robots-Tag
So that sounds great in theory, but what does it look like in the real world? Let’s take a look.
Let’s say we wanted search engines not to index .pdf file types. This configuration on Apache servers would look something like the below:
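A sketch of that configuration, assuming Apache’s mod_headers module is enabled:

```apache
<Files ~ "\.pdf$">
  # Tell search engines not to index or follow links in any PDF file
  Header set X-Robots-Tag "noindex, nofollow"
</Files>
```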
In Nginx, it would look like the below:

```nginx
location ~* \.pdf$ {
  add_header X-Robots-Tag "noindex, nofollow";
}
```
Now, let’s look at a different scenario. Let’s say we want to use the X-Robots-Tag to block image files, such as .jpg, .gif, .png, etc., from being indexed. You could do this with an X-Robots-Tag that would look like the below:
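One way to sketch that on an Apache server, again assuming mod_headers is available (the extension list is illustrative and can be adjusted):

```apache
<Files ~ "\.(png|jpe?g|gif)$">
  # Keep image files out of search engine indexes
  Header set X-Robots-Tag "noindex"
</Files>
```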
Please note that understanding how these directives work and the impact they have on one another is crucial.
For example, what happens if both the X-Robots-Tag and a meta robots tag are present when crawler bots discover a URL?
If that URL is blocked by robots.txt, then certain indexing and serving directives cannot be discovered and will not be followed.
If directives are to be followed, then the URLs containing them cannot be disallowed from crawling.
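In other words, a setup like the following defeats itself, because crawlers blocked by robots.txt never fetch the files and therefore never see the header (the path is hypothetical):

```
# robots.txt — blocks crawling of everything under /reports/
User-agent: *
Disallow: /reports/

# An X-Robots-Tag: noindex header set on /reports/*.pdf will
# never be seen, because those files are never fetched.
```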
Check For An X-Robots-Tag
There are a few different methods you can use to check for an X-Robots-Tag on a site.
The easiest way to check is to install a browser extension that will show you X-Robots-Tag information about the URL.
Screenshot of Robots Exclusion Checker, December 2022
Another plugin you can use to determine whether an X-Robots-Tag is being used, for example, is the Web Developer plugin.
By clicking on the plugin in your browser and navigating to “View Response Headers,” you can see the various HTTP headers being used.
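You can also check from the command line with curl. The sketch below runs the filtering step against a saved sample response so it is self-contained; in practice you would pipe live `curl -sI` output, and the URL and values shown are illustrative:

```shell
# In practice: headers=$(curl -sI "https://example.com/file.pdf" | tr -d '\r')
# Here we use a sample response so the snippet stands alone.
headers='HTTP/1.1 200 OK
Content-Type: application/pdf
X-Robots-Tag: noindex, nofollow'

# Print any X-Robots-Tag lines, case-insensitively
echo "$headers" | grep -i '^x-robots-tag:'
```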
Another method that can be used at scale to pinpoint issues on websites with millions of pages is Screaming Frog. After running a site through Screaming Frog, you can navigate to the “X-Robots-Tag” column.
This will show you which sections of the site are using the tag, along with which specific directives.
Screenshot of Screaming Frog Report. X-Robots-Tag, December 2022

Using X-Robots-Tags On Your Site

Understanding and controlling how search engines interact with your website is the cornerstone of search engine optimization. And the X-Robots-Tag is a powerful tool you can use to do just that.

Just be aware: It’s not without its risks. It is very easy to make a mistake and deindex your entire site.

That said, if you’re reading this piece, you’re probably not an SEO beginner. So long as you use it wisely, take your time, and check your work, you’ll find the X-Robots-Tag to be a useful addition to your arsenal.

Featured Image: Song_about_summer/