The noindex attribute lets you block the indexing of pages on your site by search engines such as Google. It can be set either through a <meta> tag in the page code or through an HTTP response header, so that the pages do not appear in Google search results. This is useful when you need to hide content from search engines without blocking it for users.
The noindex rule tells search engines such as Google that a page should not be indexed or shown in search results. Googlebot reads this rule from the page code or the HTTP response and excludes the page from its index. However, for the rule to work, the page must be accessible to the crawler; otherwise the page may still appear in search results if other pages link to it.
There are two ways to implement noindex: through a <meta> tag and through an HTTP response header. Both work the same way, and the choice depends on what is more convenient for your site. Keep in mind that Google does not support a noindex directive in robots.txt.
The <meta> tag for your site in Google

To exclude a page of your site from indexing by search engines, add the following <meta> tag to the <head> section of the page:

<meta name="robots" content="noindex">

If you need to block indexing only for Googlebot, use this tag instead:

<meta name="googlebot" content="noindex">

Note that different search engines may interpret the noindex rule differently, so your page may still appear in the results of other search engines.
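As an illustration, here is a minimal Python sketch (standard library only; the names `RobotsMetaParser` and `has_noindex` are ours for this example, not part of any official tool) showing how a crawler might detect a noindex directive in a page's <meta> tags:

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects directives from <meta name="robots"> and <meta name="googlebot"> tags."""
    def __init__(self):
        super().__init__()
        self.directives = set()

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        if attrs.get("name", "").lower() in ("robots", "googlebot"):
            content = attrs.get("content") or ""
            # Directives may be comma-separated, e.g. "noindex, nofollow".
            self.directives.update(d.strip().lower() for d in content.split(","))

def has_noindex(html: str) -> bool:
    parser = RobotsMetaParser()
    parser.feed(html)
    return "noindex" in parser.directives

page = '<html><head><meta name="robots" content="noindex"></head><body></body></html>'
print(has_noindex(page))  # True
```

Real crawlers apply additional rules (for example, X-Robots-Tag headers take part too), but this shows the basic meta-tag check.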
Besides the <meta> tag, you can set the X-Robots-Tag directive in an HTTP response header. This lets you control indexing even for non-HTML resources such as PDF files, videos, or images. An example HTTP response with the X-Robots-Tag directive:

HTTP/1.1 200 OK
X-Robots-Tag: noindex

This also lets you block indexing of other file types, such as images hosted on your site.
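How the header gets attached depends on your server; as one possible sketch, here is a small Python standard-library HTTP handler that adds X-Robots-Tag: noindex for PDF files (the "noindex all PDFs" policy and the names `should_noindex` and `NoindexHandler` are assumptions for this example):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def should_noindex(path: str) -> bool:
    """Decide whether a response should carry X-Robots-Tag: noindex.
    Assumed policy for this sketch: keep all PDF files out of the index."""
    return path.lower().endswith(".pdf")

class NoindexHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        if should_noindex(self.path):
            # Non-HTML resources cannot carry a <meta> tag,
            # so the directive goes into the response header.
            self.send_header("X-Robots-Tag", "noindex")
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"ok")

# To serve locally:
# HTTPServer(("localhost", 8000), NoindexHandler).serve_forever()
```

In production you would normally configure this in Apache (`Header set X-Robots-Tag "noindex"`) or nginx (`add_header X-Robots-Tag "noindex";`) rather than in application code.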
If your page still appears in search results despite the noindex rule, Googlebot may simply not have recrawled the page since you made the change. To speed up the process, request a recrawl through the URL Inspection tool in Google Search Console.
Another reason a page may not be removed from the index is that the robots.txt file blocks Googlebot from accessing it. In that case, edit robots.txt and make sure access to the page is allowed.
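You can check this condition offline with Python's standard library: if the crawler cannot fetch the page, it will never see the noindex rule. The robots.txt contents and the example.com URLs below are hypothetical:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt contents for this example.
robots_txt = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A blocked page: Googlebot cannot crawl it, so a noindex
# tag on that page would never be read.
print(parser.can_fetch("Googlebot", "https://example.com/private/page.html"))  # False

# An allowed page: Googlebot can crawl it and will see the noindex rule.
print(parser.can_fetch("Googlebot", "https://example.com/public/page.html"))   # True
```

If `can_fetch` returns False for a page you want de-indexed via noindex, remove the corresponding Disallow rule first.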
To check whether you have set the noindex rule correctly on a page, use the URL Inspection tool in Google Search Console. It shows which tags and headers Googlebot received when crawling the page of your site.
If you have questions about indexing and search engine optimization for your site, you can contact the SEO company "SEO.COMPUTER" for consultation and assistance. Write to info@seo.computer or reach us on WhatsApp at +79202044461.