A robots.txt file tells search engine crawlers which URLs on your site they can access. It is used mainly to avoid overloading your site with requests; it is not a mechanism for keeping a web page out of Google search results. To keep a page out of Google search results, use the noindex meta tag or protect the page with a password.
If you use a CMS such as Wix or Blogger, you may not need to edit the robots.txt file directly (or may not be able to). Instead, your CMS may provide a search settings page or another mechanism to tell search engines whether your page should be crawled.
If you want to hide one of your pages from search engines, or make it visible to them, look for instructions on changing page visibility in search engines in your CMS (for example, search for "Wix hide page from search engines").
The robots.txt file is used primarily to manage search engine crawler traffic to your site, and usually to keep a file out of Google, depending on the file type:
You can use a robots.txt file for web pages (HTML, PDF, or other non-media formats that Google can read) to manage crawl traffic if you think your server will be overwhelmed by requests from Google's crawler, or to avoid crawling unimportant or similar pages on your site.
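As a minimal sketch of such a file, the following robots.txt keeps Google's crawler out of one large, low-value section while leaving the rest of the site open to all crawlers (the paths here are hypothetical examples):

```
# Keep Googlebot out of an auto-generated section (hypothetical path)
User-agent: Googlebot
Disallow: /calendar/

# All other crawlers may access the whole site
User-agent: *
Allow: /
```

Each `User-agent` group applies to the named crawler; `Disallow` and `Allow` rules are path prefixes measured from the site root.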
Warning: Do not use a robots.txt file as a way to hide a web page (including PDFs and other text formats supported by Google) from Google search results.
If other pages link to your page with descriptive text, Google can index the URL without visiting the page. If you want to exclude the page from search results completely, use another method, such as password protection or the noindex meta tag.
If your web page is blocked by a robots.txt file, its URL can still appear in search results, but without a description. Image files, videos, PDFs, and other files embedded in the blocked page will also not be crawled, unless they are referenced by other pages that are allowed for crawling. If you see this kind of search result for your page and want to fix it, remove the robots.txt rule that blocks the page. If you want to hide the page from Google search completely, use another method.
You can use a robots.txt file to manage crawl traffic and to prevent image, video, and audio files from appearing in Google search results. This will not prevent other pages or users from linking to your media files.
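For example, a robots.txt file can address Google's media crawlers by name to keep media files out of image and video results (the directory paths below are hypothetical):

```
# Keep images in this directory out of Google Images (hypothetical path)
User-agent: Googlebot-Image
Disallow: /images/

# Keep videos in this directory out of Google Video results (hypothetical path)
User-agent: Googlebot-Video
Disallow: /videos/
```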
You can use a robots.txt file to block resource files, such as unimportant images, scripts, or style files, if you think that pages will not be significantly affected by the loss of these resources. However, if the absence of these resources makes your pages harder for Google's crawler to understand, do not block them; otherwise Google will not be able to correctly analyze the pages that depend on them.
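A sketch of such a rule, assuming hypothetical file paths, might look like this; note that resources required to render the page (main stylesheets, layout scripts) should not be blocked this way:

```
# Block crawling of resource files that do not affect how pages render
# (hypothetical paths; do NOT block resources the page needs to render)
User-agent: *
Disallow: /assets/analytics.js
Disallow: /assets/print.css
```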
Before you create or edit a robots.txt file, you should understand the limitations of this URL-blocking method. Depending on your goals and situation, you may want to consider other mechanisms to make sure your URLs cannot be found on the web.
Note: Combining several crawling and indexing rules can cause conflicts between them. Learn how to combine crawling rules with indexing and serving rules.
If you decide that you need a robots.txt file, learn how to create one. If the file already exists, learn how to update it.
The robots.txt file must be located at the root of your site. Learn how to create a robots.txt file, see examples, and study the robots.txt rules.
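To illustrate the root-location requirement with a hypothetical domain: crawlers only look for the file at the top level of the host, so a copy placed in a subdirectory has no effect.

```
https://www.example.com/robots.txt        <- valid: at the site root
https://www.example.com/pages/robots.txt  <- ignored: not at the root
```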
Using the robots.txt report, you can easily check whether Google can process your robots.txt files. Follow these steps to submit updated robots.txt files to Google.
Learn the details of the various robots.txt rules and how Google interprets the robots.txt specification.
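One practical way to sanity-check how rules in a robots.txt file apply to specific URLs is Python's standard-library parser. This is a minimal sketch with made-up rules and URLs; note that Python's parser applies the first matching rule in order, while Google uses the most specific (longest) matching path, so the `Allow` line is listed first here to make both interpretations agree.

```python
# Check which URLs a robots.txt file permits, using Python's standard library.
# The rules and URLs below are hypothetical examples.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: Googlebot
Allow: /private/public-page.html
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# Blocked: the URL falls under the Disallow: /private/ prefix.
print(parser.can_fetch("Googlebot", "https://example.com/private/secret.html"))  # False

# Allowed: the more specific Allow rule matches this exact page.
print(parser.can_fetch("Googlebot", "https://example.com/private/public-page.html"))  # True
```

This only checks rule matching; it does not tell you whether a blocked URL might still be indexed via links from other pages, as described above.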
If you have questions about the robots.txt file for your site, Google search, or other aspects of SEO, you can contact the SEO company Seo.computer by email at info@seo.computer or via WhatsApp: +79202044461.