This guide helps you identify and fix JavaScript problems that may be blocking your page, or specific content on pages that use JavaScript, from showing up in Google Search. While Google Search does run JavaScript, there are some differences and limitations that you need to account for when designing your pages and applications for how crawlers access and render your content.
Googlebot is designed to be a good citizen of the web. Crawling is its main priority, while making sure it doesn't degrade the experience of users visiting the site. Googlebot and its Web Rendering Service (WRS) component continuously analyze and identify resources that don't contribute to essential page content, and may not fetch such resources. For example, reporting and error requests that aren't related to essential page content, and other similar kinds of requests, may not be used to extract essential page content. Client-side analytics may therefore not provide a full or accurate picture of Googlebot and WRS activity on your site. Use the Crawl Stats report in Google Search Console to monitor Googlebot and WRS activity on your site.
If you suspect that JavaScript problems might be blocking your page, or content on pages that use JavaScript, from showing up in Google Search, follow the steps below. If you're not sure whether JavaScript is the main cause, follow our general debugging guide to determine the specific problem.
We also recommend collecting and auditing the JavaScript errors encountered by users, including Googlebot, on your site to identify potential issues that may affect how content renders. Here's an example that logs JavaScript errors caught in the page's global onerror handler:
window.addEventListener('error', function(e) {
  var errorText = [
    e.message,
    'URL: ' + e.filename,
    'Line: ' + e.lineno + ', Column: ' + e.colno,
    'Stack: ' + (e.error && e.error.stack || '(no stack trace)')
  ].join('\n');

  // Example: log errors into a <pre> element on the page for debugging.
  var DOM_ID = 'rendering-debug-pre';
  if (!document.getElementById(DOM_ID)) {
    var log = document.createElement('pre');
    log.id = DOM_ID;
    log.style.whiteSpace = 'pre-wrap';
    log.textContent = errorText;
    if (!document.body) document.body = document.createElement('body');
    document.body.insertBefore(log, document.body.firstChild);
  } else {
    document.getElementById(DOM_ID).textContent += '\n' + errorText;
  }

  // Example: report errors back to the server for collection.
  var client = new XMLHttpRequest();
  client.open('POST', 'https://example.com/logError');
  client.setRequestHeader('Content-Type', 'text/plain;charset=UTF-8');
  client.send(errorText);
});
When SPAs use client-side JavaScript to handle errors, they often report a 200 HTTP status code instead of the appropriate error status code. This can lead to error pages being indexed and possibly shown in search results. Preventing soft 404 errors in single-page applications (SPAs) can be especially difficult. To prevent error pages from being indexed, you can use one or both of the following strategies:

- Redirect to a URL where the server responds with a 404 status code.
- Add or change the robots meta tag to noindex.
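As a sketch of the second strategy (the /api/products endpoint and view functions below are hypothetical, not part of any Google API), a SPA route handler can inject a noindex robots meta tag when the backend reports that the requested resource doesn't exist:

```javascript
// Sketch, not reference code: mark an error view as noindex so
// Googlebot treats it like an error page instead of indexing it.
function markPageAsNoindex(doc) {
  var meta = doc.createElement('meta');
  meta.name = 'robots';
  meta.content = 'noindex';
  doc.head.appendChild(meta);
}

// Hypothetical SPA route handler for /products/:id.
function loadProduct(id) {
  return fetch('/api/products/' + id).then(function (response) {
    if (response.status === 404) {
      markPageAsNoindex(document);
      renderNotFoundView();                           // hypothetical error view
      return null;
    }
    return response.json().then(renderProductView);   // hypothetical view
  });
}
```

The meta tag must be added before the page finishes rendering so that WRS sees it in the rendered HTML.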
Expect Googlebot to decline user permission requests. Features that require user permission don't make sense for Googlebot, since it has no camera or other such devices to offer. Instead, make the content accessible without forcing the user to allow access to these devices.
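For example, a small sketch (the helper and the commented-out app functions are ours, not a standard API) that detects whether the camera API is even available and otherwise falls back to static content instead of prompting:

```javascript
// Sketch: decide whether an interactive camera experience is possible.
// Googlebot declines permission prompts and has no camera, so the
// fallback branch is what it (and many users) will see.
function canUseCamera(nav) {
  return !!(nav &&
    nav.mediaDevices &&
    typeof nav.mediaDevices.getUserMedia === 'function');
}

if (canUseCamera(typeof navigator !== 'undefined' ? navigator : null)) {
  // startCameraExperience();   // hypothetical interactive feature
} else {
  // showStaticProductPhotos(); // hypothetical fallback with the real content
}
```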
Single-page applications (SPAs) may use URL fragments (for example, https://example.com/#/products) for loading different views. However, the AJAX crawling scheme has been deprecated since 2015, so you can't rely on URL fragments to work with Googlebot. We recommend using the History API to load different content based on the URL in an SPA.
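A minimal sketch of History API routing (pushState and the popstate event are real browser APIs; the view-loading function is hypothetical, and the history object is injected here to keep the routing logic testable):

```javascript
// Sketch: update the URL with pushState instead of a #/fragment, then
// render the matching view.
function makeNavigate(hist, loadView) {
  return function navigate(path) {
    hist.pushState({}, '', path); // changes the URL without a page reload
    loadView(path);
  };
}

if (typeof window !== 'undefined') {
  var navigate = makeNavigate(window.history, renderView); // renderView is hypothetical
  window.addEventListener('popstate', function () {
    renderView(location.pathname); // handle back/forward navigation
  });
}
```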
WRS loads each URL (refer to How Google Search Works for an overview of how Google discovers content), following server and client redirects, same as a regular browser. However, WRS doesn't retain state across page loads:

- Local Storage and Session Storage data are cleared across page loads.
- HTTP Cookies are cleared across page loads.
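Because of this, treat client-side storage only as a cache, never as the sole source of data. A small defensive sketch (the 'locale' key is hypothetical):

```javascript
// Sketch: read a preference from storage if present, but always have a
// default, because WRS (and, say, private browsing) may start with
// empty storage or no usable storage at all.
function getPreferredLocale(storage, fallback) {
  try {
    return (storage && storage.getItem('locale')) || fallback;
  } catch (e) {
    return fallback; // storage access itself may throw
  }
}

// In a browser: getPreferredLocale(window.localStorage, 'en')
```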
Googlebot caches aggressively in order to reduce network requests and resource usage. WRS may ignore caching headers. This may lead WRS to use outdated JavaScript or CSS resources. To avoid this problem, use content fingerprinting by making a fingerprint of the content part of the file name, such as main.2bb85551.js. Because the fingerprint depends on the content of the file, every update generates a file with a different name.
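Most bundlers can generate such fingerprints automatically. For example, a webpack configuration sketch (the file name pattern is illustrative):

```javascript
// webpack.config.js — [contenthash] makes the output file name depend
// on the file's content, so any change produces a new URL and a stale
// cached copy can never be confused with the updated one.
module.exports = {
  output: {
    filename: '[name].[contenthash].js'
  }
};
```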
Make sure your application uses feature detection for all critical APIs that it needs, and provide a fallback behavior or polyfill where applicable. Some web features may not be supported by all user agents, and Googlebot may not support features such as WebGL.
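For example, a feature-detection sketch for WebGL (the commented-out rendering functions are hypothetical):

```javascript
// Sketch: detect WebGL support before relying on it; user agents
// without WebGL (possibly including Googlebot) get a 2D fallback.
function supportsWebGL(doc) {
  try {
    var canvas = doc.createElement('canvas');
    return !!(canvas.getContext &&
      (canvas.getContext('webgl') || canvas.getContext('experimental-webgl')));
  } catch (e) {
    return false;
  }
}

// In a browser:
// if (supportsWebGL(document)) { render3DScene(); }  // hypothetical
// else { renderStaticFallback(); }                   // hypothetical
```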
Googlebot uses HTTP requests to retrieve content from your server. It does not support other types of connections, such as WebSocket or WebRTC connections. To avoid problems with such connections, make sure to provide an HTTP fallback to retrieve content and use robust error handling.
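A sketch of choosing the transport (the /api/updates endpoint is hypothetical):

```javascript
// Sketch: prefer WebSockets for live updates when available, but keep
// a plain HTTP path so an HTTP-only client such as Googlebot still
// receives the same content.
function chooseTransport(global) {
  return typeof global.WebSocket === 'function' ? 'websocket' : 'http';
}

// Hypothetical HTTP fallback for the same data the socket would push.
function fetchUpdatesOverHttp() {
  return fetch('/api/updates').then(function (response) {
    if (!response.ok) throw new Error('updates request failed: ' + response.status);
    return response.json();
  });
}
```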
Make sure your web components render as expected. Use the Rich Results Test or the URL Inspection Tool in Search Console to check that the rendered HTML has all the content you expect. WRS flattens the light DOM and shadow DOM content. If the web components you use don't use the <slot> mechanism to include light DOM content, that content may be missing from the rendered HTML.
After you fix the items in this checklist, test your page with the Rich Results Test or the URL Inspection Tool in Search Console again. If you fixed the problem, a green check mark appears and no errors display. If errors still appear, post in the Search Central help community.