
This tool offers a wide range of functions, from checking the correctness of meta tags to analyzing structured data markup, which makes it indispensable for the detailed analysis of sites of various sizes.
In this guide, we'll look at the basic SEO Spider settings and features that will help you analyze and improve your site effectively.
Open the menu File → Settings → Memory Allocation.
For large projects, it is recommended to allocate 8 GB of RAM, or about half of the memory available on your machine.

Go to File → Settings → Storage Mode.
The default is Memory Storage. It is recommended to switch to Database Storage (crawl data is written to disk), especially if the drive is an SSD; this allows much larger crawls without exhausting RAM and keeps crawl data saved automatically.

In the settings (File → Settings → Proxy) you can specify a proxy server; this helps bypass a block if your IP address is banned on the site you need to crawl.
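For comparison, routing requests through a proxy in a standalone script looks roughly like this; a minimal sketch using Python's `requests` library, where the proxy address is a placeholder to replace with your own:

```python
import requests

# Hypothetical proxy address; substitute your own host and port.
proxies = {
    "http": "http://203.0.113.10:8080",
    "https": "http://203.0.113.10:8080",
}

# All traffic for this request goes through the proxy,
# so the target site sees the proxy's IP instead of yours.
response = requests.get("https://example.com", proxies=proxies, timeout=10)
print(response.status_code)
```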

Go to Configuration → Crawl Config → Speed.
The optimal value for Max Threads is up to 5; increasing the number of threads can overload the site or get your crawler blocked during scanning.
Leave the Limit URL/s parameter at 2.0 URLs per second. If 5XX errors appear, reduce the value to 1.0 or 0.5 for stability.
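To make these limits concrete, here is a minimal single-threaded sketch (not Screaming Frog's internals) of throttling to a fixed number of URLs per second and slowing down when a 5XX response appears; the URL list and rates are illustrative:

```python
import time
import requests

urls = ["https://example.com/page1", "https://example.com/page2"]  # illustrative
rate = 2.0  # URLs per second, mirroring the Limit URL/s setting

for url in urls:
    started = time.monotonic()
    response = requests.get(url, timeout=10)
    if response.status_code >= 500:
        # The server is struggling: drop to a gentler rate, as the guide suggests.
        rate = 1.0
    # Sleep long enough that we never exceed `rate` requests per second.
    elapsed = time.monotonic() - started
    time.sleep(max(0.0, 1.0 / rate - elapsed))
```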

Go to Configuration → Crawl Config → Spider → Crawl.
To crawl only a selected section of the site, uncheck the "Check Links Outside of Start Folder" option; enabling "Crawl Outside of Start Folder" expands the crawl to all links on the site.
If necessary, activate "Crawl All Subdomains" to include subdomains, and configure whether links with the nofollow attribute should be followed.
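The logic behind these scope options can be approximated in a few lines; a sketch (my own approximation, not the tool's code) deciding whether a discovered link is inside the start folder or on a subdomain, with `example.com` as a stand-in domain:

```python
from urllib.parse import urlparse

START = "https://example.com/blog/"  # illustrative start folder

def in_start_folder(url: str) -> bool:
    # Mirrors "stay inside the start folder":
    # only URLs sharing the start folder's prefix are crawled.
    return url.startswith(START)

def is_subdomain(url: str) -> bool:
    # Mirrors "Crawl All Subdomains": blog.example.com, shop.example.com, etc.
    host = urlparse(url).hostname or ""
    return host == "example.com" or host.endswith(".example.com")

print(in_start_folder("https://example.com/blog/post-1"))   # True
print(in_start_folder("https://example.com/shop/item-2"))   # False
print(is_subdomain("https://shop.example.com/item-2"))      # True
```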

Don't forget to enable the "Crawl Linked XML Sitemaps" option to analyze pages that have no direct internal links pointing to them but are listed in the sitemap.
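Outside the tool, pulling URLs from a sitemap is straightforward; a sketch using `requests` and the standard-library XML parser, assuming a standard sitemap at `/sitemap.xml`:

```python
import requests
import xml.etree.ElementTree as ET

# Standard sitemap namespace defined by sitemaps.org.
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

xml = requests.get("https://example.com/sitemap.xml", timeout=10).text
root = ET.fromstring(xml)

# Collect every <loc> entry; these may include pages with no internal links.
urls = [loc.text for loc in root.findall(".//sm:loc", NS)]
print(len(urls), "URLs listed in the sitemap")
```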

Spider mode imitates the behavior of search engines: starting from the main page, the crawler follows internal links, going deeper through the levels of the site structure.
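As a rough illustration of what such a crawl does (a toy breadth-first sketch, nothing like the tool's full feature set), the crawler starts at the home page and follows internal links level by level:

```python
from collections import deque
from urllib.parse import urljoin, urlparse
import re
import requests

START = "https://example.com/"  # illustrative
HOST = urlparse(START).hostname

seen, queue = {START}, deque([START])
while queue:
    url = queue.popleft()
    html = requests.get(url, timeout=10).text
    # Naive link extraction; a real crawler would use an HTML parser.
    for href in re.findall(r'href="([^"#]+)"', html):
        link = urljoin(url, href)
        # Stay on the same host and skip already-visited pages.
        if urlparse(link).hostname == HOST and link not in seen:
            seen.add(link)
            queue.append(link)
print(f"Discovered {len(seen)} internal URLs")
```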
In List mode, you can upload a file with URLs to crawl, enter them manually, or point the tool at an XML Sitemap.

SERP mode analyzes only the Title and Description. It is used less frequently and is suitable for simple metadata checks.
Compare mode lets you compare the results of two crawls, which is useful for tracking changes or troubleshooting errors.
Go to Configuration → Crawl Config → Content → Duplicates.
When the "Only Check Indexable Pages for Duplicates" option is enabled, the program will search for duplicates only among pages available for indexing.
The "Enable Near Duplicates" option allows you to set the content match percentage, which helps you find hidden duplicates.

You can see the results in the Content → All section, sorting pages by word count. Pages with little content may be less useful to search engines.

This is particularly useful for sites with product catalogs, where you can identify pages with insufficient unique content. Specify the correct classes or ids of the product-card elements and set up content filtering accordingly.
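To show what restricting analysis to the content area means, here is a sketch with BeautifulSoup; the class name `product-card` is a made-up example, so substitute the real class or id from your template:

```python
from bs4 import BeautifulSoup

html = """
<html><body>
  <nav>Home / Catalog / Shirts</nav>
  <div class="product-card">Red cotton t-shirt with a classic fit.</div>
  <footer>Contact us</footer>
</body></html>
"""

soup = BeautifulSoup(html, "html.parser")
# Restrict the word count to the content area, ignoring nav and footer
# boilerplate -- the same idea as configuring a content area by class or id.
card = soup.select_one(".product-card")
words = card.get_text(" ", strip=True).split()
print(len(words), "words in the product card")
```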

To check that analytics counters (for example, Yandex.Metrica or Google Analytics) are installed, you can use a custom search over the page source code, adding the counter ID to the search settings.
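The same check is easy to reproduce outside the tool: download the page source and search it for the counter ID. In this sketch `GA_ID` is a placeholder; substitute your real Google Analytics or Yandex.Metrica identifier:

```python
import requests

GA_ID = "G-XXXXXXXXXX"  # placeholder; use your real counter ID

html = requests.get("https://example.com", timeout=10).text
# A simple substring search over the page source is usually enough
# to confirm the tracking snippet is present.
if GA_ID in html:
    print("Analytics counter found")
else:
    print("Analytics counter missing")
```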

After crawling the site, you can review titles and descriptions in the Page Titles and Meta Description sections. It is important to check points such as:

- missing or empty titles;
- duplicate titles on different pages;
- titles that are too long or too short;
- titles that simply repeat the H1.
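A few of these checks are easy to express in code; an illustrative sketch where the 30-60 character range is a common guideline rather than a hard rule, and the sample titles are invented:

```python
# Illustrative title checks; the 30-60 character range is a common
# guideline, not a rule enforced by search engines.
titles = {
    "/": "Home",
    "/shop": "Shop | Buy Red Cotton T-Shirts Online | Example Store",
    "/about": "Shop | Buy Red Cotton T-Shirts Online | Example Store",
}

seen = {}
for url, title in titles.items():
    if not title:
        print(url, "has a missing title")
    elif len(title) < 30:
        print(url, "title too short:", len(title), "characters")
    elif len(title) > 60:
        print(url, "title too long:", len(title), "characters")
    if title in seen:
        print(url, "duplicates the title of", seen[title])
    seen[title] = url
```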

Look at the results in the Response Codes section. It is important to pay attention to 4xx and 5xx errors; they must be fixed so that pages load correctly.
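To spot-check a handful of URLs outside the tool, a small sketch with `requests` (the URL list is illustrative) that groups responses by status class:

```python
import requests

urls = ["https://example.com/", "https://example.com/old-page"]  # illustrative

for url in urls:
    code = requests.get(url, timeout=10, allow_redirects=False).status_code
    if 400 <= code < 500:
        print(url, code, "-> client error: fix the link or restore the page")
    elif code >= 500:
        print(url, code, "-> server error: investigate the server")
    else:
        print(url, code, "-> OK")
```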