YandexAdditional
What is YandexAdditional?
YandexAdditional is a web crawler operated by Yandex, a major Russian technology company and search engine. This bot is part of Yandex's search engine infrastructure, serving as a supplementary crawler to the main Yandex indexing systems. Yandex operates several specialized crawlers that work together to build and maintain their search index, with YandexAdditional focusing on gathering additional information beyond what the primary Yandex crawler collects.
In server logs, YandexAdditional identifies itself with the user-agent string Mozilla/5.0 (compatible; YandexAdditional/3.0; +http://yandex.com/bots). The "Additional" in its name indicates its supplementary role in Yandex's crawling ecosystem, working alongside other Yandex bots such as the main YandexBot, YandexImages, YandexVideo, and other specialized crawlers.
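As a quick illustration (not from Yandex's documentation), a short script like the following could count YandexAdditional requests in a combined-format access log. The sample log lines below are made up for demonstration:

```python
import re

UA_TOKEN = "YandexAdditional"

def count_yandex_additional(log_lines):
    """Count requests whose user-agent field mentions YandexAdditional."""
    # In the combined log format, the user agent is the last quoted field.
    ua_pattern = re.compile(r'"([^"]*)"\s*$')
    count = 0
    for line in log_lines:
        match = ua_pattern.search(line)
        if match and UA_TOKEN in match.group(1):
            count += 1
    return count

# Hypothetical sample log lines (combined format).
sample = [
    '203.0.113.5 - - [01/Jan/2025:00:00:00 +0000] "GET / HTTP/1.1" 200 512 '
    '"-" "Mozilla/5.0 (compatible; YandexAdditional/3.0; +http://yandex.com/bots)"',
    '198.51.100.7 - - [01/Jan/2025:00:00:01 +0000] "GET /about HTTP/1.1" 200 1024 '
    '"-" "Mozilla/5.0 (Windows NT 10.0) Chrome/120.0"',
]
print(count_yandex_additional(sample))  # prints 1
```

The same pattern works for a real log file by iterating over its lines instead of the sample list.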
YandexAdditional typically behaves like other legitimate search engine crawlers, following links throughout websites to discover and index content. It respects standard web protocols and crawling directives, making it distinguishable from malicious bots that might ignore such conventions. Note, however, that user-agent strings can be spoofed, so a request claiming to be YandexAdditional is not necessarily a genuine Yandex crawler.
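Yandex, like Google, documents a reverse-DNS method for verifying its crawlers: the requesting IP should reverse-resolve to a hostname under yandex.ru, yandex.net, or yandex.com, and that hostname should forward-resolve back to the same IP. A minimal sketch of this check (the function names are our own):

```python
import socket

YANDEX_SUFFIXES = (".yandex.ru", ".yandex.net", ".yandex.com")

def hostname_is_yandex(hostname):
    """Pure check: does a reverse-DNS name belong to a Yandex domain?"""
    return hostname.rstrip(".").endswith(YANDEX_SUFFIXES)

def verify_yandex_ip(ip):
    """Reverse-resolve the IP, check the domain, then forward-resolve the
    name and confirm it maps back to the same IP (guards against spoofing)."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)
    except socket.herror:
        return False
    if not hostname_is_yandex(hostname):
        return False
    try:
        forward_ips = socket.gethostbyname_ex(hostname)[2]
    except socket.gaierror:
        return False
    return ip in forward_ips

print(hostname_is_yandex("spider-1.yandex.com"))       # True
print(hostname_is_yandex("fake.yandex.com.evil.org"))  # False
```

The suffix check alone is not sufficient; the forward-resolution step in verify_yandex_ip is what prevents an attacker from pointing their own reverse DNS at a Yandex-looking name.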
Why is YandexAdditional crawling my site?
YandexAdditional is likely crawling your site to gather supplementary information that complements what Yandex's main crawler has already indexed. This crawler may be looking for updated content, specific types of information, or verifying changes to previously indexed pages.
The frequency of YandexAdditional visits depends on several factors, including your site's popularity, how often your content changes, and your site's relevance to Yandex users (particularly those in Russian-speaking regions where Yandex has a significant market share). Sites with frequently updated content or higher traffic volumes may see more regular visits from this crawler.
YandexAdditional's crawling is generally authorized as part of normal search engine operations. Like other search engine crawlers, it helps Yandex maintain an up-to-date and comprehensive index of the web, which benefits both Yandex users and potentially drives traffic to your website.
What is the purpose of YandexAdditional?
The primary purpose of YandexAdditional is to support Yandex Search by collecting supplementary information about web pages. While the main YandexBot handles core indexing functions, YandexAdditional likely focuses on gathering specific types of data or performing specialized crawling tasks that enhance Yandex's search capabilities.
The data collected by YandexAdditional contributes to Yandex's search index, helping the search engine deliver more relevant and comprehensive results to its users. This crawler may be involved in detecting content changes, gathering metadata, or performing other specialized indexing tasks that improve search quality.
For website owners, having content properly indexed by Yandex can provide value by making the site discoverable to Yandex users, particularly those in Russia and other countries where Yandex has significant market share. This can drive relevant traffic to your website from Yandex search results.
How do I block YandexAdditional?
If you wish to control YandexAdditional's access to your site, you can use the robots.txt file, which this crawler respects. To block YandexAdditional completely, add the following directives to your robots.txt file:
User-agent: YandexAdditional
Disallow: /
This will instruct YandexAdditional not to crawl any part of your website. If you only want to block access to specific directories or files, you can specify those paths instead of the root path "/":
User-agent: YandexAdditional
Disallow: /private-directory/
Disallow: /confidential-file.html
If you want to block all Yandex crawlers, including YandexAdditional, you can use:
User-agent: Yandex
Disallow: /
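Before deploying rules like these, you can sanity-check them with Python's standard urllib.robotparser module. The domain below is a placeholder; the rules are the directory-level example from above:

```python
from urllib import robotparser

# The example rules from this article, inlined for the check.
ROBOTS_TXT = """\
User-agent: YandexAdditional
Disallow: /private-directory/
Disallow: /confidential-file.html
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# Pages outside the disallowed paths remain crawlable.
print(rp.can_fetch("YandexAdditional", "https://example.com/"))                        # True
# The listed paths are blocked for this user agent.
print(rp.can_fetch("YandexAdditional", "https://example.com/private-directory/page"))  # False
print(rp.can_fetch("YandexAdditional", "https://example.com/confidential-file.html"))  # False
```

Swapping in your own robots.txt content and URLs lets you confirm a rule does what you intend before the crawler ever sees it.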
Keep in mind that blocking YandexAdditional may affect your site's visibility in Yandex search results, potentially reducing traffic from users of this search engine. This is particularly relevant if your site targets audiences in Russia or other regions where Yandex has significant market share. If you're experiencing excessive crawling that impacts server performance, consider limiting the crawl rate rather than blocking the crawler entirely; note that Yandex announced in 2018 that its crawlers no longer honor the Crawl-delay directive, and instead recommends the crawl-speed setting in Yandex.Webmaster.
Operated by: Yandex
Crawler type: Search index crawler
Obeys robots.txt directives: Yes
User agent: Mozilla/5.0 (compatible; YandexAdditional/3.0; +http://yandex.com/bots)