DotBot
What is DotBot?
DotBot is a web crawler operated by Moz, a well-known SEO software company that provides tools for search engine optimization and online visibility. DotBot functions as an indexing and discovery crawler that systematically visits websites to collect data for Moz's suite of SEO tools and analytics. It identifies itself in server logs with the user-agent string DotBot (or lowercase variations such as dotbot), depending on the specific implementation.
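If you want to confirm whether DotBot is visiting your site, a quick scan of your server access logs for that user-agent token is usually enough. Below is a minimal Python sketch, assuming a combined-format access log at a hypothetical path; adjust the path and parsing for your own server.

import re
from collections import Counter

LOG_PATH = "/var/log/nginx/access.log"  # hypothetical path; adjust for your server

# Case-insensitive match covers both "DotBot" and "dotbot" variants.
DOTBOT = re.compile(r"dotbot", re.IGNORECASE)

hits = Counter()
with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        if DOTBOT.search(line):
            # In combined log format, the first field is the client IP.
            hits[line.split(" ", 1)[0]] += 1

for ip, count in hits.most_common(10):
    print(f"{ip}: {count} requests")

Note that this only counts requests per client IP, and user-agent strings can be spoofed; cross-checking against Moz's published information about DotBot can help confirm the traffic is genuine.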
The crawler is designed to analyze website structure, content quality, and link profiles to gather information that helps Moz's customers understand their SEO performance. DotBot is classified as a legitimate web crawler that behaves similarly to other search engine bots, following a systematic approach to discover and index web content. It's programmed to respect standard web protocols and crawling directives.
DotBot is part of Moz's Link Explorer and other SEO tools that provide backlink analysis, domain authority metrics, and competitive insights to digital marketers and website owners.
Why is DotBot crawling my site?
DotBot crawls websites to collect data about their structure, content, and link relationships. It's primarily interested in discovering links between websites to build Moz's link index, which powers their domain authority and page authority metrics. The crawler visits sites to understand how pages connect to each other across the web.
The frequency of DotBot visits depends on several factors, including your site's size, popularity, and how frequently it's updated. Sites with higher domain authority or those that publish new content regularly may experience more frequent crawls. DotBot visits websites either as part of Moz's regular crawling schedule or when Moz users specifically request analysis of certain URLs through their tools.
This crawling is generally considered authorized as it follows standard web protocols and respects robots.txt directives. The data collection supports legitimate SEO analysis rather than scraping for competitive or unauthorized purposes.
What is the purpose of DotBot?
DotBot serves to gather web data that powers Moz's SEO analytics platform. Its primary function is to discover and analyze links between websites, helping to calculate metrics like Domain Authority and Page Authority that predict how well sites will rank in search engines. The crawler also collects information about on-page elements, site structure, and content quality.
The data collected by DotBot is used to provide Moz customers with insights about their backlink profiles, competitive landscape, and potential SEO opportunities. Website owners can benefit from this crawling indirectly, as Moz's tools can help them understand how their site appears to search engines and identify areas for improvement.
For site owners using Moz's tools, DotBot's crawling provides valuable data about their online presence and competitive standing.
How do I block DotBot?
DotBot respects the robots.txt protocol, making it straightforward to control its access to your site. To block DotBot completely, add the following directives to your robots.txt file:
User-agent: DotBot
Disallow: /
This tells DotBot not to crawl any part of your website. If you want to block it from specific sections while allowing access to others, you can use more targeted directives:
User-agent: DotBot
Disallow: /private-folder/
Disallow: /members-only/
Allow: /
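After editing robots.txt, it can be worth verifying that the rules do what you intend. One way is Python's standard-library urllib.robotparser; the sketch below assumes a hypothetical site at https://example.com.

from urllib.robotparser import RobotFileParser

# Hypothetical domain; replace with your own site.
parser = RobotFileParser("https://example.com/robots.txt")
parser.read()  # fetch and parse the live robots.txt

# can_fetch() reports whether a given user agent may crawl a given URL.
for url in ("https://example.com/", "https://example.com/private-folder/page.html"):
    allowed = parser.can_fetch("DotBot", url)
    print(f"DotBot {'may' if allowed else 'may not'} fetch {url}")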
Blocking DotBot means your site won't be included in Moz's link index, which could affect the accuracy of your Domain Authority score and other metrics in Moz tools. If you use Moz's services to monitor your SEO performance, blocking their crawler might limit the insights available to you.
For most legitimate websites, there's typically no need to block DotBot as it follows good crawling practices and provides data that contributes to useful SEO metrics. However, if you're experiencing excessive crawling that impacts server performance, you might consider using the robots.txt directives to limit rather than completely block access.
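If throttling rather than blocking is the goal, the Crawl-delay directive is the usual robots.txt approach, and DotBot is generally reported to honor it. The value is the number of seconds to wait between requests, so this hypothetical example asks DotBot to make at most one request every ten seconds:

User-agent: DotBot
Crawl-delay: 10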