What does DotBot do?
DotBot is Moz's web crawler that gathers data for the Moz Link Index. It feeds products like Link Explorer, the Moz Links API, and the Links section of Moz Pro campaigns. Because DotBot indexes backlink data that SEO professionals use to discover and evaluate sites, allowing it can indirectly drive referral traffic when your site appears in link analysis reports.
Should I allow and optimize for DotBot to drive organic growth?
DotBot doesn't send users directly to your site, but the data it collects powers Moz's Link Explorer and Links API. SEO professionals use these tools to discover sites, evaluate backlink profiles, and identify link-building opportunities. If your site appears in these reports, it can lead to inbound links and organic discovery. Blocking DotBot means your backlink data may be incomplete or missing in Moz's tools, which could reduce your visibility to SEO practitioners researching your niche.
Here's how to optimize for DotBot:
- Allow DotBot in your robots.txt to ensure your backlink profile is fully indexed in Moz's tools
- If DotBot causes server load issues, throttle it with Crawl-delay rather than blocking it outright
- Ensure your most important pages are internally linked so DotBot can discover them through link traversal
- Add a sitemap to help search crawlers (including DotBot) find key pages, though sitemap support isn't explicitly documented for DotBot
- Keep your robots.txt updated, but be aware that DotBot only re-checks it on subsequent indexing passes
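Taken together, the steps above might look like this in a permissive robots.txt (a sketch; the Crawl-delay value and sitemap URL are placeholders to adjust for your site):

```txt
# Allow DotBot everywhere, but ask it to wait 5 seconds between requests
User-agent: dotbot
Disallow:
Crawl-delay: 5

# Sitemap location (note: sitemap support isn't explicitly documented for DotBot)
Sitemap: https://www.example.com/sitemap.xml
```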
Data Usage & Training
Crawled content is used to build Moz's Link Index and power products like Link Explorer and the Moz Links API. Moz's public documentation does not state whether DotBot crawl data is also used for AI model training. If this matters to you, contact Moz directly at [email protected].
How DotBot Accesses Content
Here's how DotBot accesses your site and understands your content:
- Fetches HTML via standard HTTP requests
- Identifies itself with the user-agent string: Mozilla/5.0 (compatible; DotBot/1.2; +https://opensiteexplorer.org/dotbot; [email protected])
- Checks robots.txt on first encounter during a new index crawl
- Re-evaluates robots.txt on subsequent indexing passes when links to the site are discovered
- Supports Crawl-delay directives
DotBot performs periodic index-based crawls. It checks your robots.txt on its first visit during a new index crawl and re-evaluates it on later passes when it discovers new links pointing to your site. Changes to robots.txt may not take effect immediately.
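Because the user-agent string above is DotBot's only reliable self-identification, you can spot its visits with a simple pattern match over your access logs. A minimal sketch using Python's standard library; the log lines are made up, and real log formats vary by server configuration:

```python
import re

# Made-up access-log lines; real formats vary by server configuration.
LOG_LINES = [
    '203.0.113.7 - - "GET / HTTP/1.1" 200 "Mozilla/5.0 (compatible; '
    'DotBot/1.2; +https://opensiteexplorer.org/dotbot; [email protected])"',
    '198.51.100.9 - - "GET /about HTTP/1.1" 200 "Mozilla/5.0 '
    '(Windows NT 10.0) Chrome/120.0"',
]

# Match the DotBot product token anywhere in the line, case-insensitively.
DOTBOT_RE = re.compile(r"\bdotbot/[\d.]+", re.IGNORECASE)

hits = [line for line in LOG_LINES if DOTBOT_RE.search(line)]
print(len(hits))  # 1
```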
How to Block or Control DotBot
To block DotBot via robots.txt:
User-agent: dotbot
Disallow: /
To throttle instead of blocking:
User-agent: dotbot
Crawl-delay: 10
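Before deploying either snippet, you can verify that the rules behave as intended with Python's standard-library robotparser (a sketch; the robots.txt content and URLs below are placeholders):

```python
from urllib import robotparser

# Hypothetical robots.txt that blocks DotBot but leaves other crawlers alone.
ROBOTS_TXT = """\
User-agent: dotbot
Disallow: /

User-agent: *
Disallow:
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# DotBot is denied everywhere; a generic crawler is still allowed.
print(rp.can_fetch("dotbot", "https://example.com/page"))        # False
print(rp.can_fetch("SomeOtherBot", "https://example.com/page"))  # True
```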
Moz does not publish static IP ranges, so IP-based blocking is unreliable. Server-level blocking via .htaccess or firewall rules (returning 403 or 410) will work but requires matching on the user-agent string. Robots.txt changes may not take effect immediately since DotBot only re-checks robots.txt on later indexing passes.
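The server-level approach can be sketched with Apache mod_rewrite rules (a hypothetical .htaccess fragment; nginx has equivalents, and remember that user-agent matching only stops bots that identify themselves honestly):

```apache
# .htaccess sketch: return 403 to any request whose User-Agent
# contains "dotbot" (case-insensitive). Requires mod_rewrite.
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} dotbot [NC]
RewriteRule ^ - [F]
```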
Common Issues & Troubleshooting
Watch out for these common problems when working with DotBot:
- Robots.txt changes are not picked up immediately; DotBot only re-reads robots.txt on subsequent index crawls when it rediscovers links to your site
- No static IP ranges are published, making IP-based blocking brittle and unreliable
- User-agent-based blocking works but is susceptible to spoofing by other bots impersonating DotBot
- Aggressive crawl rates can cause server load on smaller sites; use Crawl-delay to mitigate
- Blocking DotBot may cause incomplete or missing backlink data in Moz Pro and Link Explorer
Quick Reference
User-agent token: dotbot

Block via robots.txt:
User-agent: dotbot
Disallow: /