ClarityBot
seoClarity's site audit crawler for technical SEO and verification checks.
What does ClarityBot do?
ClarityBot is seoClarity's site and page audit crawler that scans pages to identify technical SEO issues and analyze content. Its crawl data feeds the seoClarity platform's auditing and monitoring features, including SiteClarity crawls, Bot Clarity, log-file analysis, and related SEO products. It does not drive referral traffic to your site.
Should I allow and optimize for ClarityBot to drive organic growth?
ClarityBot does not drive referral traffic or generate citations. However, if you are an seoClarity customer, allowing it is essential for your SEO auditing and monitoring workflows. Even if you are not a customer, the crawl data contributes to seoClarity's broader SEO intelligence products. Blocking it has no direct impact on your search visibility, but allowing it is low-risk since it respects robots.txt and crawl-delay directives.
Here's how to optimize for ClarityBot:
- Allow claritybot in your robots.txt if you are an seoClarity customer
- Use Crawl-delay directives to manage server load if needed
- Ensure your pages return proper HTTP status codes so seoClarity audits reflect accurate site health
- Keep your sitemap.xml up to date to help seoClarity discover all relevant pages
- Avoid blocking ClarityBot with overly broad robots.txt rules that target all bots
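Putting the steps above together, a robots.txt fragment for an seoClarity customer might look like the following; the delay value and sitemap URL are illustrative assumptions, not seoClarity recommendations:

# Allow ClarityBot while throttling it to one request every 5 seconds.
User-agent: claritybot
Allow: /
Crawl-delay: 5

Sitemap: https://www.example.com/sitemap.xml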
Data Usage & Training
Content crawled by ClarityBot is not used for AI model training. Crawled data is used exclusively for technical site audits, content analysis, log-file analysis, and site performance monitoring within the seoClarity platform for its customers.
How ClarityBot Accesses Content
Here's how ClarityBot accesses your site and understands your content:
- Fetches HTML via standard HTTP requests using the user-agent string Mozilla/5.0 (compatible; ClarityBot/9.0; +https://www.seoclarity.net/bot.html)
- Respects robots.txt Disallow and Allow directives
- Supports Crawl-delay directives
- Crawls are triggered on-demand by seoClarity clients or run on client-configured automated schedules
- Does not perform continuous indiscriminate crawling
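To confirm ClarityBot is actually reaching your site, you can look for its user-agent string in your access logs. A minimal Python sketch, assuming a combined-log-format line; the sample request and IP address are hypothetical:

```python
# Identify ClarityBot hits by matching its documented user-agent
# substring in an access-log line (combined log format assumed).
CLARITYBOT_UA = "ClarityBot/"

def is_claritybot(user_agent: str) -> bool:
    """Return True if the user-agent string identifies ClarityBot."""
    return CLARITYBOT_UA in user_agent

# Hypothetical combined-log-format line for illustration.
sample = ('203.0.113.7 - - [10/Oct/2025:13:55:36 +0000] "GET / HTTP/1.1" '
          '200 512 "-" "Mozilla/5.0 (compatible; ClarityBot/9.0; '
          '+https://www.seoclarity.net/bot.html)"')

# The user-agent is the last quoted field in combined log format.
ua = sample.rsplit('"', 2)[-2]
print(is_claritybot(ua))  # True
```

In production you would apply the same check line by line over your log file rather than to a single sample string.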
How to Block or Control ClarityBot
To block ClarityBot, add the following to your robots.txt:
User-agent: claritybot
Disallow: /
IP-based blocking is possible but not recommended if you are an seoClarity customer, as it will break your auditing workflows. No published IP ranges are available. If you need to adjust crawl behavior without fully blocking, use the Crawl-delay directive or contact seoClarity's Client Success team to configure alternate user-agents or crawl settings.
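Before deploying a blocking rule, you can sanity-check it with Python's standard urllib.robotparser; the example.com URL is a placeholder:

```python
# Verify that a robots.txt body blocks the claritybot user-agent
# using Python's standard-library robots.txt parser.
from urllib import robotparser

robots_txt = """\
User-agent: claritybot
Disallow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(robots_txt.splitlines())

print(parser.can_fetch("claritybot", "https://example.com/any-page"))  # False
print(parser.can_fetch("otherbot", "https://example.com/any-page"))    # True
```

The second check confirms the rule is scoped to claritybot only and does not affect other crawlers.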
Common Issues & Troubleshooting
Watch out for these common problems when working with ClarityBot:
- Accidentally blocking claritybot via broad robots.txt rules prevents seoClarity-managed crawls and automated analysis
- IP-based blocking can silently break seoClarity audits without clear error messages in the platform
- No published IP ranges make IP verification difficult
- JavaScript rendering support is unknown, so heavily JS-dependent pages may not be fully audited
- Coordinating crawl windows with seoClarity may be necessary for large sites to avoid server load spikes
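The first pitfall above can be caught programmatically: when no entry names claritybot specifically, a wildcard Disallow rule applies to it. A small sketch using Python's urllib.robotparser:

```python
# Show that a broad "block all bots" rule also blocks claritybot
# when robots.txt contains no claritybot-specific entry.
from urllib import robotparser

broad_rules = """\
User-agent: *
Disallow: /
"""

broad_parser = robotparser.RobotFileParser()
broad_parser.parse(broad_rules.splitlines())

blocked = not broad_parser.can_fetch("claritybot", "https://example.com/")
print(blocked)  # True: the wildcard rule blocks claritybot too
```

Adding an explicit `User-agent: claritybot` group with `Allow: /` is the usual way to carve ClarityBot out of a broad block.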
Quick Reference
User-agent: claritybot
Disallow: /