Semrush SWA

Semrush's site-wide analysis crawler.

What does Semrush SWA do?

Semrush SWA (Site-Wide Analysis) is a crawler that discovers and collects web data by visiting URLs from a crawl frontier and following hyperlinks. The collected data feeds Semrush products like Backlink Analytics, Site Audit, and the SEO Writing Assistant. Semrush tools reference and link to source URLs in their reports, so crawled pages can receive in-tool referral traffic from Semrush users researching competitors or auditing sites.

Should I allow and optimize for Semrush SWA to drive organic growth?

Semrush SWA doesn't drive traditional search traffic, but it feeds data into tools used by millions of SEO professionals and marketers. Your pages appear in Semrush reports with clickable links back to source URLs, which means SEO practitioners researching your niche or competitors may discover your site through Semrush's platform. Allowing this crawler ensures your site's backlink profile, content metrics, and audit data stay current in Semrush, which can indirectly benefit your visibility when others analyze your domain or competitive landscape.

Here's how to optimize for Semrush SWA:

  • Allow SemrushBot-SWA in your robots.txt to keep your site data current in Semrush tools
  • Ensure your robots.txt file is at the site root and returns an HTTP 200 status code
  • Use a Crawl-delay value of 10 seconds or less (larger values are reduced to 10)
  • Add a sitemap reference in robots.txt to help Semrush discover all important pages
  • Use clean internal linking so the crawler can follow your site structure efficiently
  • Keep server response times low to avoid adaptive rate-limiting from the crawler
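Putting the directives above together, a minimal robots.txt along these lines would keep SemrushBot-SWA crawling within the documented limits (example.com and the sitemap path are placeholders; substitute your own domain):

```text
# robots.txt, served from the site root with HTTP 200
User-agent: SemrushBot-SWA
Allow: /
Crawl-delay: 10

Sitemap: https://www.example.com/sitemap.xml
```

Note that any Crawl-delay above 10 would be reduced to 10 anyway, so there is no benefit to setting a larger value.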

Data Usage & Training

Crawled content powers Semrush SEO and analytics products including Backlink Analytics, Site Audit, SEO Writing Assistant, On-Page tools, and the Plagiarism Checker. Semrush's public documentation does not state whether crawled content is also used to train general-purpose AI models, so that aspect remains unclear.

How Semrush SWA Accesses Content

Here's how Semrush SWA accesses your site and understands your content:

  • Fetches HTML via standard HTTP requests using a crawl frontier approach
  • Follows hyperlinks on crawled pages to discover new URLs
  • Respects robots.txt Disallow and Allow directives
  • Supports Crawl-delay with a practical maximum of 10 seconds
  • Adapts crawl rate to current server load rather than following a fixed schedule
  • JavaScript rendering capability is unknown

Semrush SWA maintains a crawl frontier and revisits pages on an adaptive schedule based on current server load. There is no fixed crawl interval. Crawl-delay directives are respected up to a maximum of 10 seconds.
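The Disallow/Allow and Crawl-delay handling described above can be sanity-checked locally with Python's standard urllib.robotparser, which applies the same robots.txt semantics. This is a sketch; the rules shown are illustrative, not Semrush's actual configuration:

```python
from urllib import robotparser

# Illustrative rules -- substitute the contents of your own robots.txt.
rules = """\
User-agent: SemrushBot-SWA
Disallow: /private/
Crawl-delay: 10
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# Paths outside the Disallow rule are fetchable; /private/ is not.
print(rp.can_fetch("SemrushBot-SWA", "https://example.com/blog/post"))   # True
print(rp.can_fetch("SemrushBot-SWA", "https://example.com/private/x"))   # False
print(rp.crawl_delay("SemrushBot-SWA"))                                  # 10
```

This only verifies what your rules say; it does not confirm how any given crawler actually behaves.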

How to Block or Control Semrush SWA

To block Semrush SWA via robots.txt, add:

User-agent: SemrushBot-SWA
Disallow: /

Your robots.txt must be at the site root and return HTTP 200. Semrush treats 4xx responses as a missing robots.txt (crawling proceeds unrestricted) and 5xx responses as a signal to stop crawling entirely. IP-based blocking is unreliable because Semrush does not use consecutive IP blocks and does not publish its IP ranges. Semrush runs multiple specialized crawlers (SemrushBot-BA, SiteAuditBot, SemrushBot-SI, SemrushBot-SWA), so use the exact user-agent token for the variant you want to block. Contact [email protected] for verification or assistance.
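The status-code handling described above can be summarized in a small helper. This is a sketch: robots_txt_effect is a hypothetical name, and the returned strings paraphrase the behavior documented on this page:

```python
def robots_txt_effect(status: int) -> str:
    """Map the HTTP status of a /robots.txt fetch to the crawl
    behavior this page documents for Semrush SWA."""
    if status == 200:
        return "rules applied"
    if 400 <= status < 500:
        return "treated as missing; crawling proceeds unrestricted"
    if 500 <= status < 600:
        return "crawling stops entirely"
    return "behavior unspecified"

print(robots_txt_effect(404))  # treated as missing; crawling proceeds unrestricted
```

The practical takeaway: a 404 on robots.txt does not block anything, so if you intend to restrict the crawler, verify that the file actually returns 200.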

Common Issues & Troubleshooting

Watch out for these common problems when working with Semrush SWA:

  • IP-based blocking is ineffective because Semrush does not use consecutive IP blocks
  • Robots.txt returning 4xx status codes is treated as missing, allowing unrestricted crawling
  • Using incorrect user-agent tokens (e.g., SemrushBot instead of SemrushBot-SWA) won't block this specific crawler variant
  • Crawl-delay values above 10 seconds are silently reduced to 10 seconds
  • Robots.txt placed outside the site root directory is ignored
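Because the exact user-agent token matters, a quick check of your access logs can confirm which Semrush variant is actually visiting. This is a sketch: semrush_variant is a hypothetical helper and the example UA strings are illustrative:

```python
# Most specific tokens first, so plain "SemrushBot" does not
# shadow hyphenated variants like "SemrushBot-SWA".
SEMRUSH_TOKENS = (
    "SemrushBot-SWA",
    "SemrushBot-BA",
    "SemrushBot-SI",
    "SiteAuditBot",
    "SemrushBot",
)

def semrush_variant(user_agent: str):
    """Return the Semrush crawler token found in a UA string, or None."""
    ua = user_agent.lower()
    for token in SEMRUSH_TOKENS:
        if token.lower() in ua:
            return token
    return None

print(semrush_variant("Mozilla/5.0 (compatible; SemrushBot-SWA/1.0)"))  # SemrushBot-SWA
print(semrush_variant("Mozilla/5.0 (compatible; Googlebot/2.1)"))       # None
```

Once you know the variant in your logs, target that exact token in robots.txt rather than guessing.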

Quick Reference

Official Documentation: semrush.com/bot/
User Agent String: SemrushBot-SWA
robots.txt Entry:
User-agent: SemrushBot-SWA
Disallow: /
