Generate a valid robots.txt with user-agent, allow/disallow, sitemap, and host directives.
A robots.txt file is the first thing search engine crawlers check before visiting any page on your site. Get it wrong and you may accidentally block Google from crawling your entire site, or fail to block scrapers from hammering your API endpoints. The rules follow a simple but finicky syntax: User-agent lines select which bots to address, Disallow paths block crawling, Allow paths create exceptions within a blocked directory, and Sitemap lines help crawlers discover your content. A single typo — a missing slash at the end of a directory path, a wrong user-agent spelling, or a misplaced blank line — can have outsized consequences. This tool generates syntactically correct robots.txt files using a visual rule builder. Add rules for multiple user-agents, specify disallow and allow paths, insert sitemap URLs, and optionally set crawl-delay directives — all without memorizing the exact formatting rules.
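For example, a minimal file that blocks one private directory, carves out an exception inside it, and advertises a sitemap (the paths here are illustrative):

User-agent: *
Disallow: /private/
Allow: /private/help/
Sitemap: https://example.com/sitemap.xml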
Add allow and disallow rules line by line through a form interface rather than hand-editing raw text. The correct syntax — trailing slashes, proper line breaks, blank lines between agent blocks — is applied automatically.
Set different crawling rules for different bots in a single file. Grant Googlebot full access while blocking AhrefsBot or SemrushBot from your entire site with separate user-agent sections.
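For example, this standard syntax welcomes Googlebot everywhere while turning away two SEO crawlers; each bot gets its own block, introduced by a User-agent line and separated from the next by a blank line:

User-agent: Googlebot
Allow: /

User-agent: AhrefsBot
Disallow: /

User-agent: SemrushBot
Disallow: /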
Embed one or more sitemap URLs using the Sitemap: directive, conventionally placed at the bottom of the file. Crawlers such as Googlebot and Bingbot pick these lines up when they process robots.txt, which helps them discover your content more reliably.
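Sitemap lines apply to the whole file rather than to any one User-agent block, and the URLs must be absolute. For example, with two sitemaps (the file names are placeholders):

Sitemap: https://example.com/sitemap-pages.xml
Sitemap: https://example.com/sitemap-posts.xml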
The generator validates rules for common mistakes: paths without a leading slash, invalid characters in user-agent names, duplicate rules, and an empty Disallow value (which allows everything rather than blocking it).
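A minimal Python sketch of what such checks can look like; validate_rules and its (agent, directive, path) tuple format are illustrative, not the tool's actual internals:

import re

def validate_rules(rules):
    # rules: list of (agent, directive, path) tuples.
    problems = []
    seen = set()
    for agent, directive, path in rules:
        # User-agent tokens should be "*" or a simple product name.
        if agent != "*" and not re.fullmatch(r"[A-Za-z0-9_-]+", agent):
            problems.append(f"invalid characters in user-agent: {agent!r}")
        # A path that lacks a leading slash never matches anything.
        if path and not path.startswith("/"):
            problems.append(f"path missing leading slash: {path!r}")
        # Empty Disallow means "allow everything", often the opposite of the intent.
        if directive == "Disallow" and path == "":
            problems.append(f"empty Disallow for {agent!r} allows everything")
        # Identical rules add noise without changing behavior.
        key = (agent, directive, path)
        if key in seen:
            problems.append(f"duplicate rule: {directive}: {path} ({agent})")
        seen.add(key)
    return problems

print(validate_rules([("*", "Disallow", "admin/"), ("*", "Disallow", "")]))
# -> ["path missing leading slash: 'admin/'", "empty Disallow for '*' allows everything"]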
Wildcard patterns like /*?sort= are supported, blocking parameter-based URL variants without blocking the base URL and keeping crawl budget focused on canonical pages rather than infinite query-string combinations.
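Under standard wildcard matching, where * matches any sequence of characters, one pattern covers every sorted variant while the clean URL stays crawlable (lines starting with # are comments):

User-agent: *
Disallow: /*?sort=

# /products?sort=price   blocked
# /products?sort=asc     blocked
# /products              still crawlable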
The output is correctly formatted plain-text robots.txt content, ready to drop at your domain root. No build step, configuration, or server restart required — save as robots.txt and upload.
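Crawlers only request the file from the root of the host, e.g. https://example.com/robots.txt; a copy saved in a subdirectory has no effect.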
Input: User-agent: * | Disallow: /admin/ | Allow: / | Sitemap: https://example.com/sitemap.xml
Output:
User-agent: *
Disallow: /admin/
Allow: /
Sitemap: https://example.com/sitemap.xml
Input: Googlebot: allow all | AhrefsBot: block all
Output:
User-agent: Googlebot
Allow: /

User-agent: AhrefsBot
Disallow: /
Input: User-agent: * | Disallow: /*?sort= | Disallow: /*?filter=
Output:
User-agent: *
Disallow: /*?sort=
Disallow: /*?filter=