🤖 Robots.txt Generator
Generate a custom robots.txt file to control how search engines crawl your website. Use templates or configure manually to block specific pages, set crawl delays, and specify sitemaps.
📖 Robots.txt Guide
- User-agent: Specifies which crawler the rules apply to (* = all)
- Disallow: Paths that crawlers should not access
- Allow: Paths that are explicitly permitted; use it to carve out exceptions inside a Disallowed directory (major crawlers apply the most specific matching rule)
- Sitemap: Location of your XML sitemap for better indexing
- Crawl-delay: Minimum delay in seconds between requests (ignored by Googlebot and not supported by all bots)
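The directives above combine into a file like this (the domain and paths are placeholders, not recommendations):

```
# Rules for all crawlers
User-agent: *
Disallow: /admin/
Allow: /admin/help/
Crawl-delay: 10

Sitemap: https://example.com/sitemap.xml
```

Here every crawler is kept out of /admin/ except the /admin/help/ subtree, asked to wait 10 seconds between requests, and pointed at the XML sitemap.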
❓ Frequently Asked Questions
What is a robots.txt file?
Robots.txt is a text file placed in your website's root directory that tells search engine crawlers which pages they can or cannot access.
Where should I place robots.txt?
Place robots.txt in your website's root directory (e.g., https://example.com/robots.txt). It must be in the root to be recognized by crawlers.
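For example, with a hypothetical site at example.com:

```
https://example.com/robots.txt         # recognized by crawlers
https://example.com/pages/robots.txt   # ignored (not at the root)
```

Note that robots.txt applies per host, so a subdomain such as blog.example.com needs its own file.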
What should I block in robots.txt?
Block admin areas, private directories, duplicate content, and pages you don't want indexed. Don't block CSS/JS as Google needs them to render pages.
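Putting that advice together, a typical configuration might look like this (all paths are illustrative; adjust them to your site's layout):

```
User-agent: *
Disallow: /admin/
Disallow: /private/
Disallow: /cart/
# Keep asset directories crawlable: Google needs CSS/JS to render pages
Allow: /assets/
```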
Does robots.txt guarantee blocking?
No. Robots.txt is a voluntary standard, not enforcement. Well-behaved bots follow it, but malicious bots can ignore it, and a blocked URL can still appear in search results if other sites link to it. Use password protection for sensitive content.
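Because compliance is voluntary, a well-behaved bot checks the rules itself before fetching. A minimal sketch of that check using Python's standard-library `urllib.robotparser` (the rules, bot name, and URLs are all illustrative):

```python
from urllib.robotparser import RobotFileParser

# Rules equivalent to a small robots.txt file (illustrative paths)
rules = """\
User-agent: *
Allow: /admin/help/
Disallow: /admin/
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# A polite crawler asks before fetching each URL
print(rp.can_fetch("MyBot", "https://example.com/admin/settings"))  # False: disallowed
print(rp.can_fetch("MyBot", "https://example.com/admin/help/faq"))  # True: Allow exception
print(rp.can_fetch("MyBot", "https://example.com/blog/post"))       # True: no rule matches
```

Nothing stops a crawler from skipping this check entirely, which is why sensitive content needs real access control.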