Robots.txt Generator

Generate robots.txt file

Example / Use Case

Create Robots.txt for E-commerce Site

An e-commerce website owner needs to prevent search engines from indexing checkout and admin pages while ensuring product pages are crawled.

Input

Domain: mystore.com | Block: /admin, /cart, /checkout | Allow: /products, /blog | Include Sitemap: yes

Output

User-agent: *
Disallow: /admin/
Disallow: /cart/
Disallow: /checkout/
Allow: /products/
Allow: /blog/
Sitemap: https://mystore.com/sitemap.xml
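The mapping from the input above to the output can be sketched as a small generator. This is a hypothetical illustration of the logic, not this tool's actual code; the function name and parameters are assumptions:

```python
def build_robots_txt(domain, block, allow, include_sitemap=True, user_agent="*"):
    """Assemble a robots.txt from lists of blocked and allowed paths."""
    # Ensure a trailing slash so rules match whole directories, as in the example.
    norm = lambda p: p if p.endswith("/") else p + "/"
    lines = [f"User-agent: {user_agent}"]
    lines += [f"Disallow: {norm(p)}" for p in block]
    lines += [f"Allow: {norm(p)}" for p in allow]
    if include_sitemap:
        lines.append(f"Sitemap: https://{domain}/sitemap.xml")
    return "\n".join(lines) + "\n"

print(build_robots_txt("mystore.com",
                       block=["/admin", "/cart", "/checkout"],
                       allow=["/products", "/blog"]))
```

Running this reproduces the output shown above, line for line.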

How It Works

The robots.txt file uses the Robots Exclusion Protocol to communicate with web crawlers. Each rule group starts with a User-agent line followed by directives. Key directives: Disallow specifies paths crawlers must not access; Allow specifies accessible paths (useful for re-opening a subdirectory inside a disallowed directory); and Sitemap points to your XML sitemap location. Common user-agents include Googlebot, Bingbot, or * for all bots. The file must be placed at the root of your domain (e.g. https://mystore.com/robots.txt). Remember: robots.txt controls crawling, not indexing. To keep a page out of search results, use a noindex meta tag instead. For creating XML sitemaps, use our Sitemap XML Generator.
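You can check how a crawler would interpret a generated file with Python's standard urllib.robotparser module. This is a quick verification sketch using the example rules from above, not part of the generator itself:

```python
from urllib import robotparser

# The example robots.txt generated earlier for mystore.com.
rules = """\
User-agent: *
Disallow: /admin/
Disallow: /cart/
Disallow: /checkout/
Allow: /products/
Allow: /blog/
Sitemap: https://mystore.com/sitemap.xml
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# A product page is crawlable; an admin page is not.
print(rp.can_fetch("*", "https://mystore.com/products/widget"))  # True
print(rp.can_fetch("*", "https://mystore.com/admin/settings"))   # False

# Sitemap URLs declared in the file (Python 3.8+).
print(rp.site_maps())
```

This is a convenient sanity check before uploading the file, since a typo in a Disallow path can silently block the wrong pages.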

How to Use

  1. Enter your website domain URL
  2. Select common crawl rules (allow/disallow)
  3. Add custom disallow paths for admin, login, or private pages
  4. Optionally add your sitemap URL
  5. Download or copy the generated robots.txt file
  6. Upload the file to your website root directory

Frequently Asked Questions

Is a robots.txt file required?

It's not required but highly recommended. Without one, search engines will crawl everything, which may waste crawl budget on unimportant pages.
