
A Definitive Guide To Create Robots.txt

A website is not just a group of files stored on a remote disk. Instead, a website works as a fully fledged system of its own.

Well-behaved search engine bots respect the crawling permissions that webmasters grant them and will not go against those rules. Keep in mind, though, that robots.txt is advisory: it is a set of instructions for cooperative crawlers, not a security mechanism.

Steps To Help Google Bot To Crawl Your Website Effectively

These crawling rules are stored in the robots.txt file, which must be located at the root of the website (e.g. example.com/robots.txt) in order to work.

Robots.txt does not contain any special code, though you do have to write the rules in a specific format. It is a plain text file; as a standard, search engines expect UNIX-style plain text.


To a newcomer, writing robots.txt rules can look hard. But after getting the hang of it, you can easily write the rules on your own.

A robots.txt file is just a group of rule sets. In each set, the webmaster specifies which search engines the rules apply to, and then defines the rules themselves. The text snippet below is a simple example of this.

User-agent: *

Disallow: /images/

Disallow: /js/

Here the first line addresses the search engine(s); ‘*’ means all search engines.

The second and third lines tell the search engines not to index the “images” and “js” directories of the respective website.


Note: Paths in robots.txt are case-sensitive. Be sure to enter the syntax and folder names exactly as they appear on the server.
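You can sanity-check a rule set before publishing it. As a minimal sketch, Python's standard library module urllib.robotparser can parse the example rules above and report whether a given path may be crawled:

```python
from urllib.robotparser import RobotFileParser

# The example rule set from above (normally fetched from /robots.txt)
rules = [
    "User-agent: *",
    "Disallow: /images/",
    "Disallow: /js/",
]

parser = RobotFileParser()
parser.parse(rules)

# '/images/' is disallowed for every crawler, so this is blocked
print(parser.can_fetch("Googlebot", "/images/logo.png"))  # False

# Paths not matched by any Disallow line stay crawlable
print(parser.can_fetch("Googlebot", "/index.html"))       # True
```

This is handy for catching typos (remember the case-sensitivity note above) before a bad rule accidentally blocks your whole site.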

Some examples of robots.txt files:

Example #1:

User-agent: *

Disallow:

(An empty Disallow directive means search engines can index all files and folders of the respective website)

Example #2:

User-agent: *

Disallow: /css/style.css

(This rule tells all search engines not to index a particular file on the server)

Example #3:

User-agent: *

Disallow: /css/

(Here the rule is telling search engines to not index the whole “css” folder)

Example #4:

User-agent: *

Disallow: /

(Here the rule is telling search engines to not index the whole website)

Example #5 (grouping different rules together)

User-agent: Opera 9

Disallow: /css/

User-agent: *

Disallow:

(Here the robots.txt file tells the Opera 9 bot not to index the ‘css’ folder. At the same time, the second rule set in the same file allows all other search engines to index the whole website)

What To Allow For Indexing:

  • All images
  • All JavaScript files
  • All CSS files
  • All HTML files
  • Anything else that is linked to or embedded on your website

What Not To Allow For Indexing:

  • Personal files that you do not want to be displayed in search results
  • Admin folder or admin pages

Hint: In robots.txt, you do not need to actively list the files you want indexed. The file only tells search engines what not to index; everything you leave unmentioned is crawlable by default.
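Putting the two lists above together, a minimal robots.txt for a typical site only blocks the private areas and leaves everything else open. The folder names below are illustrative; match them to your own site's layout:

User-agent: *

Disallow: /admin/

Disallow: /private/

(Everything not listed, including images, JavaScript, CSS, and HTML pages, remains open for indexing)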
