THE ROBOTS.TXT FILE
You know that search engines were created to help people find information quickly on the Internet, and that they acquire much of their information through robots (also known as spiders or crawlers) that seek out web pages for them.
These robots explore the web, looking for and recording all kinds of information. They usually start from URLs submitted by users, from links they find on web sites, from sitemap files, or from the top level of a site.
Once a robot accesses the home page, it recursively visits every page linked from that page. But a robot can also check out every page it can find on a particular server.
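To make this concrete, here is a minimal sketch of such a crawler in Python, using only the standard library. The starting URL https://example.com/ and the page limit are placeholders, and real search-engine crawlers are vastly more sophisticated; note that even this sketch reads the site's robots.txt first and skips any page it is not allowed to fetch.

    from html.parser import HTMLParser
    from urllib import robotparser
    from urllib.parse import urljoin, urlparse
    from urllib.request import urlopen

    class LinkParser(HTMLParser):
        # Collects the href attribute of every <a> tag on a page.
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def crawl(start_url, max_pages=10):
        # Read the site's robots.txt before fetching anything else.
        robots = robotparser.RobotFileParser()
        robots.set_url(urljoin(start_url, "/robots.txt"))
        robots.read()

        seen, queue = set(), [start_url]
        while queue and len(seen) < max_pages:
            url = queue.pop(0)
            # Skip pages already visited or disallowed by robots.txt.
            if url in seen or not robots.can_fetch("*", url):
                continue
            seen.add(url)
            try:
                html = urlopen(url).read().decode("utf-8", errors="replace")
            except OSError:
                continue
            parser = LinkParser()
            parser.feed(html)
            # Queue only links that stay on the same server,
            # mirroring a robot that explores one particular site.
            for link in parser.links:
                absolute = urljoin(url, link)
                if urlparse(absolute).netloc == urlparse(start_url).netloc:
                    queue.append(absolute)
        return seen

    for page in crawl("https://example.com/"):
        print(page)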
After the robot finds a web page, it works on indexing the title and text of the page.
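As an illustration of that first indexing step, the fragment below pulls the text of the <title> tag out of a fetched page with Python's standard html.parser module; the HTML string here is just a stand-in for a page the crawler has downloaded.

    from html.parser import HTMLParser

    class TitleParser(HTMLParser):
        # Accumulates the character data found inside the <title> tag.
        def __init__(self):
            super().__init__()
            self.in_title = False
            self.title = ""

        def handle_starttag(self, tag, attrs):
            if tag == "title":
                self.in_title = True

        def handle_endtag(self, tag):
            if tag == "title":
                self.in_title = False

        def handle_data(self, data):
            if self.in_title:
                self.title += data

    parser = TitleParser()
    parser.feed("<html><head><title>Example Page</title></head></html>")
    print(parser.title)  # prints: Example Page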