Fetch with Google Crawler

Google uses automated crawlers, usually called Googlebots or Google spiders, to fetch data from web pages. Googlebot first checks the site's robots.txt file and then processes the page content. It moves from one link to the next, aided by the sitemap.xml file, and indexes pages according to their priority on the site.
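As a sketch, a minimal robots.txt for a hypothetical site (example.com is a placeholder) allows all crawlers and points them to the sitemap:

```
User-agent: *
Disallow:

Sitemap: https://example.com/sitemap.xml
```

An empty `Disallow:` line means nothing is blocked; the `Sitemap:` directive tells crawlers where to find the list of URLs to visit.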

Unwanted pages can be blocked from Google's crawler by adding a "Disallow" directive to the robots.txt file.
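Python's standard library can verify such rules. The sketch below (the `/private/` path and example.com URLs are hypothetical) parses a robots.txt containing a "Disallow" directive and checks which URLs a crawler may fetch:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules blocking one section for all crawlers
robots_txt = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# The Disallow rule blocks /private/ but leaves other paths crawlable
print(parser.can_fetch("Googlebot", "https://example.com/private/page.html"))  # False
print(parser.can_fetch("Googlebot", "https://example.com/blog/post.html"))     # True
```

The same check is what a well-behaved crawler performs before requesting a page.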

Microsoft's crawler is Bingbot, while Yahoo historically used its own Slurp bot; today Yahoo Search results are powered by Bing.

In the Webmaster Tools dashboard, the "Fetch as Google" option helps speed up indexing by letting you submit links directly for crawling.
