Google uses automated robots, usually called Googlebots or Google spiders, to fetch data from web pages. Googlebot first checks the robots.txt file and then processes the page. It moves from one link to another with the help of the sitemap.xml file, and it indexes the website based on the priority assigned to each page.
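A sitemap.xml file lists a site's URLs along with optional hints such as priority. Below is a minimal sketch following the sitemaps.org protocol; the example.com URLs, dates, and priority values are placeholders, not taken from any real site.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- Hypothetical homepage entry: highest priority -->
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2014-01-01</lastmod>
    <priority>1.0</priority>
  </url>
  <!-- A secondary page with a lower priority hint -->
  <url>
    <loc>https://www.example.com/about</loc>
    <priority>0.5</priority>
  </url>
</urlset>
```

The priority value (0.0 to 1.0) is only a hint to crawlers about which pages you consider most important; it does not guarantee ranking.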
Unwanted pages can be blocked from Google's crawler by adding a "Disallow" directive to the robots.txt file.
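As a sketch, a robots.txt file placed at the site root might look like the following; the directory paths here are hypothetical examples, not recommendations for any particular site.

```
# Apply these rules to all crawlers
User-agent: *
# Block crawling of private or temporary areas
Disallow: /admin/
Disallow: /tmp/

# Optionally point crawlers to the sitemap
Sitemap: https://www.example.com/sitemap.xml
```

Note that Disallow only asks well-behaved crawlers not to fetch those paths; it is not an access control mechanism.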
Microsoft uses Bingbot (and previously Yahoo's crawler) for crawling. Yahoo Search is powered by Bing.
In the Webmaster Tools dashboard, the "Fetch as Google" option helps improve a site's indexing status by letting you submit links directly to Google.
[Image: Fetch with Google Crawler]