
Google Introduces New Crawler To Optimize Googlebot’s Performance


Google has recently introduced a new web crawler called “GoogleOther,” designed to alleviate strain on Googlebot, its primary search index crawler.

The addition of this new crawler will ultimately help Google optimize and streamline its crawling operations.

Web crawlers, also known as robots or spiders, automatically discover and scan websites.

Googlebot is responsible for building the index for Google Search.

GoogleOther is a generic web crawler that will be used by various product teams within Google to fetch publicly accessible content from websites.

In a LinkedIn post, Google Search Analyst Gary Illyes shares more details.

Dividing Responsibilities Between Googlebot & GoogleOther

The main purpose of the new GoogleOther crawler is to take over the non-essential tasks currently performed by Googlebot.

By doing so, Googlebot can now focus solely on building the search index utilized by Google Search.

Meanwhile, GoogleOther will handle other jobs, such as research and development (R&D) crawls, which are not directly related to search indexing.

Illyes states on LinkedIn:

“We added a new crawler, GoogleOther to our list of crawlers that ultimately will take some strain off of Googlebot. This is a no-op change for you, but it’s interesting nonetheless I reckon.

As we optimize how and what Googlebot crawls, one thing we wanted to ensure is that Googlebot’s crawl jobs are only used internally for building the index that’s used by Search. For this we added a new crawler, GoogleOther, that will replace some of Googlebot’s other jobs like R&D crawls to free up some crawl capacity for Googlebot.”

GoogleOther Inherits Googlebot’s Infrastructure

GoogleOther shares the same infrastructure as Googlebot, meaning it is subject to the same features and limitations: host load limits, robots.txt rules (albeit with a different user-agent token), HTTP protocol version, and fetch size.

Essentially, GoogleOther is Googlebot operating under a different name.

Implications For SEOs & Site Owners

The introduction of GoogleOther should not significantly impact websites, as it operates using the same infrastructure and limitations as Googlebot.

Nonetheless, it’s a noteworthy development in Google’s ongoing efforts to optimize and streamline its web crawling processes.

If you’re concerned about GoogleOther, you can monitor it in the following ways:

  • Analyze server logs: Regularly review server logs to identify requests made by GoogleOther. This will help you understand how often it crawls your website and which pages it visits (a minimal log-parsing sketch follows this list).
  • Update robots.txt: Ensure your robots.txt file is updated to include specific rules for GoogleOther if necessary. This will help you control its access and crawling behavior on your website.
  • Monitor crawl stats in Google Search Console: Keep an eye on crawl stats within Google Search Console to observe any changes in crawl frequency, crawl budget, or the number of indexed pages since the introduction of GoogleOther.
  • Track website performance: Regularly monitor your website’s performance metrics, such as load times, bounce rates, and user engagement, to identify any potential correlations with GoogleOther’s crawling activities. This will help you detect if the new crawler is causing any unforeseen issues on your website.
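As a starting point for the log-analysis step, here is a minimal sketch that counts GoogleOther requests per URL in a combined-format access log. The log path and regex are illustrative assumptions, not specifics from the article; adjust them to your server's actual log location and format. Matching on the "GoogleOther" user-agent token reflects Google's note that the new crawler uses its own token.

```python
# Minimal sketch: count GoogleOther requests per path in a combined-format access log.
# Assumptions: the log path and combined log format below are illustrative; adjust the
# regex for your server's actual format.
import re
from collections import Counter

LOG_PATH = "/var/log/nginx/access.log"  # hypothetical path; change for your server

# Combined log format: IP - - [time] "METHOD /path HTTP/x.x" status size "referer" "user-agent"
LINE_RE = re.compile(r'"\w+ (?P<path>\S+) HTTP/[^"]*" \d{3} \S+ "[^"]*" "(?P<ua>[^"]*)"')

hits = Counter()
with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        match = LINE_RE.search(line)
        # Keep only requests whose user-agent string contains the GoogleOther token.
        if match and "GoogleOther" in match.group("ua"):
            hits[match.group("path")] += 1

print(f"Total GoogleOther requests: {sum(hits.values())}")
for path, count in hits.most_common(20):
    print(f"{count:6d}  {path}")
```

If you decide GoogleOther should be restricted separately from Googlebot, the documented mechanism is a robots.txt group keyed to its user-agent token (a `User-agent: GoogleOther` group with your own `Disallow` rules), since the crawler honors robots.txt under its own token.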

Source: Google






Source link: Searchenginejournal.com
