02 Apr/26

New Location for the Google Crawlers’ IP Range Files

Google moved its IP range JSON files on March 31, 2026. Your scripts may break if you don’t update them soon.

On March 31, 2026, Google made a short but important announcement: the IP range JSON files for Google’s crawlers have moved to a new location. Therefore, if your server, firewall, or scripts use the old path, you need to update them.

Moreover, the old path will not stay open forever. Google will phase it out within 6 months. In other words, act now – not later.

What Did Google Announce on March 31, 2026?

Google Search Central posted this update on its official blog. Gary from Google wrote it. Here is the key message:


“Since these ranges apply to more than just Google Search crawlers, we’re moving them to a more general location. We will eventually phase out the old locations and redirect them to the new ones within 6 months.” — Gary, Google Search Central

So the files are not deleted. They just moved. However, the old URLs will stop working in about 6 months. Therefore, update your systems as soon as you can.

OLD PATH – developers.google.com/search/apis/ipranges/

NEW PATH – developers.google.com/crawling/ipranges/

Both paths work right now. However, only the new path will work after 6 months. In addition, Google has already updated its own documentation to use the new path.
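If your scripts build these URLs from a hard-coded base, the fix can be a one-line string rewrite. A minimal sketch (the helper name `migrate_url` is our own, not anything Google provides):

```python
OLD_BASE = "https://developers.google.com/search/apis/ipranges/"
NEW_BASE = "https://developers.google.com/crawling/ipranges/"


def migrate_url(url: str) -> str:
    """Rewrite an IP-range URL from the old /search/apis/ base to the new /crawling/ base.

    URLs already on the new base pass through unchanged.
    """
    return url.replace(OLD_BASE, NEW_BASE)
```

Run this over every stored URL once, and the same code keeps working whether a given reference was already migrated or not.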

Why Did Google Move the IP Range Files?

The old path was under the /search/ directory. That made sense when these files were only for Google Search crawlers. However, that is no longer the case.

In fact, many Google services use these IP ranges now. This includes Google Shopping, Google News, Gemini, AdSense, and more. Therefore, a /search/ path no longer fits.

Moreover, Google moved its crawling documentation to a dedicated Crawling Infrastructure site back in November 2025. As a result, this IP range file move completes that shift. It is now all in one consistent place – under /crawling/.

This is not a change to the IP addresses themselves. The actual IP ranges stay the same. Only the location of the files has changed. Therefore, your website crawling will not be affected — only your scripts that fetch these files may break.

What Are the 4 IP Range JSON Files?

Google publishes four separate JSON files. Each one covers a different type of crawler. Here is what each file contains:

  • googlebot.json: Common crawlers – includes Googlebot for web and images
  • special-crawlers.json: Special crawlers — includes AdsBot and other Google services
  • user-triggered-fetchers.json: Fetchers triggered by a user action — like Google Cache
  • user-triggered-agents.json: Agents like Google-Agent — used by Project Mariner

All four files are now at the new /crawling/ipranges/ path. Therefore, update all four references in your systems – not just one.
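To avoid missing one of the four, you can generate every URL from the new base. A sketch assuming nothing beyond the filenames and path above:

```python
NEW_BASE = "https://developers.google.com/crawling/ipranges/"

FILES = [
    "googlebot.json",
    "special-crawlers.json",
    "user-triggered-fetchers.json",
    "user-triggered-agents.json",
]


def all_range_urls() -> list[str]:
    """Build the full URL for each of the four IP-range files at the new location."""
    return [NEW_BASE + name for name in FILES]
```

Whatever fetches these files can then loop over `all_range_urls()` instead of carrying four separate hard-coded strings.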

How Does This Affect Your Website?

For most website owners, this change has no direct SEO impact. Google’s crawlers will keep visiting your site as normal. However, there are some specific cases where this matters.

Who Is Directly Affected?

  • Developers with scripts that fetch the old /search/apis/ipranges/ path
  • Firewall rules that pull IP allowlists from the old URL automatically
  • Security tools or WAF configurations that reference the old path
  • CDN setups (like Cloudflare) that use the files to whitelist Googlebot
  • Custom monitoring dashboards that track Google crawler IPs

Who Is NOT Directly Affected?

  • Regular website owners who do not use custom firewall scripts
  • WordPress sites on managed hosting — no action needed
  • Sites that verify Googlebot through reverse DNS — that method stays the same
  • Anyone who does not manually fetch the IP range JSON files

Important: Google switched IP range updates from weekly to daily in March 2025. So these files change every day now. Therefore, if your system fetches the file automatically, update the URL path immediately.
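Because the files now change daily, an automated fetcher typically re-downloads the JSON and rebuilds its allowlist each time. Each file is a JSON object with a `prefixes` list of `ipv4Prefix`/`ipv6Prefix` entries; a hedged parsing sketch using only the standard library:

```python
import ipaddress


def parse_ranges(payload: dict) -> list:
    """Turn a Google IP-range JSON payload into ip_network objects.

    The payload's "prefixes" list holds entries like
    {"ipv4Prefix": "66.249.64.0/27"} or {"ipv6Prefix": "2001:4860:4801::/48"}.
    """
    networks = []
    for entry in payload.get("prefixes", []):
        prefix = entry.get("ipv4Prefix") or entry.get("ipv6Prefix")
        if prefix:
            networks.append(ipaddress.ip_network(prefix))
    return networks


def is_listed(ip: str, networks: list) -> bool:
    """Check whether an address falls inside any published range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in networks)
```

Refreshing the parsed list once a day keeps the allowlist in step with Google’s daily updates.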

What Is the Timeline for the Old Path to Be Removed?

Google gave a clear 6-month window. Here is what the timeline looks like:

Right Now

Both old and new paths work. No disruption yet. Use this time to update.

Within 6 Months

Old path will redirect to the new one. Scripts that don’t follow redirects may break.
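Whether your fetcher survives the redirect phase depends on the HTTP client. Python’s `urllib`, for example, follows 3xx redirects by default; a sketch (the URL guard under `__main__` is just so the function can be imported without hitting the network):

```python
import urllib.request


def fetch_ranges(url: str) -> bytes:
    """Download an IP-range file.

    urllib follows 3xx redirects automatically, so this keeps working
    while the old path redirects to the new one.
    """
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read()


if __name__ == "__main__":
    data = fetch_ranges("https://developers.google.com/crawling/ipranges/googlebot.json")
    print(len(data), "bytes")
```

Clients that do not follow redirects by default (for example, `curl` without `-L`) will break during the redirect window, so update the URL itself rather than relying on the redirect.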

After 6 Months

Old path is fully removed. Systems still using it will fail and get no data.

Therefore, you have time. However, do not wait until the last minute. Updating one URL path takes less than 5 minutes. Moreover, the risk of forgetting is real.

What Should You Do Right Now?

Here are the steps to take today. Follow them in order for the best result:

Check If You Use the Old URL Path

Search your codebase for /search/apis/ipranges/. Check your firewall rules and server configs too. If you find it, update it to /crawling/ipranges/ now.
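One way to audit a codebase is a small recursive scan for the old path (a `grep -r` does the same job; the function name here is our own):

```python
import pathlib

OLD_PATH = "/search/apis/ipranges/"


def find_stale_references(root: str) -> list[str]:
    """List files under root that still reference the old IP-range path."""
    hits = []
    for path in sorted(pathlib.Path(root).rglob("*")):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file: skip rather than abort the scan
        if OLD_PATH in text:
            hits.append(str(path))
    return hits
```

Every file this returns needs its URL updated to `/crawling/ipranges/`.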

Update All Four JSON File References

Do not update just one file. All four files have moved. Therefore, find and replace all four references in your system at the same time.

Test the New Path Works

Visit developers.google.com/crawling/ipranges/googlebot.json in your browser. Confirm the JSON file loads correctly before deploying your changes.
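Beyond loading the file in a browser, a deploy script can sanity-check the downloaded payload before swapping it into your firewall config. The two top-level keys checked here (`creationTime` and `prefixes`) are the ones the published files use:

```python
import json


def looks_like_ip_ranges(raw: str) -> bool:
    """Return True if raw parses as a Google IP-range payload:
    a JSON object with a "creationTime" string and a "prefixes" list.
    """
    try:
        payload = json.loads(raw)
    except ValueError:
        return False
    return (
        isinstance(payload, dict)
        and isinstance(payload.get("creationTime"), str)
        and isinstance(payload.get("prefixes"), list)
    )
```

If the check fails, keep the previous allowlist rather than deploying an empty or truncated one.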

Tell Your Hosting or CDN Provider

If you use a managed CDN like Cloudflare or Fastly, check their settings. Ask if they auto-fetch Google’s IP ranges. If so, ask them to update the path too.

Set a Reminder for the 6-Month Deadline

Even if you update now, set a calendar reminder. Aim to verify everything is working by August 2026. That gives you a buffer before Google removes the old path.

Why Does Googlebot’s IP Range Matter for SEO?

Most SEOs never think about IP ranges. However, they matter more than you might expect. Here is why.

First, some servers block unknown IPs by default. If Googlebot’s IP is blocked, it cannot crawl your site. As a result, your pages do not get indexed. Moreover, rankings can drop without you realising the cause.

In addition, security tools sometimes flag Googlebot as a bot threat. This is wrong. Therefore, keeping your Googlebot whitelist up to date prevents accidental crawl blocks.

According to Google’s official verification guide, the most reliable way to confirm a real Googlebot visit is through reverse DNS. The domain must end in googlebot.com or google.com. IP matching is the secondary method. Use both together for best security.
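The reverse-then-forward check can be sketched with the standard library. This is an illustration of the method Google’s guide describes, not official code:

```python
import socket


def verify_googlebot(ip: str) -> bool:
    """Verify a visitor claiming to be Googlebot.

    1. Reverse DNS: the PTR hostname must end in googlebot.com or google.com.
    2. Forward DNS: that hostname must resolve back to the same IP,
       so no one can fake the check with a spoofed PTR record alone.
    """
    try:
        host, _, _ = socket.gethostbyaddr(ip)
    except OSError:
        return False
    if not host.endswith((".googlebot.com", ".google.com")):
        return False
    try:
        return ip in socket.gethostbyname_ex(host)[2]
    except OSError:
        return False
```

Pair this with the IP-range match above: reverse DNS confirms identity, while the published ranges give you a fast pre-filter.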

What Else Changed with Google’s Crawling Infrastructure Recently?

This IP file move is part of a bigger shift. Google has been reorganising its crawling documentation for months. Here is a quick recap of related changes:

  • March 2025 — Google switched IP range updates from weekly to daily
  • November 2025 — Google moved crawling docs to a new dedicated site
  • February 2026 — Google reduced Googlebot’s fetch limit from 15MB to 2MB
  • March 20, 2026 — Google updated its crawler verification documentation
  • March 31, 2026 — Google moved IP range files to the new /crawling/ipranges/ path

Therefore, the March 31 move is not an isolated change. It is part of Google treating its crawlers as a shared infrastructure — not just a Search product. As a result, more services now fall under the same crawling framework.
