Robots.txt blocking Googlebot crawling pages

Yesterday Google suddenly stopped crawling my site. It says there’s a robots.txt issue, yet every guide I’ve read suggests the file is fine.

I thought perhaps it was the sitemap, but that seems fine too.

When I try to submit the sitemap in Google Search Console, it says robots.txt is blocking it and that it “couldn’t fetch”.

When I request indexing for specific pages in Search Console it states:
“Indexing request rejected”

A robots.txt test at Google states: “robots.txt fetch failed. You have a robots.txt file that we are currently unable to fetch. In such cases we stop crawling your site until we get hold of a robots.txt, or fall back to the last known good robots.txt file.”

I’ve read numerous guides and tried other sitemaps (with Yoast, for example), and nothing is changing. My Google search referrals are on the floor and my AdSense income has dropped hugely. All the tests I run elsewhere show no issues.


Hi @MurkyDepths, thanks for your post.

I see some blocking directives in your robots.txt related to the wp-admin folders.

You might need to update the robots.txt to allow crawling and remove the Allow directive; the file would look like:

User-agent: *
Disallow: /wp-admin

This allows crawling of everything (so there is no need for an Allow directive) except URLs under /wp-admin.
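If it helps, you can sanity-check rules like these locally before uploading the file. This is just a quick sketch using Python’s standard-library robots.txt parser (not an official Google tool, and `example.com` here stands in for your own domain):

```python
# Verify the suggested robots.txt rules behave as intended,
# using Python's built-in robots.txt parser.
import urllib.robotparser

rules = """\
User-agent: *
Disallow: /wp-admin
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# A regular page should be crawlable...
print(rp.can_fetch("Googlebot", "https://example.com/some-post/"))        # True
# ...while anything under /wp-admin stays blocked.
print(rp.can_fetch("Googlebot", "https://example.com/wp-admin/test.php"))  # False
```

Note this only checks the directives themselves; the “fetch failed” message suggests Google couldn’t retrieve the file at all, so it’s also worth confirming your server returns the file with a normal 200 response.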

I hope this helps.

Many thanks, I’ve now changed it. Hopefully Google stops giving me errors now.

Hi @MurkyDepths, I hope so too. Sometimes changes to robots.txt take a little while to show up in Google, but if you are patient it should clear up.