Googlebot blocked by robots.txt

Hi! My search console says Googlebot has been blocked by my robots.txt even though my txt file has allowed access to it. Does anyone have the same issue?
Could you help me?
File link: www.thegamingproject.co/robots.txt


There is something wrong with your robots.txt.
If you want Googlebot to crawl all of your website's pages, you can update it as follows:

User-agent: *
Allow: /

Sitemap: https://www.thegamingproject.co/sitemap.xml

For example, see this website's robots.txt (clipartkey): https://www.clipartkey.com/robots.txt

Keep in mind that if you want traffic from Google search, don't block Googlebot; otherwise it's hard to get any Google traffic at all.


Hi! Thanks for the reply!

So this is what my txt file says:
User-agent: *
Disallow:

Which basically means any bot is allowed to crawl. I checked with The Web Robots Pages & that's exactly what is allowed. Every other crawler checker on the net says the txt file grants access, but the live URL test on Google says the bot is blocked.
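For what it's worth, you can check this interpretation yourself with Python's standard-library robots.txt parser. This is just a sketch using your file's content inline (the URL is only illustrative):

```python
from urllib.robotparser import RobotFileParser

# The robots.txt content described above: a wildcard agent with an empty Disallow.
robots_txt = """\
User-agent: *
Disallow:
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# An empty Disallow value matches nothing, so every path is crawlable.
print(parser.can_fetch("Googlebot", "https://www.thegamingproject.co/"))  # True
```

So the stdlib parser agrees with the checkers: that file grants Googlebot access.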

You are wrong.
Disallow means "not allowed": bots can't access your website.
Allow means "allowed": bots can access your website.
User-agent: * means "any bot".

The "Disallow: /" tells the robot that it should not visit any pages on the site.

Please check the article carefully.

Oh. All right. I've updated it: www.thegamingproject.co/robots.txt

Is this all right?
Again, thanks so much 🙂

Well done. Hope your site does well in Google.

This was incorrect advice. The OP showed that his robots.txt contained

User-agent: *
Disallow:

Which seems to be the default for Webflow.

'Disallow:' (with no slash) means "disallow nothing", which is equivalent to 'Allow: /', or "allow all".
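To make the difference concrete, here is a small sketch using Python's stdlib `urllib.robotparser` (the example.com URL is just a placeholder):

```python
from urllib.robotparser import RobotFileParser

def allowed(robots_txt: str, agent: str, url: str) -> bool:
    """Parse a robots.txt body and ask whether `agent` may fetch `url`."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(agent, url)

# 'Disallow:' with no value disallows nothing -- everything is crawlable.
print(allowed("User-agent: *\nDisallow:", "Googlebot", "https://example.com/page"))    # True

# 'Disallow: /' blocks the whole site.
print(allowed("User-agent: *\nDisallow: /", "Googlebot", "https://example.com/page"))  # False
```

The empty-value rule and the slash rule do opposite things, which is exactly the confusion in this thread.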