How do I add a crawler user agent?

I wanted to begin using agencyanalytics.com to run some SEO checks on my client’s sites.

As it turns out, their crawlers aren’t getting through to the pages.

My robots.txt includes:
User-agent: *
Disallow:

Their tech support told me to add “AA-Site-Audit-Crawler” as an exception. Their exact words:

“You have to white-list our crawler user agent on the server where that site resides. We use rotating IPs, so you’ll need to white-list by name. The name to use is AA-Site-Audit-Crawler. In this case you have to enable it since it’s being blocked by your server.”

The site is www.fantasyfootballmetrics.com

Is there a way to resolve this on my end?

Thanks in advance for the help!

Go to Project Settings → SEO.

The indexing setting can be modified there.

Thanks for the help!

Would I then add:

User-agent: AA-Site-Audit-Crawler
Disallow:

I tried adding both, but that didn’t seem to work. The file ended up looking like this:

User-agent: *
Disallow:

User-agent: AA-Site-Audit-Crawler
Disallow:

Do you want to let all bots in or just this referenced one?

I would like all bots for now.

That is correct then.

OK, that’s what I thought, because that is what it was originally set to. Their crawler still isn’t getting through, so I thought maybe I should add it as an allowed agent. Is that redundant then? How do I white-list a user agent by name?

If you already have the User-agent: * group with an empty Disallow, you don’t need to add anything else.

You would only need to white-list a bot by name if your rules blocked everything and you wanted to let certain bots through, as in the sketch below the link.

http://www.robotstxt.org/robotstxt.html
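
For example, a robots.txt that blocks everything by default but lets the AgencyAnalytics crawler through would look roughly like this (just a sketch of the block-then-allow pattern, not something your current setup needs):

# Block every bot by default
User-agent: *
Disallow: /

# Let the AgencyAnalytics site-audit crawler in by name
# (a bot follows the most specific User-agent group that matches it)
User-agent: AA-Site-Audit-Crawler
Disallow: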

Gotcha! Thanks for the help!

I ended up removing the rules completely, which I believe has the same effect as an empty Disallow (everything is allowed). Then I realized the crawler was pointed at a redirected URL, smh, so it’s all working now!
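
In case anyone else runs into this: if the URL you hand the tool redirects (say www to the bare domain), the robots.txt that matters is the one on the host that actually gets crawled. Purely as a hypothetical illustration, if www.fantasyfootballmetrics.com redirected to fantasyfootballmetrics.com, it’s the file at fantasyfootballmetrics.com/robots.txt that would need to stay open:

# Hypothetical robots.txt on the redirect target, allowing all bots
User-agent: *
Disallow: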