
API Rate Limit inconsistency

It wouldn’t be good form for me to leave that sort of post on Webflow’s own public forum :slight_smile:
But feel free to message me if you have a specific question that I might be able to help you with.

It’s not just developers. Full-stack design hackers like me using Zapier and Integromat are hitting big walls. Just this week: can’t create an ecommerce product with iPaaS tools, and can’t upload a file when creating an item (the Webflow module is crashing the Integromat dev tools extension).


Yep I see your problem. It’s actually quite strange - most development platforms encourage integration and try to make life easy. Webflow seems to be doing the opposite? Maybe there is a business reason for it, I’m not sure. Or maybe this is just way down their priority list. It would be good to hear something from them but I certainly won’t be holding my breath.


I’ve had a few emails back and forth with Webflow support, and I’m starting to get the feeling that they don’t even know what’s going on with the API themselves. Or maybe they do, but don’t know how to fix it…

Here’s the latest Webflow support answer, directing me to a Redis throttle implementation issue (on the API end). If you can make any sense of this, please let me know… :slight_smile: :see_no_evil:

This GitHub post is public: so it is not private to just us.

Basically it tells us that since you are only performing a call once every minute, you are staying within the same polling bucket and therefore the bucket is not resetting. What you could do is switch your polling frequency to less than 10 seconds instead of 5 minutes that the rate limiter will most likely fix itself. Within that GH issue is a link to what someone else did to fix this rate limit issue.

Now, this is a one-off case for your issue and likely that it is specific to the polling frequency that you are using. It does appear that some adjustment has to be done on your end within the code you are using.

So “the fix” is to change my polling strategy to prevent falling into the same bucket on every request. So if you need to do a single request every minute (e.g. polling CMS data) you must accept a 28-minute unavailability period after every 60 cycles - meaning 7 hours of downtime every day (as I described and documented earlier in this thread).

I will continue my tests…

In answer to your original question about a solution, the best way I have found is to use a backoff and retry mechanism with a retry that persists for a long time.

I have used GCP pubsub to make this work. This is a (very simplified) overview of what I set up and the order of events that take place:

  1. My own database has a new entry.
  2. This triggers a webhook to a cloud function.
  3. The cloud function creates a pubsub message that a new entry has been created.
  4. Another cloud function subscribes to messages on this topic.
  5. On receipt of a new message, this cloud function updates Webflow accordingly.
  6. If the rate limit is hit, the process fails.
  7. If the process fails, it continues to retry until it succeeds for up to 1 week (this is a cloud function setting you can enable in GCP).

Not the most elegant solution, but it does work pretty well. You could end up in a situation where a newer update gets through before an older one that is being retried, so I probably should write something to handle that edge case when I get time…
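For readers who want the gist of the backoff-and-retry idea without standing up GCP Pub/Sub, here is a minimal sketch (not the poster's actual setup; all names and timings are illustrative): retry any async task with exponential backoff until it succeeds or a long deadline expires.

```javascript
// Hedged sketch of backoff-and-retry: `task` is any async function (e.g. a
// Webflow API call). Retries with exponentially growing delays, capped at
// `maxDelayMs`, and gives up only after `deadlineMs` (~1 week by default).
async function retryWithBackoff(
  task,
  { baseMs = 1000, maxDelayMs = 60_000, deadlineMs = 7 * 24 * 3600 * 1000 } = {}
) {
  const start = Date.now();
  for (let attempt = 0; ; attempt++) {
    try {
      return await task();
    } catch (err) {
      if (Date.now() - start >= deadlineMs) throw err; // deadline exceeded
      const delayMs = Math.min(baseMs * 2 ** attempt, maxDelayMs);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```

This mirrors step 7 above: a failed update just keeps retrying until it eventually lands.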

Thanks for the description of your workaround, @jasondark

I got this from support last night:

I wanted to start by saying thanks for the constructive and thoughtful feedback. I’ve shared this with our team.

I also want to say thanks for reporting this behavior. This is an issue that was unknown to us, and you’re reporting it will help improve our API not only for yourself but for many other users as well.

The team is planning on rolling out a fix for this, but I don’t have a time frame for when that might be available. In the meantime, the only workaround when using the API is to use the methods mentioned by our engineers in the previous message.

As soon as we have an update on a solution, I’ll circle back with you and make sure you are notified.

Based on your experience with Webflow support, an actual fix for this issue is not going to happen any time soon, so I’m stuck trying to build a workaround for our setup. GCP Pub/Sub is not an option in our current setup, so I will have to come up with another way to handle the rate limit.

I have tried different request patterns to find the “sweet spot”, but the remaining count has been very inconsistent up until now (going from 1 to currently 5 requests per minute).

Similar to what @DukeDiamond suggested, I’m also trying to perform N < 60 requests on a 5-minute cycle and, once the N requests have been submitted, “flood” the API with the remaining count to hopefully reset the polling bucket.

I will keep you posted on my progress…

While we are at it: the Get Items limit of 100 items per request is ridiculous!


Interesting to hear that they had no idea about the rate limit issues that have been getting raised on these forums and the wishlist for years :joy:

Another option you could look at is building a queuing mechanism for your requests. Then you can execute your requests in sequential order: if one fails, you pause your queue until it succeeds, and so on and so forth. I may have to move to something like this myself if I run into issues with GCP Pub/Sub’s asynchronous nature, where a failed request may not get through before newer ones do.
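A minimal sketch of such a queue (names and retry settings are illustrative, not a production implementation): each pushed task runs strictly after the previous one finishes, and a failing task blocks the queue while it retries, so updates can never overtake each other.

```javascript
// Hedged sketch of a pause-on-failure sequential queue. Each task is chained
// onto the previous one's promise, so ordering is preserved; a failure is
// retried in place (blocking everything behind it) up to `maxAttempts` times.
class SequentialQueue {
  constructor() {
    this.tail = Promise.resolve();
  }
  push(task, { retryDelayMs = 1000, maxAttempts = 10 } = {}) {
    this.tail = this.tail.then(async () => {
      for (let attempt = 1; ; attempt++) {
        try {
          return await task();
        } catch (err) {
          if (attempt >= maxAttempts) throw err; // give up, unblock callers
          await new Promise((resolve) => setTimeout(resolve, retryDelayMs));
        }
      }
    });
    return this.tail;
  }
}
```

The trade-off versus Pub/Sub is throughput: everything is serialized, but you get strict ordering for free.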


I am sure glad I found this thread. After seeing these issues, I was baffled.

I’m rate-limiting my queries, with mixed results on different endpoints.

The main issues are with the GET /collections/:collection_id/items endpoint.

The docs say the limit is 60 requests per minute, and these headers come back on each response:

X-RateLimit-Limit & X-RateLimit-Remaining

On my requests I am seeing X-RateLimit-Limit at 120, not 60.

But when the remaining count gets anywhere near 50-60, it fails unless I apply a hard 1.2-second bottleneck, even though it’s nowhere near the 120 limit.

You cannot even rely on X-RateLimit-Remaining, as it’s not accurate. It will say there are 59 left and then throw a 429.

Even if I wait an hour and try again without any activity, it starts with X-RateLimit-Remaining at low numbers like 45.

What is really weird is that I do not have this issue with PUT /collections/:collection_id/items/:item_id.

This PUT endpoint behaves just fine. I can index 6,000 items to Webflow smoothly at 120 per minute and never have any failures. X-RateLimit-Remaining gets down to 5-15, but it never fails.

The same cannot be said for the GET items endpoint.

This inconsistency between the GET and PUT requests seems odd.


I experience the same problems as described above. The API rate limits are clearly broken, and the inconsistencies make it nearly impossible to use the API in any serious integration.

I would be happy to hear a response from Webflow support here.


Hi @andyjames, can you tell me what plan you are on?

The proper approach seems to be like @jasondark suggested on April 9.

If you, like me, feel that setting up a queue and auto-failover system is a bit too much, I have created this small script that you can plug into your existing code; it handles any rate limit errors for you with very little change to your existing code.

The biggest disadvantage of this script is the risk of timing out, if the Webflow API does not provide a new pool of requests within the script timeout you have configured.
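The script itself isn’t quoted in the thread, so here is a hedged sketch of the same idea: wrap an existing API call, and on an HTTP 429 keep retrying until a hard timeout expires, which is exactly the failure mode described above. `call` is a placeholder for any function returning a response-like object.

```javascript
// Hedged sketch, not the poster's actual script: retry a call on HTTP 429
// until it succeeds or `timeoutMs` elapses. If Webflow never provides a
// fresh pool of requests within the timeout, this throws (the disadvantage
// mentioned above).
async function withRateLimitRetry(call, { retryDelayMs = 5000, timeoutMs = 120_000 } = {}) {
  const deadline = Date.now() + timeoutMs;
  for (;;) {
    const res = await call();
    if (res.status !== 429) return res;          // not rate-limited: done
    if (Date.now() + retryDelayMs > deadline) {  // next retry would overshoot
      throw new Error('Timed out waiting for a fresh rate-limit pool');
    }
    await new Promise((resolve) => setTimeout(resolve, retryDelayMs));
  }
}
```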

Hi @WebDev_Brandon, the site is on a full hosting package. We are testing with collections of up to 10,000 records.

I’ve spent much more time on this and now have a working solution. I am using Bottleneck to handle the requests: sending lots of requests initially and then slowing down afterwards using Bottleneck’s reservoir feature, so that smaller batches run faster. It includes a formula that works out the Bottleneck settings based on the collection size, so it remains performant at all collection sizes.
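For reference, a Bottleneck reservoir configuration along these lines looks roughly like the fragment below. The numbers are illustrative only; the poster derives theirs from the collection size with a formula that isn’t shown in the thread.

```javascript
// Hedged config sketch (illustrative numbers, not the poster's formula):
// Bottleneck's reservoir allows an initial burst, then refills on a schedule.
const Bottleneck = require('bottleneck');

const limiter = new Bottleneck({
  reservoir: 50,                        // initial burst budget
  reservoirRefreshAmount: 50,           // tokens restored each interval
  reservoirRefreshInterval: 60 * 1000,  // refill once per minute
  maxConcurrent: 1,                     // strictly sequential requests
  minTime: 250,                         // minimum gap between requests (ms)
});

// Usage: wrap each API call so Bottleneck paces it, e.g.
// limiter.schedule(() => fetchCollectionItems(offset));
```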

The simple solution is an async/await loop with a forced buffer of 1-1.2 seconds between each request.

I believe this is similar to the solution posted by @fbcto recently; I see await sleep(1000) there.
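A runnable sketch of that simple approach (names are illustrative; `handler` stands in for the actual API call): process items strictly one at a time with a forced ~1.2 s buffer, which keeps a large job safely under 60 requests per minute.

```javascript
// Minimal sketch of the "forced buffer" approach: one request at a time,
// with a fixed pause between requests. ~1200 ms keeps you under 60 req/min.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function processSequentially(items, handler, pauseMs = 1200) {
  const results = [];
  for (const item of items) {
    results.push(await handler(item)); // one request at a time
    await sleep(pauseMs);              // forced buffer between requests
  }
  return results;
}
```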

This time delay is needed for large collections, but it is no good for people with smaller collections, which could run a lot faster by taking advantage of 60 requests × 100 items a minute. (Weirdly, the Items API’s X-RateLimit-Remaining says it allows 120, but it never gets close and looks more like 60; see below.)

One thing I have done is use a recursive function that makes one call to Webflow to read the remaining request count. If it is below 90, it waits until it is above 90 before running again. This works perfectly to get around the limit issues, especially when you get data from one collection and then another: the second collection starts with a lower remaining count, so this check becomes important.
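A sketch of that guard (hedged: `fetchRemaining` is a placeholder you supply, e.g. one GET whose X-RateLimit-Remaining header you read; the threshold of 90 comes from the post above):

```javascript
// Hedged sketch: poll the remaining-request count and only proceed once it
// is back above a safety threshold. `fetchRemaining` is a placeholder for
// one cheap API call whose X-RateLimit-Remaining header you read.
async function waitForHeadroom(fetchRemaining, { threshold = 90, pollMs = 5000 } = {}) {
  for (;;) {
    const remaining = await fetchRemaining();
    if (remaining > threshold) return remaining; // enough headroom to proceed
    await new Promise((resolve) => setTimeout(resolve, pollMs));
  }
}
```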

The issue I see, based on hours of testing, is that something really seems off with the counting on GET /collections/:collection_id/items and its returned X-RateLimit-Remaining value. It says 120 on a fresh start, but I am not convinced.

The only way I can get this endpoint to work is to make sure I never drop below 60 remaining. If it’s 59 when the job finishes, it will bomb! It’s like it’s counting 60 more than it should, or it’s not allowing the 120 it says it is.

One last thing, I want to reiterate: the above is all about the Get Items API. When I use the API to put data INTO Webflow, I can run at 120 requests a minute and never have issues; I can go all the way down to 1 request remaining. It seems strange to me that I do not experience the same issues with both APIs.


This weird and totally unnecessary API behavior is wreaking havoc on our site. We use the WordPress plugin “Webflow Pages”, which apparently uses the API for fetching the static and dynamic pages. This has resulted in us hitting the rate limits over and over again, taking the entire site offline. Not only that, but the API key gets decoupled from the plugin and we have to re-integrate the entire site manually again.

To make matters worse, we’re using the API to offload older articles from the Webflow CMS into our WordPress database. Webflow themselves told me that to search through our thousands of articles we should run 50-70 queries with offsets just to list all articles - but that also hits the rate limit.

This is so frustrating, Webflow.


I think the reason we’re having issues with the GET method is a faulty Redis caching mechanism on Webflow’s part (see also the support statements about Redis caching here: API Rate Limit inconsistency).

There’s no caching in place for the other methods, so we’re not seeing issues there.

I took note :slight_smile:


Just an FYI, the WP Plugin does not connect to anything dynamic on Webflow. It is strictly for use with static pages.

I have also been informed that the Business Hosting plan does have a 120 rate limit, @andyjames you are correct there. We were not informed of this change until recently.

Have a Great Day and Happy Designing,
~ Brandon

Just dropping a useful link here:

This lib helped me deal with Webflow’s rate limit pretty well. Very solid interface.

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.