Are Rate Limit Headers Broken?

Every request I send returns the same rate limit headers regardless of how fast I send them:

{
  'ratelimit-limit': '800',
  'ratelimit-remaining': '799',
  'ratelimit-reset': <current epoch>
}

I’m able to hit the rate limit, but the ratelimit-remaining header never changes from 799 and the ratelimit-reset header is always the current epoch time. Do these headers still work?

It’s working as expected; you are just not “hammering” hard enough to cause point depletion.

The rate limit is constantly being refilled, so with the default rate limit of 800/60s, it refills at a rate of roughly 1 point every 75ms.

This means that if you’re making requests slower than one every 75ms, the bucket essentially fills back up moments after each request. If you make requests faster than one every 75ms, the bucket is used up faster than it is refilled and that 799 will progressively go down (how quickly it goes down depends on how much faster than once every 75ms you’re making requests).

Once you reach 0 rate limit remaining, the bucket still continues to refill, so you can make 1 request every 75ms and all others will return a 429 rate limit exceeded error. If you slow down and make requests slower than once every 75ms, the rate limit remaining will start to go back up (again, how quickly depends on how much slower than that 75ms you’re making requests).
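
To make that concrete, here is a minimal sketch of a continuously refilling bucket using the default 800/60s numbers (the class and names are just for illustration, not Twitch’s actual implementation):

// Minimal token-bucket sketch: 800 points per 60s, refilling continuously,
// i.e. roughly 1 point every 75ms. Illustration only.
class RefillingBucket {
  private points: number;
  private lastRefill: number;

  constructor(private limit = 800, private windowMs = 60_000) {
    this.points = limit;
    this.lastRefill = Date.now();
  }

  // Add back points in proportion to the time elapsed since the last refill.
  private refill(now: number): void {
    const pointsPerMs = this.limit / this.windowMs; // ~0.0133 points per ms
    this.points = Math.min(this.limit, this.points + (now - this.lastRefill) * pointsPerMs);
    this.lastRefill = now;
  }

  // Returns true if the request is allowed, false if it would be a 429.
  tryRequest(cost = 1, now = Date.now()): boolean {
    this.refill(now);
    if (this.points < cost) return false; // bucket empty, request rejected
    this.points -= cost;
    return true;
  }
}

// A caller slower than ~75ms per request never drains this bucket;
// a caller at 10ms per request drains it by roughly 0.87 points per call.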

It seems @BarryCarlyon and @Dist missed this or misinterpreted it. If you are able to hit a limit, that means you are being limited. If that is the case but the headers do not reflect that—as evidenced by points remaining and the reset at the current time—then it is absolutely not working as expected and is a bug.

Hmm yes,

So op,

What endpoint(s) are you calling? What are you doing to manifest a 429/rate limit hit with healthy helix points left?

You can get 429’s from the ban/unban endpoints (or the channel add/remove words endpoints, for example) and still have a healthy Helix point count left. (And a few others, but this is off the top of my head.)

The same is true of the Create Clip endpoint, which has two rate limits.

So what endpoint(s) are you calling, and what’s the response code(s) if you are hitting the limit?

The Get Banned Users endpoint

I’m guessing this is one that has another hidden rate limit?

I see ok.

The docs imply something else

When your app calls the endpoint, the endpoint’s points value is subtracted from the remaining points in your bucket. If your bucket runs out of points within 1 minute, the request returns status code 429.

From:

I interpret this as the bucket resetting every minute rather than the leaky bucket algorithm you mention. Perhaps the docs can be revised to avoid the confusion?
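
For what it’s worth, here is roughly how the two readings would differ in practice (an assumed comparison to illustrate the confusion, not a statement of how Twitch implements it):

// Illustration only: what each reading of the docs would predict.
//
// Fixed 1-minute window: ratelimit-remaining decrements on every call and only
// jumps back to 800 at the window boundary, so even a slow caller would see
// 799, 798, 797, ... across the minute.
//
// Continuous refill: a spent point is back roughly 75ms later, so any caller
// slower than ~13 requests/s keeps seeing a remaining of 799 and a reset of
// "now", which is what the headers in the original post show.
const limit = 800;
const refillIntervalMs = 60_000 / limit; // ~75ms per point
console.log(`one point refills every ~${refillIntervalMs}ms`);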

I should also note that the response body is:

{
  error: 'Too Many Requests',
  status: 429,
  message: 'The app has exceeded the number of requests it may make per minute for this broadcaster.'
}

Again, the message implies a 1-minute bucket reset rather than the leaky bucket behavior mentioned above.

How many requests are you making, and how frequently? Also, is it just that endpoint, or are you experiencing 429’s with any endpoint?

I can’t reproduce the issue you’re experiencing; my rate limit is acting as intended, which is what I previously stated (if I use one of my client IDs with a default rate limit, it stays at 799 when I’m making requests slower than one every 75ms, and it goes down when I make them faster). I also don’t experience any issue with that endpoint in particular; it returns a 429 only when my bucket is empty. I do know there have been issues in the past with that endpoint though, so I’ll see if they may have resurfaced.

I created a separate test project sending ~1 request every 10ms. I do now see the ratelimit-remaining header decreasing like you outlined before; however, I hit a rate limit at around 10 requests, with ~790 points still in my bucket, and the response body is what I posted above.
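
The test loop was roughly this shape (just a sketch; the credential values are placeholders, and Get Banned Users needs a user access token with the moderation:read scope):

// Fire one Get Banned Users request every ~10ms and log the rate limit headers.
const CLIENT_ID = '<client id>';
const TOKEN = '<user access token with moderation:read>';
const BROADCASTER_ID = '<broadcaster id>';

const url = `https://api.twitch.tv/helix/moderation/banned?broadcaster_id=${BROADCASTER_ID}&first=100`;
const headers = { 'Client-Id': CLIENT_ID, Authorization: `Bearer ${TOKEN}` };

let sent = 0;
const timer = setInterval(() => {
  const n = ++sent;
  if (n > 500) {
    clearInterval(timer);
    return;
  }
  fetch(url, { headers }).then((res) =>
    console.log(n, res.status, res.headers.get('ratelimit-remaining'), res.headers.get('ratelimit-reset')),
  );
}, 10);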

Do you know if this has another hidden rate limit like @BarryCarlyon mentioned?

My test script, fetching 100 bans per page for a second broadcaster until all bans were fetched, completed normally and collected all 24k bans. So 241 pages, all collected in just under a minute, with no decrease in the Helix rate limit since the pages were loaded sequentially.
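
That loop was roughly this shape (a sketch only, with placeholder credentials, not my actual script):

// Sequentially page through Get Banned Users using the pagination cursor.
const CLIENT_ID = '<client id>';
const TOKEN = '<user access token with moderation:read>';
const BROADCASTER_ID = '<broadcaster id>';

async function fetchAllBans(): Promise<number> {
  let cursor: string | undefined;
  let total = 0;
  do {
    const url = new URL('https://api.twitch.tv/helix/moderation/banned');
    url.searchParams.set('broadcaster_id', BROADCASTER_ID);
    url.searchParams.set('first', '100');
    if (cursor) url.searchParams.set('after', cursor);

    const res = await fetch(url, {
      headers: { 'Client-Id': CLIENT_ID, Authorization: `Bearer ${TOKEN}` },
    });
    const body = await res.json();
    total += body.data.length;
    cursor = body.pagination?.cursor; // absent on the last page
  } while (cursor);
  return total;
}

fetchAllBans().then((total) => console.log(`fetched ${total} bans`));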

And I did a little spamming on 121 (single-user) lookups and it didn’t want to trip for me either.

So are you attempting to get a channel’s entire list, or doing many 121 (single-user) lookups?
Given you can also do 100 user_ids per call.

What is your use case that’s tripping the limit? I don’t think this endpoint has a hidden limit (but who knows, they might have added one due to the system being under load for whatever reason), but there could be some other factor at play.

I don’t see an example body/call for what you sent, only responses.

The requests made in the original project were sequential and paginated with pages of 100 as well. The rate limit was hit at around 47k users, so ~470 requests.

Under either bucket algorithm, those ~470 requests still should not have emptied the bucket.
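
As a rough sanity check (assuming each sequential round trip takes at least the ~75ms it takes one point to refill, which is an assumption rather than a measurement):

// Back-of-the-envelope check with the default 800/60s limit.
const limit = 800;
const refillPerMs = limit / 60_000;            // ~0.0133 points per ms
const assumedRoundTripMs = 75;                 // assumption: each request takes >= 75ms
const netDrainPerRequest = 1 - refillPerMs * assumedRoundTripMs;
console.log(netDrainPerRequest);               // ~0, so sequential requests should never empty the bucket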

My test project was mostly just to confirm what @Dist said about how the rate limit actually works. Since I was testing this on my own account with only around 100 banned users, it was the same request of page size 100 sent on a 10ms interval with no cursor.

In both cases, the requests were made on their own client IDs, with no other Helix requests being sent on them.

I changed my test project to make the requests sequential with no cursor, and I did not hit any limit with a test of 2,500 requests spanning ~5 minutes, so now I’m really not sure why I hit a limit originally. But if I change the page size in my test project to 1, I hit a rate limit at ~700 requests, with 799 points in my bucket.

thanks for looking into it :slight_smile:

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.