Why is re-acquiring with delay expected to be successful? #185

@a1d4r

Description

Hi, I tried to use this library to avoid exceeding the rate limits of an external API. I tested the rate limiter locally with one user and everything worked like a charm. But when I deployed the rate limiter in production with 100 users, I started catching exceptions in Sentry:

Re-acquiring with delay expected to be successful, if it failed then either clock or bucket is probably unstable

and

  File "/opt/pysetup/.venv/lib/python3.12/site-packages/pyrate_limiter/limiter.py", line 322, in _handle_async_result
    result = await result
                   └ <coroutine object Limiter.handle_bucket_put.<locals>._put_async at 0xed5962020930>
  File "/opt/pysetup/.venv/lib/python3.12/site-packages/pyrate_limiter/limiter.py", line 257, in _put_async
    result = await result
                   └ <coroutine object Limiter.delay_or_raise.<locals>._handle_async at 0xed59621c8540>
  File "/opt/pysetup/.venv/lib/python3.12/site-packages/pyrate_limiter/limiter.py", line 180, in _handle_async
    delay = await delay
                  └ <coroutine object AbstractBucket.waiting.<locals>._calc_waiting_async at 0xed5962096500>
  File "/opt/pysetup/.venv/lib/python3.12/site-packages/pyrate_limiter/abstracts/bucket.py", line 99, in _calc_waiting_async
    return _calc_waiting(bound_item)
           │             └ <pyrate_limiter.abstracts.rate.RateItem object at 0xed596209ddc0>
           └ <function AbstractBucket.waiting.<locals>._calc_waiting at 0xed59621eb6a0>
  File "/opt/pysetup/.venv/lib/python3.12/site-packages/pyrate_limiter/abstracts/bucket.py", line 83, in _calc_waiting
    assert self.failing_rate is not None  # NOTE: silence mypy
           │    └ None
           └ <pyrate_limiter.buckets.redis_bucket.RedisBucket object at 0xed5962fd87a0>

AssertionError: assert self.failing_rate is not None  # NOTE: silence mypy

I don't quite get why re-acquiring has to be successful. If there are multiple concurrent workers sending requests to the API, many of them might exhaust the rate limit and go to asyncio.sleep. After the delay, a worker might exceed the limit again if other workers made requests in the meantime. I was thinking about implementing a queue with exactly one consumer that would send the API requests, but in that case I need the response back. Implementing RPCs is not quite trivial and requires a message broker.
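For what it's worth, within a single process the single-consumer idea doesn't need a broker: each job can carry an asyncio.Future that the consumer resolves, so callers get their response back. This is only a minimal sketch with made-up names (`request`, `consumer`, the fake "response for …" payload) standing in for the real API call and limiter; a multi-process deployment would still need RPC as described above.

```python
import asyncio


async def single_consumer_demo():
    # One shared queue feeding exactly one consumer, so only this task
    # ever talks to the external API (and to the rate limiter).
    queue: asyncio.Queue = asyncio.Queue()

    async def consumer():
        while True:
            payload, fut = await queue.get()
            # A real worker would await the rate limiter and call the
            # external API here; we fake the response for illustration.
            fut.set_result(f"response for {payload}")
            queue.task_done()

    worker = asyncio.create_task(consumer())

    async def request(payload):
        # Callers enqueue a job plus a Future, then await the Future
        # to receive the consumer's response -- a poor man's RPC.
        fut = asyncio.get_running_loop().create_future()
        await queue.put((payload, fut))
        return await fut

    results = await asyncio.gather(*(request(i) for i in range(3)))
    worker.cancel()
    return results


results = asyncio.run(single_consumer_demo())
```

Since the single consumer serializes all API calls, it can never race other workers past the limit, which sidesteps the re-acquire failure entirely.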
