You can pass a `Map<string, number>` as the `ephemeralCache` option. If no value is provided, `ephemeralCache` is initialized with `new Map()`. To disable the cache, pass `ephemeralCache: false`.
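A minimal sketch of setting the option, assuming the `@upstash/ratelimit` and `@upstash/redis` packages (the limiter values here are arbitrary):

```typescript
import { Ratelimit } from "@upstash/ratelimit";
import { Redis } from "@upstash/redis";

// Provide your own Map as the ephemeral cache
// (or pass `ephemeralCache: false` to disable it).
const cache = new Map<string, number>();

const ratelimit = new Ratelimit({
  redis: Redis.fromEnv(),
  limiter: Ratelimit.slidingWindow(10, "10 s"),
  ephemeralCache: cache,
});
```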
If enabled, the ratelimiter keeps track of the blocked identifiers and their reset timestamps. When a request is received with some identifier `ip1` before the reset time of `ip1`, the request is denied without having to call Redis.
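Conceptually, the cache is just a map from identifier to reset timestamp. The following is an illustrative sketch of that check, not the library's internal code:

```typescript
// Illustrative only: an ephemeral cache is conceptually a map
// from identifier to the timestamp (ms) at which its block resets.
const blocked = new Map<string, number>();

function isBlocked(identifier: string, now: number): boolean {
  const reset = blocked.get(identifier);
  if (reset === undefined) return false; // unknown identifier: must ask Redis
  if (now >= reset) {
    blocked.delete(identifier); // window elapsed, forget the entry
    return false;
  }
  return true; // still blocked: deny without a Redis round trip
}

blocked.set("ip1", 2_000);
console.log(isBlocked("ip1", 1_000)); // true: before ip1's reset time
console.log(isBlocked("ip1", 3_000)); // false: the window has elapsed
```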
In serverless environments this is only possible if you create the cache or ratelimiter
instance outside of your handler function. While the function is still hot, the
ratelimiter can block requests without having to request data from Redis, thus
saving time and money.
See the section on how caching affects cost on the costs page.
When a `timeout` is provided, the request is allowed to pass if Redis does not respond within that time.
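For example, a sketch assuming the `@upstash/ratelimit` package (the 1000 ms value is arbitrary):

```typescript
import { Ratelimit } from "@upstash/ratelimit";
import { Redis } from "@upstash/redis";

const ratelimit = new Ratelimit({
  redis: Redis.fromEnv(),
  limiter: Ratelimit.slidingWindow(10, "10 s"),
  timeout: 1000, // ms: allow the request if Redis has not answered by then
});
```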
Set the `analytics` parameter to `true`:
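A sketch of the configuration, assuming the `@upstash/ratelimit` package (the limiter values are arbitrary):

```typescript
import { Ratelimit } from "@upstash/ratelimit";
import { Redis } from "@upstash/redis";

const ratelimit = new Ratelimit({
  redis: Redis.fromEnv(),
  limiter: Ratelimit.slidingWindow(10, "10 s"),
  analytics: true, // collect success/failure counts per identifier per hour
});
```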
Whenever you call `ratelimit.limit()`, analytics are sent to the Redis database (see the costs page), and information about the hour, the identifier, and the number of rate limit successes and failures is collected. This information can be viewed in the Upstash console.
If you are using rate limiting in Cloudflare Workers, Vercel Edge, or a similar environment, you need to make sure that the analytics request is delivered to Redis correctly. Otherwise, you may observe lower numbers than the actual number of calls.
To make sure that the request completes, you can use the `pending` field returned by the `limit` method. See the Asynchronous synchronization between databases section for how `pending` can be used.
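A sketch of the pattern in a Cloudflare Worker — the handler shape and header choice are illustrative, and `ratelimit` is assumed to be constructed outside the handler as discussed above:

```typescript
import { Ratelimit } from "@upstash/ratelimit";
import { Redis } from "@upstash/redis";

// Created outside the handler so the ephemeral cache survives between requests.
const ratelimit = new Ratelimit({
  redis: Redis.fromEnv(),
  limiter: Ratelimit.slidingWindow(10, "10 s"),
  analytics: true,
});

export default {
  async fetch(request: Request, env: unknown, ctx: ExecutionContext) {
    const identifier = request.headers.get("CF-Connecting-IP") ?? "anonymous";
    const { success, pending } = await ratelimit.limit(identifier);

    // Keep the worker alive until the analytics/sync request completes.
    ctx.waitUntil(pending);

    if (!success) return new Response("Too Many Requests", { status: 429 });
    return new Response("OK");
  },
};
```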
By default, each call to `limit` subtracts 1 from the number of calls/tokens available in the timeframe. But there are use cases where we may want to subtract a different amount depending on the request. Consider a case where we receive some input from the user, either alone or in batches. If we want to rate limit based on the number of inputs the user can send, we need a way to specify what value to subtract.
This is possible thanks to the `rate` parameter. Simply call the `limit` method like the following, and `batchSize` will be subtracted instead of 1.
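A sketch of such a call — the `handleBatch` helper and the token bucket values are illustrative:

```typescript
import { Ratelimit } from "@upstash/ratelimit";
import { Redis } from "@upstash/redis";

const ratelimit = new Ratelimit({
  redis: Redis.fromEnv(),
  limiter: Ratelimit.tokenBucket(100, "1 h", 100),
});

async function handleBatch(identifier: string, inputs: string[]) {
  // Subtract one token per input instead of the default 1 per call.
  const batchSize = inputs.length;
  const { success } = await ratelimit.limit(identifier, { rate: batchSize });
  if (!success) throw new Error("rate limit exceeded");
  // ...process the inputs
}
```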
`MultiRegionRatelimit` replicates the state across multiple Redis databases while also offering lower latencies to more of your users.
`MultiRegionRatelimit` does this by checking the current limit in the closest database and returning immediately. Only afterwards is the state asynchronously replicated to the other databases, leveraging CRDTs. Due to the nature of distributed systems, there is no way to guarantee that the set rate limit is not exceeded by a small margin. This is the tradeoff for reduced global latency.
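A sketch of constructing a multi-region ratelimiter — the URL and token placeholders are yours to fill in:

```typescript
import { MultiRegionRatelimit } from "@upstash/ratelimit";
import { Redis } from "@upstash/redis";

// One Redis client per region; the closest one answers first.
const ratelimit = new MultiRegionRatelimit({
  redis: [
    new Redis({ url: "<url-region-1>", token: "<token-region-1>" }),
    new Redis({ url: "<url-region-2>", token: "<token-region-2>" }),
  ],
  limiter: MultiRegionRatelimit.slidingWindow(10, "10 s"),
});
```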
See the `waitUntil` documentation in Cloudflare and Vercel for more details.