Rate limiting

Preventing abuse and managing availability

We apply rate limits to all API calls to prevent abuse and manage the availability of the platform.

Our default rate limit across the whole API is 100 requests per second. We may occasionally reduce the rate limit as part of incident response to protect our other systems, and we may also permanently lower it for specific APIs that handle a lot of data (where this applies, it is documented on the endpoint).

This means that your software should be prepared to receive a rate limit response and handle it appropriately, even if you don't currently anticipate making this many requests.

Rate limit response

If you exceed the limit, you'll get a special HTTP response, like this:

HTTP/1.1 429 Too Many Requests
Retry-After: Tue, 23 May 2023 14:42:01 GMT
Content-Type: application/json

{
  "status_code": 429,
  "type": "rate_limit_error",
  "code": "rate_limit_exceeded",
  "message": "Rate limit exceeded, please try again later"
}

Rate limit responses always have the HTTP status code 429. They also always include a Retry-After header, an HTTP-date giving the time at which the limit resets (usually the following second), and they may include additional debugging information in the JSON body.
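Because Retry-After is an HTTP-date rather than a number of seconds, you need to convert it into a wait duration before sleeping. A minimal sketch in Python, using the standard library's HTTP-date parser (the function name and the explicit `now` parameter are illustrative, not part of the API):

```python
from email.utils import parsedate_to_datetime

def seconds_until_reset(retry_after, now):
    """Convert a Retry-After HTTP-date (e.g. "Tue, 23 May 2023 14:42:01 GMT")
    into the number of seconds to wait, relative to the timezone-aware
    datetime `now`. Never returns a negative value."""
    reset_at = parsedate_to_datetime(retry_after)
    return max(0.0, (reset_at - now).total_seconds())
```

Clamping to zero matters: clock skew between your machine and the API can otherwise produce a negative wait.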

Handling rate limit responses

A rate-limited response means that we have not processed the request, so it can safely be retried after the limit has reset, typically in the next clock second. Most programming languages offer a sleep command or equivalent that allows waiting for the reset time to elapse.
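The sleep-and-retry approach can be sketched as follows. This is illustrative Python, not a supported client: `send_request` is a hypothetical zero-argument callable returning a response with `status_code` and `headers` attributes.

```python
import time
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def call_with_retry(send_request, max_attempts=3):
    """Call `send_request`, retrying on 429 responses once the time
    given in the Retry-After header has elapsed. Gives up (returning
    the last response) after `max_attempts` tries."""
    for attempt in range(max_attempts):
        response = send_request()
        if response.status_code != 429:
            return response
        reset_at = parsedate_to_datetime(response.headers["Retry-After"])
        wait = max(0.0, (reset_at - datetime.now(timezone.utc)).total_seconds())
        time.sleep(wait)
    return response
```

Capping the number of attempts keeps a persistently throttled client from retrying forever.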

Alternatively, you might want to put the request into a background queue to be processed in a different execution thread later.
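The background-queue approach might look like this in Python, using the standard library's thread-safe queue. The `process` callable is a placeholder for whatever re-sends the request later:

```python
import queue
import threading

# Thread-safe queue holding requests that were rate limited and
# should be retried later, off the calling thread.
retry_queue = queue.Queue()

def worker(process):
    """Drain the retry queue, calling `process` (a placeholder for
    your retry logic) on each queued request. A None item is a
    sentinel that tells the worker to stop."""
    while True:
        request = retry_queue.get()
        if request is None:
            break
        process(request)
        retry_queue.task_done()
```

The caller then just does `retry_queue.put(request)` on a 429 and carries on, while a `threading.Thread(target=worker, args=(...,))` retries in the background.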