Rate Limit Tiers

Rate limits are enforced across three time windows — per minute, per hour, and per day — to prevent abuse while giving legitimate integrations plenty of headroom.

General API Endpoints

| Tier | Lifetime Cap | Per Minute | Per Hour | Per Day |
|------|--------------|------------|----------|---------|
| Free | 10 total requests | 5 | 10 | 10 |
| Paid | Unlimited | 60 | 1,000 | 10,000 |

Feed Endpoints

Feed endpoints (POST /api/jobs/feed and GET /api/jobs/expired) have separate, higher limits since they are designed for bulk data sync:
| Tier | Per Minute | Per Hour | Per Day |
|------|------------|----------|---------|
| Free | N/A | N/A | N/A |
| Paid | 120 | 5,000 | 50,000 |
Feed endpoints are subscription-only. Free-tier keys cannot access them. Feed rate limits are tracked independently from general API limits — consuming your search quota does not affect feed throughput.

How Rate Limits Work

  • Rate limits are applied per API key — each key has its own independent counters
  • Limits are checked before credits are deducted, so you are never charged for a rate-limited request
  • If you exceed a rate limit, you’ll receive HTTP 429 Too Many Requests
  • The most restrictive window applies: if you hit the per-minute cap, you’re blocked even if your hourly and daily budgets have room
  • Limits reset on a rolling window basis, not at fixed clock boundaries
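The rolling-window behavior can be modeled client-side with a small in-memory limiter. This is a sketch of the concept only, not the server's actual implementation:

```python
import time
from collections import deque

class RollingWindowLimiter:
    """Client-side model of a rolling rate-limit window."""

    def __init__(self, max_requests, window_seconds):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self.timestamps = deque()  # Times of requests still inside the window

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of the window.
        while self.timestamps and now - self.timestamps[0] >= self.window_seconds:
            self.timestamps.popleft()
        if len(self.timestamps) < self.max_requests:
            self.timestamps.append(now)
            return True
        return False
```

Unlike fixed clock boundaries, a request made at 10:00:30 only stops counting against the per-minute window at 10:01:30 — capacity frees up continuously rather than all at once.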

Rate Limit Response

{
  "status": 429,
  "error": "Too Many Requests",
  "message": "Rate limit exceeded. Try again in 60 seconds.",
  "retry_after": 60
}

Response Headers

Every API response includes rate limit headers so you can proactively manage your request pacing:
| Header | Description |
|--------|-------------|
| X-RateLimit-Limit | Maximum requests allowed in the current window |
| X-RateLimit-Remaining | Requests remaining in the current window |
| X-RateLimit-Reset | Unix timestamp (seconds) when the current window resets |
| Retry-After | Seconds to wait before retrying (only present on 429 responses) |
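A small helper can pull these values out of a response's header mapping. The header names come from the table above; the parsing itself is just a sketch:

```python
def parse_rate_limit_headers(headers):
    """Extract rate-limit state from a response header mapping."""
    info = {
        "limit": int(headers.get("X-RateLimit-Limit", 0)),
        "remaining": int(headers.get("X-RateLimit-Remaining", 0)),
        "reset": int(headers.get("X-RateLimit-Reset", 0)),
    }
    # Retry-After is only present on 429 responses.
    retry_after = headers.get("Retry-After")
    info["retry_after"] = int(retry_after) if retry_after is not None else None
    return info
```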

Example Headers

HTTP/1.1 200 OK
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 847
X-RateLimit-Reset: 1709510400
Content-Type: application/json

HTTP/1.1 429 Too Many Requests
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1709510400
Retry-After: 42
Content-Type: application/json

Retry Strategies

1. Read the Retry-After header

When you receive a 429, always use the Retry-After header value. This tells you the exact number of seconds to wait. Never hardcode retry delays.
2. Implement exponential backoff with jitter

For retries, use exponential backoff (e.g., 1s → 2s → 4s → 8s) with random jitter (±25%) to avoid thundering-herd problems when multiple clients retry simultaneously.
3. Set a max retry count

Cap retries at 3–5 attempts. If you’re still hitting 429 after multiple retries, your request rate is fundamentally too high — slow down the overall pipeline rather than hammering the API.
4. Track remaining quota proactively

Read X-RateLimit-Remaining on every response. When it drops below 10–20% of X-RateLimit-Limit, proactively throttle your request rate instead of waiting for a 429.
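One way to implement this step is to spread the remaining requests evenly across the time left in the window once quota runs low. The 15% threshold here is an illustrative choice, not a documented recommendation:

```python
def throttle_delay(remaining, limit, reset_unix, now_unix, threshold=0.15):
    """Return seconds to pause before the next request.

    When remaining quota drops below `threshold` of the limit, pace
    requests so the rest of the budget lasts until the window resets.
    """
    if limit <= 0 or remaining >= limit * threshold:
        return 0.0  # Plenty of headroom -- no throttling needed
    seconds_left = max(reset_unix - now_unix, 0)
    if remaining <= 0:
        return float(seconds_left)  # Out of quota: wait for the reset
    return seconds_left / remaining
```

Feed `throttle_delay` the values from `X-RateLimit-Remaining`, `X-RateLimit-Limit`, and `X-RateLimit-Reset`, and sleep for the returned duration before the next call.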
5. Cache responses aggressively

Cache search results and geocode lookups locally. Geocode results in particular are highly cacheable since location strings map deterministically to structured output.
6. Use the Feed API for bulk data

If you need to sync large volumes of jobs, use POST /api/jobs/feed instead of paginating through search results. The Feed endpoint is optimized for high throughput with separate, generous rate limits.
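A sketch of calling the feed endpoint with `requests`. The base URL and Bearer auth scheme are assumptions for illustration — substitute whatever your integration actually uses:

```python
import requests

API_BASE = "https://api.example.com"  # Placeholder; substitute the real host

def build_feed_request(api_key, payload=None):
    """Prepare a bulk-sync POST to the feed endpoint (auth scheme assumed)."""
    return requests.Request(
        "POST",
        f"{API_BASE}/api/jobs/feed",
        headers={"Authorization": f"Bearer {api_key}"},
        json=payload or {},
    ).prepare()

def fetch_feed(api_key, payload=None):
    with requests.Session() as session:
        response = session.send(build_feed_request(api_key, payload))
        response.raise_for_status()  # Surfaces 429 so callers can back off
        return response.json()
```

Separating request construction from sending makes the call easy to inspect and test without touching the network.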

Example: Python Retry with Backoff

import time
import random
import requests

def api_request_with_retry(url, headers, max_retries=4):
    for attempt in range(max_retries + 1):
        response = requests.get(url, headers=headers)

        if response.status_code != 429:
            return response

        if attempt == max_retries:
            break  # Out of retries -- don't sleep before giving up

        # Start from the server-provided Retry-After, apply +/-25% jitter,
        # then back off exponentially across attempts.
        retry_after = int(response.headers.get("Retry-After", 60))
        jitter = retry_after * random.uniform(0.75, 1.25)
        backoff = min(jitter * (2 ** attempt), 300)  # Cap at 5 minutes

        print(f"Rate limited. Retrying in {backoff:.1f}s (attempt {attempt + 1} of {max_retries})")
        time.sleep(backoff)

    raise Exception("Max retries exceeded")