All platforms

Recruitee Jobs API.

A collaborative hiring platform with a public JSON API that returns complete job details, including descriptions and salary, in a single request.

Recruitee
Live
50K+ jobs indexed monthly
<3h average discovery time
1h refresh interval
Companies using Recruitee
1X Technologies AS, 2am.tech
Developer tools

Try the API.

Test Jobs, Feed, and Auto-Apply endpoints against https://connect.jobo.world with live request/response examples, then copy ready-to-use curl commands.

What's in every response.

Data fields, real-world applications, and the companies already running on Recruitee.

Data fields
  • Public JSON API
  • Full descriptions in one call
  • Structured salary data
  • Multi-language support
  • Remote/hybrid flags
  • Custom domain support
Use cases
  1. Job board aggregation
  2. Salary data extraction
  3. Multi-company monitoring
  4. Startup job tracking
Trusted by
1X Technologies AS, 2am.tech
DIY GUIDE

How to scrape Recruitee.

Step-by-step guide to extracting jobs from Recruitee-powered career pages—endpoints, authentication, and working code.

REST · Beginner · ~120 requests/minute (unofficial) · No auth

Discover the API endpoint

Recruitee uses company-specific subdomains or custom domains. The API endpoint is always at /api/offers. For standard Recruitee subdomains, the pattern is {company}.recruitee.com/api/offers.

Step 1: Discover the API endpoint
import requests

# Standard Recruitee subdomain
company_slug = "1x"
api_url = f"https://{company_slug}.recruitee.com/api/offers"

# Custom domain (e.g., careers.company.com)
custom_domain_url = "https://careers.company.com/api/offers"

response = requests.get(api_url, timeout=10)
response.raise_for_status()
data = response.json()

print(f"Found {len(data.get('offers', []))} jobs")

Parse job listings from the response

The API returns every job in a single 'offers' array. Each offer includes complete details - title, description, requirements, location, salary, and employment type - so there is no need to fetch individual job pages.

Step 2: Parse job listings from the response
for offer in data.get("offers", []):
    # Extract English translation (default to first available)
    translations = offer.get("translations", {})
    en_translation = translations.get("en", next(iter(translations.values()), {}))

    job = {
        "id": offer["id"],
        "title": offer["title"],
        "slug": offer["slug"],
        "company": offer.get("company_name"),
        "location": offer.get("location"),
        "remote": offer.get("remote", False),
        "on_site": offer.get("on_site", True),
        "employment_type": offer.get("employment_type_code"),
        "category": offer.get("category_code"),
        "description": en_translation.get("description", ""),
        "requirements": en_translation.get("requirements", ""),
        "url": offer.get("careers_url"),
        "apply_url": offer.get("careers_apply_url"),
        "created_at": offer.get("created_at"),
    }
    print(f"{job['title']} - {job['location']}")

Extract salary information

Recruitee provides structured salary data in a dedicated 'salary' field with min, max, currency, and period. Always check this field first before parsing salary from HTML descriptions.

Step 3: Extract salary information
def parse_salary(offer: dict) -> dict | None:
    """Extract structured salary from Recruitee offer."""
    salary_data = offer.get("salary")
    if not salary_data:
        return None

    return {
        "min": float(salary_data.get("min") or 0),
        "max": float(salary_data.get("max") or 0),
        "currency": salary_data.get("currency", "USD"),
        "period": salary_data.get("period", "year"),
    }

# Usage
for offer in data.get("offers", []):
    salary = parse_salary(offer)
    if salary:
        print(f"{offer['title']}: {salary['currency']} {salary['min']:,} - {salary['max']:,} per {salary['period']}")

Handle custom domains and validate endpoints

Some companies use custom domains instead of recruitee.com subdomains. You can detect Recruitee by checking for the /api/offers endpoint. Always validate responses to handle non-existent companies gracefully.

Step 4: Handle custom domains and validate endpoints
import requests
from urllib.parse import urlparse

def is_recruitee_domain(domain: str) -> bool:
    """Check if a domain is running Recruitee."""
    try:
        # Check for Recruitee subdomain
        if domain.endswith(".recruitee.com"):
            return True

        # Check for custom domain with /api/offers endpoint
        api_url = f"https://{domain}/api/offers"
        response = requests.get(api_url, timeout=10)

        # Valid Recruitee API returns JSON with 'offers' key
        if response.status_code == 200:
            data = response.json()
            return "offers" in data
    except Exception:
        pass
    return False

def get_api_url(input_url: str) -> str | None:
    """Convert any Recruitee URL (homepage, job page, or API URL) to the API endpoint."""
    # Already the API endpoint
    if input_url.rstrip("/").endswith("/api/offers"):
        return input_url

    # Homepage or job page URL (/o/{slug}): the API always lives at /api/offers
    domain = urlparse(input_url).netloc
    if not domain:
        return None
    return f"https://{domain}/api/offers"

Implement error handling and caching

Handle common error cases like 404 for non-existent companies and rate limiting. Cache responses since job boards typically update daily and the API returns all jobs in a single request.

Step 5: Implement error handling and caching
import requests
import time

# Simple in-memory cache: job boards typically update daily,
# so re-fetching at most once an hour is plenty.
_cache: dict[str, tuple[float, dict]] = {}
CACHE_TTL = 3600  # seconds

def fetch_recruitee_jobs(company_slug: str, use_cache: bool = True) -> dict:
    """Fetch all jobs from a Recruitee company with error handling and caching."""
    url = f"https://{company_slug}.recruitee.com/api/offers"

    if use_cache and company_slug in _cache:
        fetched_at, cached = _cache[company_slug]
        if time.time() - fetched_at < CACHE_TTL:
            return cached

    try:
        response = requests.get(url, timeout=15)

        if response.status_code == 404:
            raise ValueError(f"Company '{company_slug}' not found on Recruitee")

        if response.status_code == 429:
            retry_after = int(response.headers.get("Retry-After", 60))
            raise Exception(f"Rate limited. Retry after {retry_after} seconds")

        response.raise_for_status()
        data = response.json()
        _cache[company_slug] = (time.time(), data)
        return data

    except requests.Timeout:
        raise Exception(f"Request timed out for {company_slug}")
    except requests.RequestException as e:
        raise Exception(f"Failed to fetch jobs: {e}")

# Batch processing with rate limiting
def fetch_multiple_companies(slugs: list[str], delay: float = 0.5) -> dict:
    """Fetch jobs from multiple companies with rate limiting."""
    results = {}
    for slug in slugs:
        try:
            results[slug] = fetch_recruitee_jobs(slug)
            print(f"Fetched {len(results[slug].get('offers', []))} jobs from {slug}")
        except Exception as e:
            print(f"Error fetching {slug}: {e}")
            results[slug] = None
        time.sleep(delay)
    return results
Common issues
High: Company subdomain not found (404 error)

Recruitee has no public company directory. Verify the exact subdomain by checking the company's careers page URL. Some companies use custom domains instead of recruitee.com subdomains.

Medium: Custom domain not recognized as Recruitee

Check for the /api/offers endpoint availability and the /o/{slug} URL pattern in job links. These are reliable indicators of a Recruitee-powered careers page.

Medium: Missing translations for some jobs

The translations object may not always have an 'en' key. Always implement a fallback to use the first available translation if English is not present.
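That fallback can be factored into a small reusable helper; `pick_translation` is an illustrative name, not part of the Recruitee API:

```python
def pick_translation(offer: dict, preferred: str = "en") -> dict:
    """Return the preferred translation of an offer, falling back to the
    first available language when the preferred one is missing.

    Illustrative helper, not part of the Recruitee API.
    """
    translations = offer.get("translations") or {}
    if preferred in translations:
        return translations[preferred]
    # Fall back to the first language present, or {} if there are none
    return next(iter(translations.values()), {})

# Usage: a Dutch-only posting still yields usable text
offer = {"translations": {"nl": {"title": "Ontwikkelaar"}}}
print(pick_translation(offer)["title"])  # Ontwikkelaar
```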

Low: Description and requirements contain HTML

The description and requirements fields return raw HTML. Use BeautifulSoup or similar to strip tags for plain text, or sanitize before rendering in a browser.
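If you prefer to avoid the extra dependency, a minimal tag-stripper can be built on the standard library's HTMLParser; BeautifulSoup's get_text() does the same thing more robustly:

```python
from html.parser import HTMLParser

class _TextExtractor(HTMLParser):
    """Collect text nodes, ignoring tags."""
    def __init__(self):
        super().__init__()
        self.parts: list[str] = []

    def handle_data(self, data: str) -> None:
        self.parts.append(data)

def strip_html(html: str) -> str:
    """Convert a Recruitee description/requirements field to plain text."""
    extractor = _TextExtractor()
    extractor.feed(html)
    # Join text fragments and collapse runs of whitespace
    return " ".join(" ".join(extractor.parts).split())

print(strip_html("<p>Build <strong>APIs</strong> in Python.</p>"))
# Build APIs in Python.
```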

Low: Salary field is null for some jobs

Not all companies expose salary information. Always null-check the salary field and fall back to parsing from the description HTML if needed.

Medium: Rate limiting on bulk requests

Add delays between requests (500ms recommended) when fetching from multiple companies. The API has unofficial rate limits that may trigger on aggressive scraping.
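Since the limits are unofficial, a retry with exponential backoff is a reasonable defensive pattern. This is a sketch; `backoff_delay` and `get_with_backoff` are illustrative names, not part of any Recruitee SDK:

```python
import time
import requests

def backoff_delay(attempt: int, base: float = 0.5, cap: float = 30.0) -> float:
    """Exponential backoff: 0.5s, 1s, 2s, ... capped at 30s (jitter omitted for clarity)."""
    return min(cap, base * (2 ** attempt))

def get_with_backoff(url: str, max_attempts: int = 5) -> requests.Response:
    """GET with retries on 429, honoring the Retry-After header when present."""
    for attempt in range(max_attempts):
        response = requests.get(url, timeout=15)
        if response.status_code != 429:
            return response
        retry_after = response.headers.get("Retry-After")
        delay = float(retry_after) if retry_after else backoff_delay(attempt)
        time.sleep(delay)
    raise RuntimeError(f"Still rate limited after {max_attempts} attempts: {url}")
```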

Best practices
  1. Use the /api/offers endpoint - it returns all job data in one request
  2. Check the structured salary field before parsing from HTML descriptions
  3. Handle missing 'en' translations by falling back to first available language
  4. Validate custom domains by checking for /api/offers endpoint availability
  5. Add 500ms delay between requests when scraping multiple companies
  6. Cache responses - the API returns all jobs with no pagination needed
Or skip the complexity

One endpoint. All Recruitee jobs. No scraping, no sessions, no maintenance.

Get API access
cURL
curl "https://enterprise.jobo.world/api/jobs?sources=recruitee" \
  -H "X-Api-Key: YOUR_KEY"
Ready to integrate

Access Recruitee job data today.

One API call. Structured data. No scraping infrastructure to build or maintain — start with the free tier and scale as you grow.

99.9% API uptime
<200ms Avg response
50M+ Jobs processed