
SmartRecruiters Jobs API.

Enterprise recruiting platform used by global companies for talent acquisition.

SmartRecruiters
Live
200K+ jobs indexed monthly
<3h average discovery time
1h refresh interval
Companies using SmartRecruiters
Visa · Bosch · McDonald's · ServiceNow · Etsy
Developer tools

Try the API.

Test Jobs, Feed, and Auto-Apply endpoints against https://connect.jobo.world with live request/response examples, then copy ready-to-use curl commands.

What's in every response.

Data fields, real-world applications, and the companies already running on SmartRecruiters.

Data fields
  • Global job coverage
  • Multi-language support
  • Enterprise companies
  • Structured data
  • Location details
  • Hybrid work info
Use cases
  1. Global job aggregation
  2. Multi-national tracking
  3. Enterprise recruiting
Trusted by
Visa · Bosch · McDonald's · ServiceNow · Etsy · CERN · OECD
DIY GUIDE

How to scrape SmartRecruiters.

Step-by-step guide to extracting jobs from SmartRecruiters-powered career pages—endpoints, authentication, and working code.

REST · Intermediate · 10 requests per second recommended (unofficial) · No auth

Extract the company identifier

First, find the company identifier. It appears in career page URLs such as jobs.smartrecruiters.com/{companyIdentifier} or careers.smartrecruiters.com/{companyIdentifier}.

Step 1: Extract the company identifier
import re

def extract_company_id(url: str) -> str:
    """Extract company identifier from SmartRecruiters URL."""
    patterns = [
        r'jobs\.smartrecruiters\.com/([^/?]+)',
        r'careers\.smartrecruiters\.com/([^/?]+)',
        r'api\.smartrecruiters\.com/v1/companies/([^/?]+)',
    ]

    for pattern in patterns:
        match = re.search(pattern, url)
        if match:
            return match.group(1)

    raise ValueError(f'Could not extract company ID from {url}')

# Example usage
url = "https://jobs.smartrecruiters.com/Visa"
company_id = extract_company_id(url)
print(f"Company ID: {company_id}")  # Output: Visa

Fetch job listings with pagination

Use the public API to fetch job listings. The API supports up to 100 results per request with offset-based pagination. Use totalFound to determine when to stop paginating.

Step 2: Fetch job listings with pagination
import requests
import time

def fetch_all_jobs(company_id: str, page_size: int = 100) -> list[dict]:
    """Fetch all job listings for a company with pagination."""
    all_jobs = []
    offset = 0
    base_url = f"https://api.smartrecruiters.com/v1/companies/{company_id}/postings"

    while True:
        params = {"limit": page_size, "offset": offset}
        response = requests.get(base_url, params=params, timeout=30)
        response.raise_for_status()

        data = response.json()
        jobs = data.get("content", [])
        all_jobs.extend(jobs)

        total_found = data.get("totalFound", 0)
        print(f"Fetched {len(jobs)} jobs (total: {total_found}, offset: {offset})")

        # Check if we've fetched all jobs
        if len(all_jobs) >= total_found or len(jobs) < page_size:
            break

        offset += page_size
        time.sleep(0.1)  # Be respectful with rate limiting

    return all_jobs

# Example usage
jobs = fetch_all_jobs("AFCA")
print(f"Total jobs found: {len(jobs)}")

Fetch full job details

The listings endpoint returns basic job info without full descriptions. Fetch complete job details including the jobAd sections by calling the details endpoint for each job ID.

Step 3: Fetch full job details
import requests

def fetch_job_details(company_id: str, job_id: str) -> dict:
    """Fetch full details for a specific job posting."""
    url = f"https://api.smartrecruiters.com/v1/companies/{company_id}/postings/{job_id}"

    response = requests.get(url, timeout=30)
    response.raise_for_status()

    job = response.json()

    return {
        "id": job.get("id"),
        "title": job.get("name"),
        "posting_url": job.get("postingUrl"),
        "apply_url": job.get("applyUrl"),
        "description": job.get("jobAd", {}).get("sections", {}).get("jobDescription", {}).get("text", ""),
        "qualifications": job.get("jobAd", {}).get("sections", {}).get("qualifications", {}).get("text", ""),
        "company_description": job.get("jobAd", {}).get("sections", {}).get("companyDescription", {}).get("text", ""),
        "location": job.get("location", {}).get("fullLocation"),
        "city": job.get("location", {}).get("city"),
        "country": job.get("location", {}).get("country"),
        "remote": job.get("location", {}).get("remote", False),
        "hybrid": job.get("location", {}).get("hybrid", False),
        "department": job.get("department", {}).get("label"),
        "employment_type": job.get("typeOfEmployment", {}).get("label"),
        "experience_level": job.get("experienceLevel", {}).get("label"),
        "compensation": job.get("compensation"),
        "posted_date": job.get("releasedDate"),
        "custom_fields": job.get("customField", []),
    }

# Example usage
job = fetch_job_details("AFCA", "744000107175576")
print(f"Job: {job['title']}")
print(f"Location: {job['location']}")

Validate company and check for jobs

Before scraping, confirm that a company identifier resolves to a company with active job postings. The API returns totalFound: 0 for invalid or empty company pages.

Step 4: Validate company and check for jobs
import requests

def validate_company(company_id: str) -> dict:
    """Validate a company identifier and check for active jobs."""
    url = f"https://api.smartrecruiters.com/v1/companies/{company_id}/postings"
    params = {"limit": 1, "offset": 0}

    try:
        response = requests.get(url, params=params, timeout=30)
        response.raise_for_status()
        data = response.json()

        total_found = data.get("totalFound", 0)

        if total_found == 0:
            return {"valid": False, "reason": "No jobs found or invalid company"}

        # Get company name from first posting
        company_name = None
        if data.get("content"):
            company_name = data["content"][0].get("company", {}).get("name")

        return {
            "valid": True,
            "total_jobs": total_found,
            "company_name": company_name,
        }
    except requests.RequestException as e:
        return {"valid": False, "reason": str(e)}

# Example usage
result = validate_company("Visa")
print(result)  # e.g. {'valid': True, 'total_jobs': 150, 'company_name': 'Visa'}

result = validate_company("InvalidCompany123")
print(result)  # {'valid': False, 'reason': 'No jobs found or invalid company'}

Handle rate limiting and errors

Implement proper error handling and rate limiting to ensure reliable scraping. The API provides ETag headers for caching and may return 404 for invalid job IDs.

Step 5: Handle rate limiting and errors
import requests
import time
from typing import Optional

def fetch_with_retry(
    url: str,
    params: Optional[dict] = None,
    max_retries: int = 3,
    delay: float = 0.2
) -> Optional[dict]:
    """Fetch URL with retry logic and rate limiting."""
    for attempt in range(max_retries):
        try:
            time.sleep(delay)  # Rate limiting
            response = requests.get(url, params=params, timeout=30)

            if response.status_code == 404:
                print(f"Resource not found: {url}")
                return None

            if response.status_code == 429:
                # Rate limited - wait longer
                wait_time = 2 ** attempt
                print(f"Rate limited, waiting {wait_time}s...")
                time.sleep(wait_time)
                continue

            response.raise_for_status()
            return response.json()

        except requests.RequestException as e:
            print(f"Attempt {attempt + 1} failed: {e}")
            if attempt == max_retries - 1:
                raise

    return None

# Example usage with caching via ETag
cached_etags = {}
cached_responses = {}

def fetch_with_cache(url: str) -> dict:
    """Fetch with ETag-based caching."""
    headers = {}
    if url in cached_etags:
        headers["If-None-Match"] = cached_etags[url]

    response = requests.get(url, headers=headers, timeout=30)

    if response.status_code == 304:
        # Server confirms nothing changed - reuse the stored body
        print("Using cached response")
        return cached_responses[url]

    response.raise_for_status()
    data = response.json()
    if "ETag" in response.headers:
        cached_etags[url] = response.headers["ETag"]
        cached_responses[url] = data
    return data
Common issues
High: Rate limit exceeded (429 errors)

Implement exponential backoff and reduce request frequency. Start with 100-200ms delays between requests and increase if you receive 429 errors. The API has unofficial rate limits around 10 requests per second.

Medium: Cannot find company identifier

Check the career page URL structure. If using a custom domain, inspect network requests in browser DevTools to find API calls containing the company identifier. Some companies use different identifiers than their brand name (e.g., 'TheNielsenCompany' instead of 'Nielsen').

High: Truncated or missing job descriptions

The listings endpoint only returns summary data without descriptions. Always fetch individual job details using the /postings/{jobId} endpoint to get the full jobAd.sections content including jobDescription, qualifications, and companyDescription.

Low: Missing location or salary data

These fields are optional in SmartRecruiters. Check the customField array for additional location details or compensation information. Not all companies expose salary information in the compensation field.

Low: API returns empty content for valid company

Some companies may have no active postings (totalFound: 0). This returns 200 OK with empty content array. Validate the company has jobs before proceeding with scraping.

Medium: Job ID format varies

Job IDs are typically numeric strings and should be treated as strings, not integers. Some integrations return them as numbers; always convert to string to avoid precision loss when large IDs pass through consumers that parse numbers as floating-point values.
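The job-ID caveat can be handled with a defensive normalization helper. A minimal sketch, assuming IDs may arrive as either strings or integers:

```python
def normalize_job_id(raw) -> str:
    """Coerce a job ID to a string.

    Large numeric IDs can exceed the 53-bit safe-integer range of JSON
    consumers that parse numbers as doubles (e.g. JavaScript), so treat
    IDs as strings end to end. A float here means precision may already
    have been lost upstream, so fail loudly instead of guessing.
    """
    if isinstance(raw, float):
        raise ValueError(f"job ID arrived as float: {raw!r}")
    return str(raw)

print(normalize_job_id(744000107175576))  # prints 744000107175576
```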

Best practices
  1. Use limit=100 for optimal pagination performance
  2. Fetch job details separately - listings don't include descriptions
  3. Cache results using ETag headers for efficient re-scraping
  4. Validate company identifiers before full scraping runs
  5. Check the customField array for additional metadata like salary ranges
  6. Handle hybrid work info from location.hybrid and hybridDescription fields
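The last practice can be sketched as a small classifier. The location.remote and location.hybrid flags come from the Step 3 field list; the hybridDescription lookup locations are assumptions - check a live payload to confirm where that field actually sits:

```python
def work_arrangement(job: dict) -> str:
    """Classify a posting as remote, hybrid, or on-site.

    location.remote and location.hybrid match the fields used in Step 3;
    the two hybridDescription lookups are assumed placements and should
    be verified against a real response.
    """
    loc = job.get("location", {})
    if loc.get("remote"):
        return "remote"
    if loc.get("hybrid"):
        desc = job.get("hybridDescription") or loc.get("hybridDescription") or ""
        return f"hybrid ({desc})" if desc else "hybrid"
    return "on-site"

print(work_arrangement({"location": {"hybrid": True}}))  # prints hybrid
```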
Or skip the complexity

One endpoint. All SmartRecruiters jobs. No scraping, no sessions, no maintenance.

Get API access
cURL
curl "https://enterprise.jobo.world/api/jobs?sources=smartrecruiters" \
  -H "X-Api-Key: YOUR_KEY"
Ready to integrate

Access SmartRecruiters job data today.

One API call. Structured data. No scraping infrastructure to build or maintain — start with the free tier and scale as you grow.

99.9% API uptime
<200ms avg response
50M+ jobs processed