All platforms

ApplicantPro Jobs API.

An applicant tracking system designed for small and medium businesses, now branded as isolved Talent Acquisition, with a public JSON API.

ApplicantPro
Live
30K+ jobs indexed monthly
<3h average discovery time
1h refresh interval
Companies using ApplicantPro
Harvard Bioscience · isolved customers · Small and medium businesses
Developer tools

Try the API.

Test Jobs, Feed, and Auto-Apply endpoints against https://connect.jobo.world with live request/response examples, then copy ready-to-use curl commands.

What's in every response.

Data fields, real-world applications, and the companies already running on ApplicantPro.

Data fields
  • SMB focus
  • Easy setup
  • Clean job data
  • Company branding
  • Application management
  • Public JSON API
  • No authentication required
  • Comprehensive salary data
  • HTML and plain text descriptions
Use cases
  1. SMB job aggregation
  2. Small business recruiting
  3. Mid-market talent acquisition
  4. Multi-company job discovery
Trusted by
Harvard Bioscience · isolved customers · Small and medium businesses
DIY GUIDE

How to scrape ApplicantPro.

Step-by-step guide to extracting jobs from ApplicantPro-powered career pages—endpoints, authentication, and working code.

REST · Intermediate · No official limits documented; recommend 1-2 requests/second · No auth

Extract the site ID from the careers page

ApplicantPro requires a site ID (domain_id) embedded in the page HTML. Fetch the careers page and extract this ID using regex before making API calls.

Step 1: Extract the site ID from the careers page
import requests
import re
from urllib.parse import urlparse

def get_site_id(subdomain: str) -> str | None:
    url = f"https://{subdomain}.applicantpro.com/jobs/"
    response = requests.get(url, timeout=10)
    response.raise_for_status()

    # Extract domain_id from embedded JavaScript
    match = re.search(r'"domain_id"\s*:\s*"(\d+)"', response.text)
    if match:
        return match.group(1)
    return None

site_id = get_site_id("harvardbioscience")
print(f"Site ID: {site_id}")  # Output: Site ID: 11099

Fetch all job listings

Use the listings API endpoint with the site ID to retrieve all active jobs. The getParams query parameter must be a URL-encoded JSON object with display options.

Step 2: Fetch all job listings
import requests
import json
from urllib.parse import quote

subdomain = "harvardbioscience"
site_id = "11099"

get_params = {
    "isInternal": 0,
    "showLocation": 1,
    "showEmploymentType": 1,
    "chatToApplyButton": "0"
}

# URL-encode the JSON params
encoded_params = quote(json.dumps(get_params))
listings_url = f"https://{subdomain}.applicantpro.com/core/jobs/{site_id}?getParams={encoded_params}"

headers = {
    "Accept": "application/json",
    "Referer": f"https://{subdomain}.applicantpro.com/jobs/"
}
response = requests.get(listings_url, headers=headers, timeout=10)
data = response.json()

jobs = data.get("data", {}).get("jobs", [])
job_count = data.get("data", {}).get("jobCount", 0)
print(f"Found {len(jobs)} jobs (API reports {job_count} total)")

Parse job metadata from listings

Extract job fields from the listings response. The API returns comprehensive metadata including salary, location, department, and dates, but NOT job descriptions.

Step 3: Parse job metadata from listings
for job in jobs:
    print({
        "id": job.get("id"),
        "title": job.get("title"),
        "location": job.get("jobLocation"),
        "city": job.get("city"),
        "state": job.get("stateName"),
        "country": job.get("iso3"),
        "department": job.get("orgTitle"),
        "classification": job.get("classification"),
        "employment_type": job.get("employmentType"),
        "workplace_type": job.get("workplaceType"),
        "pay_type": job.get("payType"),
        "pay_details": job.get("payDetails"),
        "min_salary": job.get("minSalary"),
        "max_salary": job.get("maxSalary"),
        "job_url": job.get("jobUrl"),
        "posted_date": job.get("startDateRef"),
        "expiry_date": job.get("endDateRef"),
    })

Fetch job details for descriptions

Make a separate API call for each job to get the full description. The details endpoint returns both HTML and plain text descriptions, plus benefits information not available in listings.

Step 4: Fetch job details for descriptions
import time

def get_job_details(subdomain: str, site_id: str, job_id: int) -> dict:
    url = f"https://{subdomain}.applicantpro.com/core/jobs/{site_id}/{job_id}/job-details"
    headers = {
        "Accept": "application/json",
        "Referer": f"https://{subdomain}.applicantpro.com/jobs/{job_id}"
    }
    response = requests.get(url, headers=headers, timeout=10)
    response.raise_for_status()
    return response.json().get("data", {})

# Fetch details for first job with rate limiting
if jobs:
    details = get_job_details(subdomain, site_id, jobs[0]["id"])
    print({
        "id": details.get("id"),
        "title": details.get("title"),
        "city": details.get("city"),
        "description_html": details.get("advertisingDescriptionHtml", "")[:200],
        "description_plain": details.get("advertisingDescription", "")[:200],
        "benefits": details.get("benefits"),
        "zip_code": details.get("jobBoardZip"),
        "pay_details": details.get("payDetails"),
    })
    time.sleep(0.5)  # Be respectful with rate limiting

Handle edge cases and errors

Handle missing site IDs, empty job lists, and various date formats returned by the API. Dates can appear as 'Jan 23, 2026' or '23-Jan-2026'.

Step 5: Handle edge cases and errors
def safe_extract(subdomain: str) -> list[dict]:
    try:
        site_id = get_site_id(subdomain)
        if not site_id:
            print(f"Could not find site ID for {subdomain}")
            return []

        get_params = {"isInternal": 0, "showLocation": 1}
        encoded_params = quote(json.dumps(get_params))
        url = f"https://{subdomain}.applicantpro.com/core/jobs/{site_id}?getParams={encoded_params}"
        response = requests.get(url, headers={"Accept": "application/json"}, timeout=10)
        response.raise_for_status()

        data = response.json()
        if not data.get("success"):
            print(f"API returned error for {subdomain}")
            return []

        return data.get("data", {}).get("jobs", [])
    except requests.RequestException as e:
        print(f"Request failed for {subdomain}: {e}")
        return []

jobs = safe_extract("harvardbioscience")

Discover companies via sitemap

Use the global sitemap index to discover all ApplicantPro-powered companies. This is useful for building a comprehensive job database across multiple organizations.

Step 6: Discover companies via sitemap
import requests
import xml.etree.ElementTree as ET

# Global sitemap index lists all ApplicantPro companies
sitemap_index_url = "https://feeds.applicantpro.com/site_map_index.xml"
response = requests.get(sitemap_index_url, timeout=10)
root = ET.fromstring(response.content)

# Extract company sitemap URLs
namespaces = {"ns": "http://www.sitemaps.org/schemas/sitemap/0.9"}
company_sitemaps = []
for sitemap in root.findall("ns:sitemap", namespaces):
    loc = sitemap.find("ns:loc", namespaces)
    if loc is not None:
        company_sitemaps.append(loc.text)

print(f"Found {len(company_sitemaps)} company sitemaps")

# Parse individual company sitemap for job URLs
def parse_company_sitemap(sitemap_url: str) -> list[str]:
    response = requests.get(sitemap_url, timeout=10)
    root = ET.fromstring(response.content)
    job_urls = []
    for url in root.findall("ns:url", namespaces):
        loc = url.find("ns:loc", namespaces)
        if loc is not None and "/jobs/" in loc.text:
            job_urls.append(loc.text)
    return job_urls

# Example: Get jobs from first company sitemap
if company_sitemaps:
    jobs = parse_company_sitemap(company_sitemaps[0])
    print(f"Found {len(jobs)} job URLs in first sitemap")
Common issues
Critical: Site ID (domain_id) not found in page HTML

The page structure may have changed. Try alternative regex patterns or look for the domain_id in script tags within the courierCurrentRouteData object. Some companies use custom domains that redirect to ApplicantPro.

High: Job descriptions missing from listings response

The listings API does not include descriptions. You must make a separate API call to the job-details endpoint for each job to get the full description and benefits.

Medium: Custom domain redirects not handled

Some companies use custom domains that redirect to ApplicantPro. Follow redirects and extract the actual subdomain from the final URL before extracting the site ID.

Low: Empty jobs array returned

Some companies may have no active postings. Check the jobCount field in the response and handle empty arrays gracefully in your code.

Low: Inconsistent date formats between endpoints

Dates appear in different formats across endpoints (e.g., 'Jan 23, 2026' vs '23-Jan-2026'). Use flexible date parsing libraries like dateutil to handle both formats.

Medium: Rate limiting or temporary blocks

While there are no official limits documented, unthrottled requests may trigger temporary blocks. Add delays of 0.5-1 second between detail requests for large batches.
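A sketch of such a throttled fetcher, assuming a fixed pause between calls plus exponential backoff on 429/5xx responses (the delay values are guesses, since no limits are documented):

```python
import time

def polite_get(session, url, *, delay=0.75, retries=3, **kwargs):
    """GET via a session with a fixed pause between successful calls
    and exponential backoff when the server pushes back."""
    response = None
    for attempt in range(retries):
        response = session.get(url, timeout=10, **kwargs)
        if response.status_code not in (429, 500, 502, 503):
            time.sleep(delay)             # pause before the next request
            return response
        time.sleep(delay * 2 ** attempt)  # back off before retrying
    return response  # caller can inspect or raise on the last response
```

Pass a `requests.Session()` so connections are reused across the batch, e.g. `polite_get(requests.Session(), detail_url, headers=headers)`.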

Low: Salary values are empty or malformed

Salary fields (minSalary, maxSalary) may be empty strings. Always check for truthy values before parsing. Pay details are often in payDetails as text rather than structured data.

Best practices
  1. Cache the site ID per subdomain to avoid repeated page fetches
  2. Use advertisingDescriptionHtml over plain text for better formatting
  3. Add 0.5-1 second delays between detail requests to avoid rate limiting
  4. Handle both date formats returned by different endpoints (e.g., 'Jan 23, 2026' vs '23-Jan-2026')
  5. Use the jobUrl field from the API response when available instead of constructing URLs
  6. Fetch benefits information from the details endpoint; it is not available in listings
  7. Include Referer headers matching the job board URL for better compatibility
  8. Use the global sitemap index at feeds.applicantpro.com for company discovery
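The first practice above can be sketched with a memoizing wrapper. `make_cached` is illustrative; in practice you would wrap `get_site_id` from Step 1 so each subdomain's careers page is fetched at most once per process:

```python
from functools import lru_cache

def make_cached(fetch):
    """Wrap a site-ID fetcher so each subdomain is fetched only once."""
    @lru_cache(maxsize=None)
    def cached(subdomain: str):
        return fetch(subdomain)
    return cached

# Usage (assuming get_site_id from Step 1 is in scope):
# cached_site_id = make_cached(get_site_id)
# cached_site_id("harvardbioscience")  # fetches the page
# cached_site_id("harvardbioscience")  # served from cache
```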
Or skip the complexity

One endpoint. All ApplicantPro jobs. No scraping, no sessions, no maintenance.

Get API access
cURL
curl "https://enterprise.jobo.world/api/jobs?sources=applicantpro" \
  -H "X-Api-Key: YOUR_KEY"
Ready to integrate

Access ApplicantPro
job data today.

One API call. Structured data. No scraping infrastructure to build or maintain — start with the free tier and scale as you grow.

99.9% API uptime
<200ms Avg response
50M+ Jobs processed