
ADP Jobs API.

Enterprise HR and payroll platform with two distinct job board systems: MyJobs (full details API) and Workforce Now (listings + details APIs).

ADP
Live
100K+ jobs indexed monthly
<3h average discovery time
1h refresh interval
Companies using ADP
Guitar Center · Rick Case Auto Group · Rocket Companies
Developer tools

Try the API.

Test Jobs, Feed, and Auto-Apply endpoints against https://connect.jobo.world with live request/response examples, then copy ready-to-use curl commands.

What's in every response.

Data fields, real-world applications, and the companies already running on ADP.

Data fields
  • Enterprise coverage
  • HCM integration
  • Global companies
  • Full job descriptions
  • Multi-location support
  • Two platform variants
  • OData-style queries
  • Geo-coordinates
Use cases
  1. Enterprise job aggregation
  2. Large company monitoring
  3. Retail chain job tracking
  4. HR system integration
  5. Multi-location hiring analysis
Trusted by
Guitar Center · Rick Case Auto Group · Rocket Companies
DIY GUIDE

How to scrape ADP.

Step-by-step guide to extracting jobs from ADP-powered career pages—endpoints, authentication, and working code.

REST · intermediate · ~100 requests/minute with 0.5-1s delays between requests · Auth required

Identify the ADP platform variant

ADP operates two distinct job board platforms with different APIs. MyJobs (myjobs.adp.com) returns full details in one call, while Workforce Now (workforcenow.adp.com) requires separate detail requests. Identify the platform first to choose the correct approach.

Step 1: Identify the ADP platform variant
import re
from urllib.parse import urlparse

def identify_adp_platform(url: str) -> str:
    """Identify which ADP platform a URL belongs to."""
    if 'workforcenow.adp.com' in url:
        return 'workforce-now'
    elif 'myjobs.adp.com' in url:
        return 'myjobs'
    return 'unknown'

# Test the function
url = "https://myjobs.adp.com/guitarcenterexternal/cx"
print(f"Platform: {identify_adp_platform(url)}")  # Output: myjobs

Extract company identifier from URL

Each platform uses different identifiers. MyJobs uses a company slug in the URL path (e.g., 'guitarcenterexternal'), while Workforce Now uses a UUID 'cid' query parameter.

Step 2: Extract company identifier from URL
from urllib.parse import urlparse, parse_qs

def extract_company_id(url: str, platform: str) -> str:
    """Extract company identifier based on platform type."""
    parsed = urlparse(url)

    if platform == 'myjobs':
        # Extract from path: /guitarcenterexternal/cx
        path_parts = parsed.path.strip('/').split('/')
        return path_parts[0] if path_parts else None
    elif platform == 'workforce-now':
        # Extract cid from query parameters (UUID format)
        params = parse_qs(parsed.query)
        return params.get('cid', [None])[0]
    return None

# MyJobs example
url1 = "https://myjobs.adp.com/guitarcenterexternal/cx"
print(f"MyJobs company: {extract_company_id(url1, 'myjobs')}")

# Workforce Now example
url2 = "https://workforcenow.adp.com/mascsr/default/mdf/recruitment/recruitment.html?cid=1edfe7b1-c2a4-4b67-921f-8b32dfaed4bb"
print(f"Workforce Now cid: {extract_company_id(url2, 'workforce-now')}")

Fetch MyJobs career site configuration

For MyJobs, fetch the career site configuration to obtain the authentication token (myJobsToken). This token is company-specific and required for all job listings API calls.

Step 3: Fetch MyJobs career site configuration
import requests

def get_myjobs_config(company_id: str) -> dict:
    """Fetch career site configuration and authentication token."""
    url = f"https://myjobs.adp.com/public/staffing/v1/career-site/{company_id}"

    response = requests.get(url, timeout=30)

    if response.status_code == 404:
        raise ValueError(f"Company '{company_id}' not found")

    response.raise_for_status()
    config = response.json()

    return {
        "domain": config.get("domain"),
        "client_name": config.get("clientName"),
        "orgoid": config.get("orgoid"),
        "myjobs_token": config.get("myJobsToken"),
    }

# Example usage
config = get_myjobs_config("guitarcenterexternal")
print(f"Company: {config['client_name']}")
print(f"Token preview: {config['myjobs_token'][:50]}...")

Fetch MyJobs listings with full details

MyJobs returns complete job descriptions in a single API call. Use OData-style pagination with $top and $skip parameters. The response includes jobDescription and jobQualifications HTML fields.

Step 4: Fetch MyJobs listings with full details
import requests
import time

def fetch_myjobs_listings(company_id: str, token: str, page_size: int = 100) -> list:
    """Fetch all job listings with full details from MyJobs API."""
    url = "https://my.adp.com/myadp_prefix/mycareer/public/staffing/v1/job-requisitions/apply-custom-filters"

    headers = {
        "accept": "application/json, text/plain, */*",
        "origin": "https://myjobs.adp.com",
        "referer": "https://myjobs.adp.com/",
        "myjobstoken": token,
    }

    all_jobs = []
    skip = 0

    while True:
        params = {
            "$orderby": "postingDate desc",
            "$select": "reqId,jobTitle,publishedJobTitle,type,jobDescription,jobQualifications,workLevelCode,clientRequisitionID,postingDate,requisitionLocations",
            "$top": page_size,
            "$skip": skip,
            "tz": "America/New_York",
        }

        response = requests.get(url, headers=headers, params=params, timeout=30)
        response.raise_for_status()
        data = response.json()

        jobs = data.get("jobRequisitions", [])
        if not jobs:
            break

        all_jobs.extend(jobs)
        print(f"Fetched {len(all_jobs)} of {data.get('count', 'unknown')} jobs")

        if len(jobs) < page_size:
            break

        skip += page_size
        time.sleep(0.5)  # Rate limiting

    return all_jobs

# Example usage
jobs = fetch_myjobs_listings("guitarcenterexternal", config["myjobs_token"])
print(f"Retrieved {len(jobs)} total jobs")

Fetch Workforce Now listings and details

Workforce Now requires two API calls: first fetch listings (without descriptions), then fetch details for each job. Extract the ExternalJobID from customFieldGroup.stringFields for detail requests.

Step 5: Fetch Workforce Now listings and details
import requests
import time

def fetch_workforcenow_jobs(cid: str) -> list:
    """Fetch jobs from Workforce Now (listings + details per job)."""
    base_url = "https://workforcenow.adp.com/mascsr/default/careercenter/public/events/staffing/v1"

    headers = {
        "accept": "application/json",
        "content-type": "application/json",
        "locale": "en_US",
    }

    # Step 1: Fetch listings
    listings_url = f"{base_url}/job-requisitions"
    params = {"cid": cid, "lang": "en_US", "locale": "en_US", "$top": 1000}

    response = requests.get(listings_url, headers=headers, params=params, timeout=30)
    response.raise_for_status()
    data = response.json()

    jobs = []
    for req in data.get("jobRequisitions", []):
        # Extract ExternalJobID from custom fields
        external_id = None
        for field in req.get("customFieldGroup", {}).get("stringFields", []):
            if field.get("nameCode", {}).get("codeValue") == "ExternalJobID":
                external_id = field.get("stringValue")
                break

        if not external_id:
            continue

        # Step 2: Fetch job details
        detail_url = f"{base_url}/job-requisitions/{external_id}"
        detail_resp = requests.get(detail_url, headers=headers, params={"cid": cid, "lang": "en_US", "locale": "en_US"}, timeout=30)

        if detail_resp.ok:
            detail = detail_resp.json()
            jobs.append({
                "id": external_id,
                "title": detail.get("requisitionTitle"),
                "description_html": detail.get("requisitionDescription"),
                "employment_type": detail.get("workLevelCode", {}).get("shortName"),
                "posting_date": detail.get("postDate"),
                "url": f"https://workforcenow.adp.com/mascsr/default/mdf/recruitment/recruitment.html?cid={cid}&selectedMenuKey=CurrentOpenings&jobId={external_id}",
            })
            time.sleep(0.3)  # Rate limiting

    return jobs

# Example usage
jobs = fetch_workforcenow_jobs("1edfe7b1-c2a4-4b67-921f-8b32dfaed4bb")
print(f"Retrieved {len(jobs)} jobs with full details")

Parse and structure job data

Extract key fields from the API responses. Handle multi-location jobs, HTML descriptions, and missing fields gracefully. MyJobs provides richer location data with geo-coordinates.

Step 6: Parse and structure job data
def parse_myjobs_job(job: dict, company_id: str) -> dict:
    """Parse a MyJobs listing into structured format."""
    locations = job.get("requisitionLocations", [])
    primary_loc = next((l for l in locations if l.get("primaryIndicator")), locations[0] if locations else {})

    address = primary_loc.get("address", {})

    return {
        "id": job.get("reqId"),
        "title": job.get("publishedJobTitle") or job.get("jobTitle"),
        "description_html": job.get("jobDescription"),
        "qualifications_html": job.get("jobQualifications"),
        "employment_type": job.get("workLevelCode"),
        "posting_date": job.get("postingDate"),
        "location": {
            "city": address.get("cityName"),
            "state": address.get("countrySubdivisionLevel1", {}).get("codeValue"),
            "postal_code": address.get("postalCode"),
            "address": address.get("lineOne"),
            "coordinates": address.get("geoCoordinate"),
        },
        "url": f"https://myjobs.adp.com/{company_id}/cx/job/{job.get('reqId')}",
        "easy_apply": job.get("easyApplyEnabled", False),
    }

# Parse and display sample MyJobs listings (output of Step 4)
for job in jobs[:3]:
    parsed = parse_myjobs_job(job, "guitarcenterexternal")
    print(f"- {parsed['title']} ({parsed['employment_type']})")
    if parsed['location'].get('city'):
        print(f"  Location: {parsed['location']['city']}, {parsed['location']['state']}")
    print(f"  URL: {parsed['url']}")
Common issues
Critical: Multiple platform variants with incompatible APIs

Always identify the platform type (Workforce Now vs MyJobs) by checking the URL domain first. MyJobs returns full details in listings; Workforce Now requires separate detail requests per job.

High: Missing authentication token for MyJobs API

MyJobs requires fetching myJobsToken from the career-site config endpoint before making listings requests. Call /public/staffing/v1/career-site/{companyId} first and cache the token.
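A minimal sketch of that caching pattern, reusing the get_myjobs_config helper from Step 3 (lru_cache is just one way to memoize; this assumes the token stays valid for the life of the session):

from functools import lru_cache

@lru_cache(maxsize=None)
def get_cached_token(company_id: str) -> str:
    """Fetch the myJobsToken once per company and reuse it for every listings call."""
    return get_myjobs_config(company_id)["myjobs_token"]

# First call hits the career-site config endpoint; later calls return the cached value
token = get_cached_token("guitarcenterexternal")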

High: Company identifier not found (404 error)

Verify the company identifier from the actual careers page URL. For MyJobs, the identifier is the first path segment before /cx. For Workforce Now, extract the cid UUID from query parameters.

Medium: Token expiration during long scraping sessions

The myJobsToken may expire during extended sessions. If you receive 401 errors, re-fetch the career site configuration to obtain a fresh token and resume scraping.
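One way to handle this is a small wrapper that retries once with a freshly fetched token; a sketch, assuming the get_myjobs_config helper from Step 3 is in scope (the wrapper name is illustrative):

import requests

def get_with_token_refresh(url: str, company_id: str, token: str, params: dict):
    """GET a MyJobs endpoint, re-fetching the token and retrying once on a 401."""
    headers = {
        "accept": "application/json, text/plain, */*",
        "origin": "https://myjobs.adp.com",
        "referer": "https://myjobs.adp.com/",
        "myjobstoken": token,
    }
    response = requests.get(url, headers=headers, params=params, timeout=30)

    if response.status_code == 401:
        # Token likely expired: re-fetch the career-site config and retry once
        token = get_myjobs_config(company_id)["myjobs_token"]
        headers["myjobstoken"] = token
        response = requests.get(url, headers=headers, params=params, timeout=30)

    response.raise_for_status()
    return response.json(), token  # return the (possibly refreshed) token for reuse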

Medium: Rate limiting and 429 errors

ADP has rate limiting protections. Add 0.5-1 second delays between requests. Implement exponential backoff for 429 responses: wait 2^attempt seconds before retrying.
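A sketch of that backoff pattern (the helper name and retry count are illustrative):

import time
import requests

def get_with_backoff(url: str, max_retries: int = 5, **kwargs):
    """GET with exponential backoff on 429 responses: wait 2^attempt seconds."""
    for attempt in range(max_retries):
        response = requests.get(url, timeout=30, **kwargs)
        if response.status_code != 429:
            return response
        wait = 2 ** attempt
        print(f"Rate limited, retrying in {wait}s (attempt {attempt + 1}/{max_retries})")
        time.sleep(wait)
    # All retries exhausted: surface the last 429 as an exception
    response.raise_for_status()
    return response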

Low: Empty requisitionLocations array

Some jobs have empty location arrays. Check if locations exist before accessing. As a fallback, extract location information from the jobDescription HTML using regex or BeautifulSoup.
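A best-effort sketch of that fallback; the "City, ST" regex is a rough heuristic and description formats vary by company, so treat any match as low-confidence data:

import re
from typing import Optional
from bs4 import BeautifulSoup  # pip install beautifulsoup4

def location_from_description(description_html: str) -> Optional[str]:
    """Fallback: look for a 'City, ST' pattern in the job description text."""
    if not description_html:
        return None
    text = BeautifulSoup(description_html, "html.parser").get_text(" ", strip=True)
    match = re.search(r"\b([A-Z][A-Za-z .'-]+,\s*[A-Z]{2})\b", text)
    return match.group(1) if match else None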

Medium: Workforce Now missing ExternalJobID field

Some Workforce Now jobs may lack the ExternalJobID in customFieldGroup.stringFields. Skip these jobs or use itemID as a fallback identifier for detail requests.
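A hedged variant of the extraction logic from Step 5; whether the detail endpoint accepts itemID can vary by tenant, so verify the fallback before relying on it:

def extract_requisition_id(req: dict) -> str:
    """Return ExternalJobID if present, otherwise fall back to itemID."""
    for field in req.get("customFieldGroup", {}).get("stringFields", []):
        if field.get("nameCode", {}).get("codeValue") == "ExternalJobID":
            return field.get("stringValue")
    # Assumption: itemID works as an identifier for detail requests on some tenants
    return req.get("itemID")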

Best practices
  1. Identify the platform variant (MyJobs vs Workforce Now) before implementing - they require different approaches
  2. Cache the myJobsToken from career-site config - it's stable and reusable across multiple requests
  3. Use OData pagination ($top and $skip) for MyJobs to handle large job boards efficiently
  4. Implement 0.5-1 second delays between requests to avoid rate limiting
  5. For Workforce Now, parallelize detail requests with appropriate throttling to improve performance (see the sketch after this list)
  6. Handle empty location arrays by extracting location from job description HTML as a fallback
  7. Re-fetch authentication tokens if you receive 401 errors during long scraping sessions
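A sketch of the parallelization mentioned in item 5, using a small ThreadPoolExecutor over the Workforce Now detail endpoint from Step 5; the worker count and per-worker delay are illustrative and should stay conservative to avoid 429s:

import time
import requests
from concurrent.futures import ThreadPoolExecutor, as_completed

def fetch_wfn_details_parallel(cid: str, external_ids: list, max_workers: int = 4) -> list:
    """Fetch Workforce Now job details concurrently with a small, throttled worker pool."""
    base_url = "https://workforcenow.adp.com/mascsr/default/careercenter/public/events/staffing/v1"
    headers = {"accept": "application/json", "locale": "en_US"}
    params = {"cid": cid, "lang": "en_US", "locale": "en_US"}

    def fetch_one(job_id: str) -> dict:
        resp = requests.get(f"{base_url}/job-requisitions/{job_id}",
                            headers=headers, params=params, timeout=30)
        resp.raise_for_status()
        time.sleep(0.3)  # keep per-worker pacing polite
        return resp.json()

    details = []
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(fetch_one, job_id): job_id for job_id in external_ids}
        for future in as_completed(futures):
            try:
                details.append(future.result())
            except requests.HTTPError as exc:
                print(f"Failed to fetch {futures[future]}: {exc}")
    return details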
Or skip the complexity

One endpoint. All ADP jobs. No scraping, no sessions, no maintenance.

Get API access
cURL
curl "https://enterprise.jobo.world/api/jobs?sources=adp" \
  -H "X-Api-Key: YOUR_KEY"
Ready to integrate

Access ADP job data today.

One API call. Structured data. No scraping infrastructure to build or maintain — start with the free tier and scale as you grow.

99.9% API uptime
<200ms avg response
50M+ jobs processed