All platforms

JazzHR Jobs API.

Recruiting software designed for small to mid-sized businesses with HTML-based job boards.

JazzHR
Live
112K+ jobs indexed monthly
<3h average discovery time
1h refresh interval
Companies using JazzHR
Agil3 Technology Solutions · MALIN+GOETZ · 4,600+ companies on platform
Developer tools

Try the API.

Test Jobs, Feed, and Auto-Apply endpoints against https://connect.jobo.world with live request/response examples, then copy ready-to-use curl commands.

What's in every response.

Data fields, real-world applications, and the companies already running on JazzHR.

Data fields
  • Small business focus
  • Global XML sitemap feeds
  • Server-side rendered HTML
  • No pagination required
  • Compensation data available
Use cases
  1. SMB job market monitoring
  2. Company discovery via sitemaps
  3. Small business talent sourcing
  4. Industry-specific job aggregation
Trusted by
Agil3 Technology Solutions · MALIN+GOETZ · 4,600+ companies on platform
DIY GUIDE

How to scrape JazzHR.

Step-by-step guide to extracting jobs from JazzHR-powered career pages—endpoints, authentication, and working code.

HTML · intermediate · No official limits (use 1-2 second delays) · No auth

Discover companies using global XML sitemaps

JazzHR provides public XML sitemap feeds that aggregate all job postings across all companies on the platform. These feeds contain roughly 112,000 job URLs from more than 4,600 companies, making them ideal for company discovery.

Step 1: Discover companies using global XML sitemaps
import requests
import xml.etree.ElementTree as ET

def discover_jazzhr_companies():
    companies = set()

    # Parse all 5 sitemap feeds (indices 0-4)
    for i in range(5):
        url = f"https://app.jazz.co/feeds/google/xml/{i}"
        response = requests.get(url, timeout=30)
        root = ET.fromstring(response.content)

        # Extract company slugs from URLs
        for url_elem in root.findall('.//{*}loc'):
            job_url = url_elem.text
            # URL format: https://{company}.applytojob.com/apply/{JOB_ID}/...
            if 'applytojob.com' in job_url:
                slug = job_url.split('//')[1].split('.')[0]
                companies.add(slug)

    return list(companies)

companies = discover_jazzhr_companies()
print(f"Found {len(companies)} companies")

Fetch the company job listings page

Make a GET request to the company's job board at applytojob.com. The listings page contains all jobs on a single page with no pagination, and the HTML is server-side rendered.

Step 2: Fetch the company job listings page
import requests
from bs4 import BeautifulSoup

company_slug = "agil3tech"
url = f"https://{company_slug}.applytojob.com/apply"

# Always use HTTPS (some links may use HTTP which can timeout)
response = requests.get(url, timeout=15, allow_redirects=False)

# Check if company is valid (invalid companies redirect away)
if response.status_code in (301, 302):
    redirect_location = response.headers.get('Location', '')
    if 'info.jazzhr.com' in redirect_location or 'jazz.co' in redirect_location:
        print(f"Company '{company_slug}' does not exist")
        raise SystemExit(1)

soup = BeautifulSoup(response.text, 'html.parser')
print(f"Successfully fetched job board for {company_slug}")

Parse job listings from HTML

Extract job data from the HTML using CSS selectors. JazzHR uses consistent class names like 'list-group-item' for job containers and 'list-group-item-heading' for titles.

Step 3: Parse job listings from HTML
from bs4 import BeautifulSoup
import re

def parse_job_listings(soup):
    jobs = []

    # Primary selector: list-group structure
    job_items = soup.select('li.list-group-item')

    for item in job_items:
        # Extract job URL and title
        title_elem = item.select_one('h4.list-group-item-heading a')
        if not title_elem:
            title_elem = item.select_one('a[href*="/apply/"]')

        if title_elem:
            job_url = title_elem.get('href', '')
            title = title_elem.get_text(strip=True)

            # Extract location and department
            info_items = item.select('ul.list-inline.list-group-item-text li')
            location = info_items[0].get_text(strip=True) if len(info_items) > 0 else None
            department = info_items[1].get_text(strip=True) if len(info_items) > 1 else None

            # Extract job ID from URL
            job_id_match = re.search(r'/apply/([A-Za-z0-9]{8,})', job_url)
            job_id = job_id_match.group(1) if job_id_match else None

            jobs.append({
                'id': job_id,
                'title': title,
                'url': job_url,
                'location': location,
                'department': department,
            })

    return jobs

jobs = parse_job_listings(soup)
print(f"Found {len(jobs)} jobs")

Fetch individual job details

For complete job information including full description and compensation data, fetch each job's detail page. The detail page contains all job attributes and the application form.

Step 4: Fetch individual job details
import requests
from bs4 import BeautifulSoup
import re

def fetch_job_details(job_url):
    # Normalize HTTP to HTTPS
    if job_url.startswith('http://'):
        job_url = job_url.replace('http://', 'https://')

    response = requests.get(job_url, timeout=15)
    soup = BeautifulSoup(response.text, 'html.parser')

    details = {}

    # Title
    title_elem = soup.select_one('div.job-header h1, h1')
    details['title'] = title_elem.get_text(strip=True) if title_elem else None

    # Location
    location_elem = soup.select_one("div.job-attributes-container div[title='Location']")
    details['location'] = location_elem.get_text(strip=True) if location_elem else None

    # Description
    desc_elem = soup.select_one('#job-description, .job-details .description')
    details['description'] = desc_elem.get_text(strip=True) if desc_elem else None

    # Compensation (dedicated field)
    comp_elem = soup.select_one("div.job-attributes-container div[title='Compensation'], "
                                 "div.job-attributes-container div[title='Salary'], "
                                 "#resumator-job-salary")
    details['compensation'] = comp_elem.get_text(strip=True) if comp_elem else None

    # If no dedicated compensation field, search in description
    if not details['compensation'] and details['description']:
        salary_match = re.search(r'\$[\d,]+(?:\s*-\s*\$?[\d,]+)?(?:\s*(?:per|/)\s*\w+)?',
                                  details['description'])
        if salary_match:
            details['compensation'] = salary_match.group()

    return details

# Fetch details for first job
if jobs:
    job_details = fetch_job_details(jobs[0]['url'])
    print(job_details)

Handle rate limiting and implement delays

JazzHR doesn't document official rate limits but may block aggressive scraping. Add delays between requests and implement proper error handling to avoid being blocked.

Step 5: Handle rate limiting and implement delays
import requests
import time
from bs4 import BeautifulSoup

def scrape_jazzhr_company(company_slug, delay_seconds=1):
    base_url = f"https://{company_slug}.applytojob.com/apply"

    try:
        # Fetch listings
        response = requests.get(base_url, timeout=15, allow_redirects=False)

        # Validate company exists
        if response.status_code in (301, 302):
            return {'error': 'Company not found', 'jobs': []}

        if response.status_code != 200:
            return {'error': f'HTTP {response.status_code}', 'jobs': []}

        soup = BeautifulSoup(response.text, 'html.parser')
        jobs = parse_job_listings(soup)

        # Fetch details for each job with delay
        for i, job in enumerate(jobs):
            try:
                details = fetch_job_details(job['url'])
                job.update(details)
                print(f"Processed {i+1}/{len(jobs)}: {job['title']}")
            except requests.RequestException as e:
                job['error'] = str(e)

            # Rate limiting delay
            if i < len(jobs) - 1:
                time.sleep(delay_seconds)

        return {'company': company_slug, 'jobs': jobs}

    except requests.RequestException as e:
        return {'error': str(e), 'jobs': []}

# Scrape a company
result = scrape_jazzhr_company("agil3tech", delay_seconds=1.5)
print(f"Total jobs scraped: {len(result['jobs'])}")
Common issues
High: Company subdomain redirects to info.jazzhr.com

Invalid or inactive company subdomains redirect away from applytojob.com. Use allow_redirects=False and check for 301/302 redirects to info.jazzhr.com to detect invalid companies.

Medium: HTTP URLs cause connection timeouts

Some job links use HTTP instead of HTTPS which can hang or timeout. Always normalize URLs by converting http:// to https:// for applytojob.com domains.
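A URL-parsing approach is safer than a blind string replace, since it only rewrites the scheme and only for applytojob.com hosts. The sketch below uses the standard library; `normalize_job_url` is a helper name introduced here for illustration.

```python
from urllib.parse import urlsplit, urlunsplit

def normalize_job_url(url):
    """Rewrite http:// to https:// for applytojob.com job links only."""
    parts = urlsplit(url)
    if parts.scheme == "http" and parts.hostname and parts.hostname.endswith(".applytojob.com"):
        parts = parts._replace(scheme="https")
    return urlunsplit(parts)

print(normalize_job_url("http://acme.applytojob.com/apply/AbCdEfGhIjK"))
# https://acme.applytojob.com/apply/AbCdEfGhIjK
print(normalize_job_url("http://example.com/careers"))  # other hosts untouched
```

Because only the scheme component is replaced, query strings and paths that happen to contain the text "http://" are never touched.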

Medium: Different HTML templates across companies

JazzHR allows companies to customize their job board appearance. Implement multiple fallback selectors and always check for the standard list-group structure first.
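One way to keep the fallback chain tidy is a small helper that tries selectors in priority order. `first_match` is a hypothetical helper, and the customized-board HTML below is an invented example, not real JazzHR markup:

```python
from bs4 import BeautifulSoup

def first_match(soup, selectors):
    """Return the first element matched by any CSS selector, in priority order."""
    for sel in selectors:
        elem = soup.select_one(sel)
        if elem is not None:
            return elem
    return None

# A customized board missing the standard list-group markup:
html = '<div class="custom-board"><a href="/apply/AbCdEfGhIjK">Engineer</a></div>'
soup = BeautifulSoup(html, "html.parser")

title_elem = first_match(soup, [
    "h4.list-group-item-heading a",  # standard JazzHR template first
    'a[href*="/apply/"]',            # generic fallback
])
print(title_elem.get_text(strip=True))  # Engineer
```

Ordering the standard selector first keeps the common case fast and makes the fallbacks easy to extend as you encounter new templates.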

Low: Job tokens are opaque and cannot be predicted

Job IDs are 11-character alphanumeric strings (or longer legacy hex strings). You must scrape the listings page first to get job URLs; you cannot construct detail URLs without them.
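A single pattern can extract both token shapes from scraped URLs. The sample URLs and tokens below are made up for illustration; the `{8,}` lower bound is a loose assumption that accepts both the 11-character and longer legacy forms:

```python
import re

# Matches the token segment after /apply/ in a job URL.
JOB_ID_RE = re.compile(r'/apply/([A-Za-z0-9]{8,})(?:/|$)')

urls = [
    "https://agil3tech.applytojob.com/apply/AbCdEfGhIjK/senior-engineer",  # current style
    "https://example.applytojob.com/apply/0f3a9b2c1d4e5f6a7b8c/analyst",   # legacy hex style
]
for u in urls:
    m = JOB_ID_RE.search(u)
    print(m.group(1) if m else None)
# AbCdEfGhIjK
# 0f3a9b2c1d4e5f6a7b8c
```

Since the tokens cannot be guessed, this regex is only useful on URLs you have already scraped from listings pages or sitemap feeds.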

Low: Compensation data not always in a dedicated field

Some jobs include salary in the description text instead of a dedicated field. If the compensation selector returns nothing, use regex to search for salary patterns in the description.

Medium: Rate limiting or blocking from aggressive scraping

Add delays of 1-2 seconds between requests. Use rotating user agents and consider residential proxies for large-scale scraping operations.

Best practices
  1. Use global XML sitemaps for company discovery (indices 0-4)
  2. Always normalize HTTP URLs to HTTPS for applytojob.com
  3. Check for redirects to info.jazzhr.com to detect invalid companies
  4. Add 1-2 second delays between requests to avoid rate limiting
  5. Implement fallback selectors for different company templates
  6. Search description text for compensation if dedicated field is empty
Or skip the complexity

One endpoint. All JazzHR jobs. No scraping, no sessions, no maintenance.

Get API access
cURL
curl "https://enterprise.jobo.world/api/jobs?sources=jazzhr" \
  -H "X-Api-Key: YOUR_KEY"
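The same call from Python, using only the endpoint and header shown in the curl example; `fetch_jazzhr_jobs` is a wrapper name introduced here, and the response shape is whatever the API returns as JSON:

```python
import requests

def fetch_jazzhr_jobs(api_key):
    """GET the aggregated JazzHR jobs feed from the endpoint above."""
    response = requests.get(
        "https://enterprise.jobo.world/api/jobs",
        params={"sources": "jazzhr"},
        headers={"X-Api-Key": api_key},
        timeout=15,
    )
    response.raise_for_status()
    return response.json()

# jobs = fetch_jazzhr_jobs("YOUR_KEY")
```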
Ready to integrate

Access JazzHR job data today.

One API call. Structured data. No scraping infrastructure to build or maintain — start with the free tier and scale as you grow.

99.9% API uptime
<200ms Avg response
50M+ Jobs processed