
HiringThing Jobs API.

Applicant tracking system for small to mid-sized businesses with RSS feed support and embedded JSON data.

HiringThing
Live
15K+ jobs indexed monthly
<3h average discovery time
1h refresh interval
Companies using HiringThing
Demco, Inc.
Developer tools

Try the API.

Test Jobs, Feed, and Auto-Apply endpoints against https://connect.jobo.world with live request/response examples, then copy ready-to-use curl commands.

What's in every response.

Data fields, real-world applications, and the companies already running on HiringThing.

Data fields
  • RSS feed for all jobs
  • Embedded JSON data
  • Full HTML descriptions
  • Salary data in React props
  • Sitemap-based discovery
Use cases
  1. SMB job aggregation
  2. Company career page monitoring
  3. Location-based job filtering
Trusted by
Demco, Inc.
DIY GUIDE

How to scrape HiringThing.

Step-by-step guide to extracting jobs from HiringThing-powered career pages—endpoints, authentication, and working code.

Method: Hybrid · Difficulty: Beginner · Rate limits: No documented limits (use standard delays) · Auth: None required

Fetch the RSS feed

HiringThing provides an RSS feed that contains all job listings with full HTML descriptions. This is the most efficient way to retrieve all jobs in a single request.

Step 1: Fetch the RSS feed
import requests
import xml.etree.ElementTree as ET

company = "demco"
rss_url = f"https://{company}.hiringthing.com/api/rss.xml"

response = requests.get(rss_url)
response.raise_for_status()

root = ET.fromstring(response.content)
channel = root.find("channel")
items = channel.findall("item")

print(f"Found {len(items)} jobs in RSS feed")

Parse job entries from RSS

Extract job data from each RSS item. The feed includes title, URL, location, and full HTML description in the media:description tag.

Step 2: Parse job entries from RSS
import xml.etree.ElementTree as ET

def parse_rss_jobs(rss_content):
    root = ET.fromstring(rss_content)
    channel = root.find("channel")
    jobs = []

    for item in channel.findall("item"):
        title_elem = item.find("title")
        link_elem = item.find("link")
        location_elem = item.find("location")
        description_elem = item.find(".//{http://search.yahoo.com/mrss/}description")

        jobs.append({
            "title": title_elem.text if title_elem is not None else None,
            "url": link_elem.text if link_elem is not None else None,
            "location": location_elem.text if location_elem is not None else None,
            "description_html": description_elem.text if description_elem is not None else None,
        })

    return jobs

jobs = parse_rss_jobs(response.content)
for job in jobs[:3]:
    print(f"{job['title']} - {job['location']}")

Extract embedded JSON from job detail pages

For additional metadata like salary and posted date, fetch individual job pages and parse the data-react-props attribute from the ApplyButtonGroup component.

Step 3: Extract embedded JSON from job detail pages
import requests
import json
import re
from bs4 import BeautifulSoup

def get_job_details(company: str, job_url: str) -> dict:
    full_url = f"https://{company}.hiringthing.com{job_url}" if job_url.startswith("/") else job_url
    response = requests.get(full_url)
    soup = BeautifulSoup(response.text, "html.parser")

    # Find the ApplyButtonGroup component with embedded JSON
    apply_div = soup.find("div", {"data-react-class": "HiringThing.Components.ApplyButtonGroup"})
    if not apply_div:
        return {}

    props_json = apply_div.get("data-react-props", "{}")
    data = json.loads(props_json)
    job_data = data.get("jobObj", {}).get("table", {})

    return {
        "id": job_data.get("id"),
        "title": job_data.get("title"),
        "location": job_data.get("location"),
        "location_info": job_data.get("location_info"),
        "description_html": job_data.get("html_description"),
        "posted_at": job_data.get("posted_at"),
        "company_id": job_data.get("company_id"),
    }

details = get_job_details("demco", "/job/984299/learning-environment-field-consultant")
print(details)

Parse salary data from listings page

Salary information is embedded in the JobSalary React component. Extract it from the data-react-props attribute on the listings page.

Step 4: Parse salary data from listings page
import requests
import json
from bs4 import BeautifulSoup

def extract_salary_data(html_content: str) -> list:
    soup = BeautifulSoup(html_content, "html.parser")
    salaries = []

    salary_divs = soup.find_all("div", {"data-react-class": "HiringThing.Components.JobSalary"})

    for div in salary_divs:
        props = div.get("data-react-props", "{}")
        data = json.loads(props)

        salaries.append({
            "min": data.get("minSalary", {}).get("amount"),
            "max": data.get("maxSalary", {}).get("amount"),
            "currency": data.get("minSalary", {}).get("currency"),
            "frequency": data.get("payFrequency"),
        })

    return salaries

# Fetch listings page
response = requests.get("https://demco.hiringthing.com/")
salaries = extract_salary_data(response.text)

for s in salaries[:3]:
    print(f"${s['min']} - ${s['max']} {s['frequency']}")

Handle errors and rate limiting

Implement proper error handling and rate limiting. While no strict limits are documented, add delays between requests for bulk scraping.

Step 5: Handle errors and rate limiting
import requests
import time
import xml.etree.ElementTree as ET

def scrape_hiringthing_company(company: str, delay: float = 0.5) -> dict:
    rss_url = f"https://{company}.hiringthing.com/api/rss.xml"

    try:
        response = requests.get(rss_url, timeout=30)

        if response.status_code == 404:
            return {"error": f"Company '{company}' not found", "jobs": []}

        response.raise_for_status()

        root = ET.fromstring(response.content)
        channel = root.find("channel")
        items = channel.findall("item")

        jobs = []
        for item in items:
            jobs.append({
                "title": item.find("title").text,
                "url": item.find("link").text,
                "location": getattr(item.find("location"), "text", None),
            })

        return {"company": company, "jobs": jobs, "count": len(jobs)}

    except requests.RequestException as e:
        return {"error": str(e), "jobs": []}

# Scrape multiple companies with rate limiting
companies = ["demco", "example1", "example2"]
delay = 0.5  # seconds to wait between companies

for company in companies:
    result = scrape_hiringthing_company(company)
    print(f"{company}: {result.get('count', 0)} jobs")
    time.sleep(delay)
Common issues
Medium: Company uses a custom domain instead of hiringthing.com

Some companies use branded job board URLs like jobs.company.com. Check the company's main website for careers page links, or look for HiringThing branding/URLs in the page source.
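A minimal sketch of that check, assuming only that the careers page HTML references a *.hiringthing.com URL somewhere (the careers URL below is a placeholder):

import re
import requests

def find_hiringthing_board(careers_url: str):
    """Return the first *.hiringthing.com URL found in a careers page, if any."""
    html = requests.get(careers_url, timeout=30).text
    match = re.search(r"https://[a-z0-9-]+\.hiringthing\.com", html)
    return match.group(0) if match else None

# Placeholder URL -- point this at the company's actual careers page.
print(find_hiringthing_board("https://www.example.com/careers"))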

Medium: RSS feed returns 404 for some companies

Not all companies have RSS feeds enabled. Fall back to HTML scraping by fetching the main listings page and parsing div.job-container elements.
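A sketch of that fallback. The div.job-container selector comes from the listings page; the assumption that each container holds an anchor with the job title and URL should be verified against the live markup:

import requests
from bs4 import BeautifulSoup

def scrape_jobs_from_html(company: str) -> list:
    """Fallback for boards without an RSS feed: parse the public listings page."""
    response = requests.get(f"https://{company}.hiringthing.com/", timeout=30)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")

    jobs = []
    for container in soup.find_all("div", class_="job-container"):
        link = container.find("a")
        jobs.append({
            "title": link.get_text(strip=True) if link else container.get_text(strip=True),
            "url": link.get("href") if link else None,
        })
    return jobs

print(len(scrape_jobs_from_html("demco")), "jobs found via HTML fallback")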

Low: Embedded JSON has Unicode escapes

The data-react-props attribute uses Unicode escaping (e.g., \u003c for <). Python's json.loads() handles this automatically, but be aware when debugging raw strings.
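A quick illustration (the raw string keeps the escapes literal, the way they appear in the HTML attribute):

import json

# Escapes as they appear inside the data-react-props attribute value.
raw_props = r'{"html_description": "\u003cp\u003eApply now\u003c/p\u003e"}'

data = json.loads(raw_props)
print(data["html_description"])  # <p>Apply now</p>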

High: Authenticated API endpoint returns 401

The /api/v1/jobs endpoint requires authentication and is for admin users only. Use the public RSS feed or HTML scraping instead.

Low: Missing salary information in RSS feed

The RSS feed does not include salary data. Fetch the HTML listings page and parse the data-react-props from JobSalary components for compensation details.

Low: Job descriptions contain malformed HTML

The html_description field may contain unclosed tags or invalid HTML. Use a robust HTML parser like BeautifulSoup with the 'html.parser' option to handle malformed markup.
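For example, converting a possibly malformed html_description to plain text with a tolerant parser:

from bs4 import BeautifulSoup

def description_to_text(html_description: str) -> str:
    """Parse possibly malformed HTML leniently and return readable plain text."""
    soup = BeautifulSoup(html_description or "", "html.parser")
    return soup.get_text(separator="\n", strip=True)

print(description_to_text("<p>Unclosed paragraph<ul><li>Responsibilities<li>Benefits"))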

Best practices
  1. Use the RSS feed for fastest job retrieval with full descriptions
  2. Parse data-react-props attributes for structured salary and metadata
  3. Add 0.5-1 second delays between requests for bulk scraping
  4. Cache results - job boards typically update daily
  5. Fall back to HTML scraping if RSS feed is unavailable
  6. Use the global sitemap for discovering all companies on the platform (see the sketch below)
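A sketch of sitemap-based discovery. The sitemap location and structure below are assumptions (a single urlset listing company job-board URLs); verify against the live platform, since the sitemap may instead be an index that points to per-section files:

import requests
import xml.etree.ElementTree as ET

# Assumed sitemap location -- confirm before relying on it.
SITEMAP_URL = "https://www.hiringthing.com/sitemap.xml"
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def discover_companies(sitemap_url: str = SITEMAP_URL) -> set:
    """Collect company subdomains from *.hiringthing.com URLs in the sitemap."""
    response = requests.get(sitemap_url, timeout=30)
    response.raise_for_status()
    root = ET.fromstring(response.content)

    companies = set()
    for loc in root.findall(".//sm:loc", NS):
        url = loc.text or ""
        if ".hiringthing.com" in url:
            host = url.split("//", 1)[-1].split("/", 1)[0]
            companies.add(host.removesuffix(".hiringthing.com"))
    return companies

print(sorted(discover_companies())[:10])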
Or skip the complexity

One endpoint. All HiringThing jobs. No scraping, no sessions, no maintenance.

Get API access
cURL
curl "https://enterprise.jobo.world/api/jobs?sources=hiringthing" \
  -H "X-Api-Key: YOUR_KEY"
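The same request from Python (equivalent to the curl command above; substitute your real key):

import requests

response = requests.get(
    "https://enterprise.jobo.world/api/jobs",
    params={"sources": "hiringthing"},
    headers={"X-Api-Key": "YOUR_KEY"},
    timeout=30,
)
response.raise_for_status()
print(response.json())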
Ready to integrate

Access HiringThing job data today.

One API call. Structured data. No scraping infrastructure to build or maintain — start with the free tier and scale as you grow.

99.9% API uptime
<200ms Avg response
50M+ Jobs processed