HireHive Jobs API.

ATS designed for small to medium businesses with international reach and public JSON API.

HireHive · Live
20K+ jobs indexed monthly
<3h average discovery time
1h refresh interval
Companies using HireHive
JUNGLE (small and medium businesses)
Developer tools

Try the API.

Test Jobs, Feed, and Auto-Apply endpoints against https://connect.jobo.world with live request/response examples, then copy ready-to-use curl commands.

What's in every response.

Data fields, real-world applications, and the companies already running on HireHive.

Data fields
  • SMB-focused platform
  • International job support
  • Public JSON API
  • Full descriptions in API
  • Pagination support
  • Filtering by category/location
Use cases
  1. SMB job discovery
  2. International talent sourcing
  3. Regional job market analysis
  4. Multi-language job boards
Trusted by
JUNGLE (small and medium businesses)
DIY GUIDE

How to scrape HireHive.

Step-by-step guide to extracting jobs from HireHive-powered career pages—endpoints, authentication, and working code.

REST · Beginner · No official rate limit; use 500ms delays between requests · No auth

Identify the company subdomain

HireHive uses company-specific subdomains for job boards. The pattern is {company}.hirehive.com. Find this by checking company careers pages or searching for hirehive.com URLs.

Step 1: Identify the company subdomain
# HireHive URL patterns:
# Job Board: https://{company}.hirehive.com
# API v2: https://{company}.hirehive.com/api/v2/jobs
# API v1: https://{company}.hirehive.com/api/v1/jobs

company = "jungle"

Fetch jobs from the API

Use the /api/v2/jobs endpoint to retrieve all active jobs with full descriptions in a single request. This is much faster and more reliable than HTML scraping.

Step 2: Fetch jobs from the API
import requests

company = "jungle"
url = f"https://{company}.hirehive.com/api/v2/jobs"

response = requests.get(url, timeout=10)
response.raise_for_status()
data = response.json()

print(f"Found {data['meta']['total_items']} jobs across {data['meta']['total_pages']} pages")

Parse job details from the response

Extract the fields you need from each job item. The API returns full HTML and plain text descriptions, location data, employment type, and category information.

Step 3: Parse job details from the response
for job in data["items"]:
    # Nested objects (country, category, type, description) can be null,
    # so guard with `or {}` before chaining .get()
    print({
        "id": job["id"],
        "title": job["title"],
        "location": job.get("location"),
        "country": (job.get("country") or {}).get("name"),
        "category": (job.get("category") or {}).get("name"),
        "employment_type": (job.get("type") or {}).get("name"),
        "description_html": (job.get("description") or {}).get("html", "")[:200],
        "description_text": (job.get("description") or {}).get("text", "")[:200],
        "apply_url": job.get("hosted_url"),
        "published_date": job.get("published_date"),
    })

Handle pagination

For job boards with more than 30 jobs, use the pagination metadata to fetch all pages. The default page size is 30 items.

Step 4: Handle pagination
import requests
import time

def fetch_all_hirehive_jobs(company: str, page_size: int = 30) -> list:
    base_url = f"https://{company}.hirehive.com/api/v2/jobs"
    all_jobs = []
    page = 1

    while True:
        params = {"page": page, "page_size": page_size}
        response = requests.get(base_url, params=params, timeout=10)
        response.raise_for_status()
        data = response.json()

        all_jobs.extend(data["items"])

        if not data["meta"]["has_next_page"]:
            break

        page += 1
        time.sleep(0.5)  # Be respectful to the API

    return all_jobs

jobs = fetch_all_hirehive_jobs("jungle")
print(f"Total jobs fetched: {len(jobs)}")

Filter and search jobs

The API supports query parameters for filtering by category, location, employment type, and search queries.

Step 5: Filter and search jobs
import requests

company = "jungle"
base_url = f"https://{company}.hirehive.com/api/v2/jobs"

# Filter by category
params = {"category": "Staff"}
response = requests.get(base_url, params=params, timeout=10)
staff_jobs = response.json()["items"]

# Filter by location
params = {"location": "Madrid"}
response = requests.get(base_url, params=params, timeout=10)
madrid_jobs = response.json()["items"]

# Search by keyword
params = {"q": "legal"}
response = requests.get(base_url, params=params, timeout=10)
legal_jobs = response.json()["items"]

print(f"Staff jobs: {len(staff_jobs)}, Madrid jobs: {len(madrid_jobs)}, Legal jobs: {len(legal_jobs)}")
Common issues
High: Using HTML scraping instead of the API

HireHive exposes /api/v2/jobs, which returns complete job data, including descriptions, in a single request. Use it instead of HTML parsing, which requires multiple page fetches.

Medium: Company subdomain not found

HireHive has no public directory. Find subdomains by checking company careers pages, searching for 'hirehive.com' URLs, or looking for links in job postings on aggregators.
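Once you have a candidate subdomain, a quick probe of the v2 jobs endpoint confirms whether a board actually exists there. A minimal sketch (the `board_url` and `hirehive_board_exists` helper names are ours, not part of HireHive):

```python
import requests

def board_url(company: str) -> str:
    """Build the v2 jobs endpoint URL for a company subdomain."""
    return f"https://{company}.hirehive.com/api/v2/jobs"

def hirehive_board_exists(company: str, timeout: float = 10.0) -> bool:
    """Return True if the subdomain serves a responding HireHive job board."""
    try:
        response = requests.get(board_url(company), timeout=timeout)
    except requests.RequestException:
        # DNS failure or connection error: no board at this subdomain
        return False
    return response.status_code == 200
```

Run the probe against each candidate name before adding it to your crawl list.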

Medium: Rate limiting on large job boards

Add delays between pagination requests (500ms recommended). The API has unofficial rate limits that may trigger on rapid consecutive requests.
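Because the limits are unofficial, an exponential backoff on 429/5xx responses is safer than a fixed delay alone. A sketch with our own helper names; the retry count and status codes are assumptions to tune:

```python
import time
import requests

def backoff_delays(retries: int = 3, base: float = 0.5) -> list:
    """Delay schedule: 0.5s, 1s, 2s for the default three retries."""
    return [base * (2 ** i) for i in range(retries)]

def get_with_backoff(url: str, params: dict = None, retries: int = 3) -> requests.Response:
    """GET that retries on rate-limit or server errors, sleeping longer each time."""
    response = None
    for delay in backoff_delays(retries):
        response = requests.get(url, params=params, timeout=10)
        if response.status_code not in (429, 500, 502, 503):
            return response
        time.sleep(delay)
    return response  # last response is still an error; caller decides what to do
```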

Low: Missing salary/compensation data

Not all companies expose salary information. The compensation_tiers array may be empty. Check the salary field and handle null values gracefully.
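In practice that means treating both fields as optional. A small helper (field names taken from the note above; the function name and return shape are our choices):

```python
def extract_salary(job: dict):
    """Safely pull salary info; many boards leave these fields empty or null."""
    salary = job.get("salary")
    tiers = job.get("compensation_tiers") or []
    if not salary and not tiers:
        # No compensation data published for this job
        return None
    return {"salary": salary, "tiers": tiers}
```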

Low: Localized job content

HireHive supports multiple languages. Job titles, descriptions, and category names may be in the local language. The language field indicates the job's language (e.g., 'es-ES' for Spanish).
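If you aggregate several boards, grouping jobs by that language tag keeps multi-language content manageable. A sketch; the default bucket for jobs without a language field is our choice:

```python
def group_by_language(jobs: list, default: str = "en") -> dict:
    """Bucket jobs by their language tag (e.g. 'es-ES').

    Jobs with no language field fall into the default bucket.
    """
    groups = {}
    for job in jobs:
        lang = job.get("language") or default
        groups.setdefault(lang, []).append(job)
    return groups
```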

Low: API v1 vs v2 differences

Prefer /api/v2/jobs for its standardized pagination structure and metadata. The v1 API uses a different response format with camelCase fields and lacks pagination metadata.
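If you must consume a v1 payload anyway, normalizing its camelCase keys to v2-style snake_case lets the same downstream code handle both. A sketch; the exact v1 field names vary by response, so treat this as a generic converter:

```python
import re

def snake_case_keys(obj):
    """Recursively convert camelCase dict keys to snake_case.

    Useful for reshaping a v1 response so it resembles v2 naming.
    """
    if isinstance(obj, dict):
        return {
            re.sub(r"(?<!^)(?=[A-Z])", "_", k).lower(): snake_case_keys(v)
            for k, v in obj.items()
        }
    if isinstance(obj, list):
        return [snake_case_keys(v) for v in obj]
    return obj
```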

Best practices
  1. Use the /api/v2/jobs endpoint for complete job data in a single request
  2. Always check meta.has_next_page for pagination handling
  3. Use the description.text field for plain text or description.html for formatted content
  4. Add 500ms delays between pagination requests to avoid rate limiting
  5. Cache results: job boards typically update daily or weekly
  6. Handle missing salary data gracefully; not all companies provide compensation info
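The caching practice can be as simple as a timestamped file on disk. A minimal sketch; `cached_fetch` and the one-day `max_age` default are our assumptions:

```python
import json
import time
from pathlib import Path

def cached_fetch(company: str, fetcher, cache_dir: str = ".cache",
                 max_age: float = 86400) -> list:
    """Return cached jobs if fresh, otherwise call `fetcher` and cache the result.

    `fetcher` is any callable taking the company slug, e.g. fetch_all_hirehive_jobs.
    """
    path = Path(cache_dir) / f"{company}.json"
    if path.exists() and time.time() - path.stat().st_mtime < max_age:
        # Cache hit: avoid re-crawling a board that updates at most daily
        return json.loads(path.read_text())
    jobs = fetcher(company)
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(jobs))
    return jobs
```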
Or skip the complexity

One endpoint. All HireHive jobs. No scraping, no sessions, no maintenance.

Get API access
cURL
curl "https://enterprise.jobo.world/api/jobs?sources=hirehive" \
  -H "X-Api-Key: YOUR_KEY"
Ready to integrate

Access HireHive job data today.

One API call. Structured data. No scraping infrastructure to build or maintain — start with the free tier and scale as you grow.

99.9% API uptime
<200ms avg response
50M+ jobs processed