Homerun Jobs API.
Modern ATS platform with design-focused career pages on .homerun.co subdomains. There is no public API; job discovery is sitemap-based.
Try the API.
Test Jobs, Feed, and Auto-Apply endpoints against https://connect.jobo.world with live request/response examples, then copy ready-to-use curl commands.
What's in every response.
Data fields, real-world applications, and the companies already running on Homerun.
- Design-focused career pages
- Sitemap-based discovery
- Atom feed support
- Multi-language jobs (nl, en)
- Server-side rendering
- No authentication required
- Lastmod timestamps for incremental updates
- European job board aggregation
- Dutch company job monitoring
- Design-focused startup tracking
- Multi-language job extraction
How to scrape Homerun.
Step-by-step guide to extracting jobs from Homerun-powered career pages: endpoints, authentication, and working code.
import requests
import xml.etree.ElementTree as ET
company_slug = "woonstad-rotterdam"
sitemap_url = f"https://{company_slug}.homerun.co/sitemap.xml"
response = requests.get(sitemap_url, timeout=10)
response.raise_for_status()
# Parse XML sitemap
root = ET.fromstring(response.content)
namespace = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
urls = [loc.text for loc in root.findall(".//sm:loc", namespace)]
print(f"Found {len(urls)} URLs in sitemap")# Filter job URLs from sitemap
job_urls = [
    url for url in urls
    if not url.endswith("/apply")  # Exclude apply pages
    and url.split("/")[-1] != ""  # Exclude homepage (trailing slash)
    and len(url.split("/")) >= 4  # Has a job slug in the path
]
# Also extract lastmod dates for incremental updates
job_data = []
for url_elem in root.findall(".//sm:url", namespace):
    loc = url_elem.find("sm:loc", namespace)
    lastmod = url_elem.find("sm:lastmod", namespace)
    if loc is not None and not loc.text.endswith("/apply"):
        job_data.append({
            "url": loc.text,
            "lastmod": lastmod.text if lastmod is not None else None,
        })
print(f"Found {len(job_urls)} job URLs")
for url in job_urls[:5]:
print(f" - {url}")from bs4 import BeautifulSoup
def parse_job_details(url):
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")

    # Extract job title from h1 in main content
    title_elem = soup.select_one("main h1")
    title = title_elem.get_text(strip=True) if title_elem else "Unknown"

    # Extract full job description from main element
    main_content = soup.select_one("main")
    description_html = str(main_content) if main_content else ""
    description_text = main_content.get_text(strip=True) if main_content else ""

    # Extract any metadata available
    employment_type = None
    location = None

    # Look for common patterns in job cards
    for tag in soup.select("main a, main div"):
        text = tag.get_text(strip=True).lower()
        if any(t in text for t in ["fulltime", "parttime", "full-time", "part-time"]):
            employment_type = text.title()
            break

    return {
        "url": url,
        "title": title,
        "description_text": description_text[:500],
        "description_html": description_html,
        "employment_type": employment_type,
        "location": location,
    }

# Parse first job as example
if job_urls:
    job = parse_job_details(job_urls[0])
    print(f"Title: {job['title']}")
    print(f"Type: {job['employment_type']}")

def fetch_atom_feed(company_slug):
feed_url = f"https://feed.homerun.co/{company_slug}"
try:
response = requests.get(feed_url, timeout=10)
response.raise_for_status()
except requests.RequestException:
print(f"Atom feed not available for {company_slug}")
return []
soup = BeautifulSoup(response.text, "xml")
jobs = []
for entry in soup.find_all("entry"):
title_elem = entry.find("title")
summary_elem = entry.find("summary")
link_elem = entry.find("link")
updated_elem = entry.find("updated")
job = {
"title": title_elem.get_text(strip=True) if title_elem else None,
"summary": summary_elem.get_text(strip=True)[:300] if summary_elem else None,
"url": link_elem.get("href") if link_elem else None,
"updated": updated_elem.get_text(strip=True) if updated_elem else None,
}
# Extract custom namespace elements if available
# Homerun may include department, location, salary in extensions
jobs.append(job)
return jobs
# Example usage
atom_jobs = fetch_atom_feed("breeze")
print(f"Found {len(atom_jobs)} jobs from Atom feed")
for job in atom_jobs[:3]:
print(f" - {job['title']}")import time
def scrape_all_jobs(job_urls, delay=1.0):
    """Scrape all jobs with rate limiting and error handling."""
    jobs = []
    for i, url in enumerate(job_urls, 1):
        try:
            print(f"Scraping {i}/{len(job_urls)}: {url}")
            job = parse_job_details(url)
            jobs.append(job)
            time.sleep(delay)  # Respectful rate limiting
        except requests.RequestException as e:
            print(f"Network error on {url}: {e}")
            continue
        except Exception as e:
            print(f"Parse error on {url}: {e}")
            continue
    return jobs

# Scrape with a 1-second delay between requests
all_jobs = scrape_all_jobs(job_urls, delay=1.0)
print(f"Successfully scraped {len(all_jobs)} jobs")

Homerun uses the .homerun.co domain for all company career pages. Always use the https://{company}.homerun.co format. The .hr extension belongs to a different service.
Not all company slugs are valid. Verify the company subdomain exists by checking if the homepage (https://{company}.homerun.co/) returns HTTP 200 before attempting to scrape jobs.
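A minimal verification sketch along those lines, assuming only the requests library and the https://{company}.homerun.co/ URL pattern described above; candidate_slugs is a hypothetical input list:

import requests

def slug_exists(company_slug, timeout=10):
    """Return True if the Homerun career page responds with HTTP 200."""
    url = f"https://{company_slug}.homerun.co/"
    try:
        response = requests.get(url, timeout=timeout, allow_redirects=True)
        return response.status_code == 200
    except requests.RequestException:
        return False

# Hypothetical candidates -- only scrape slugs that actually resolve
candidate_slugs = ["woonstad-rotterdam", "breeze", "not-a-real-company"]
valid_slugs = [s for s in candidate_slugs if slug_exists(s)]
print(f"{len(valid_slugs)} of {len(candidate_slugs)} slugs are live")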
Homerun does not provide a central sitemap listing all companies. Use web searches (site:*.homerun.co), monitor Homerun's customer showcase page, or check DNS records to find company subdomains.
Some companies use custom domains (e.g., careers.company.com) that redirect to homerun.co. Follow redirects and extract the actual homerun.co URL for consistent scraping. Use allow_redirects=True in requests.
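One way to normalize a custom careers domain back to its homerun.co subdomain, assuming the redirect chain ends on a *.homerun.co URL (a sketch, not guaranteed for every company's setup):

from urllib.parse import urlparse
import requests

def resolve_homerun_url(custom_url, timeout=10):
    """Follow redirects and return the final homerun.co URL, or None."""
    response = requests.get(custom_url, timeout=timeout, allow_redirects=True)
    final_url = response.url  # URL after all redirects
    host = urlparse(final_url).hostname or ""
    if host.endswith(".homerun.co"):
        return final_url
    return None  # Did not land on a homerun.co subdomain

# Hypothetical custom domain; the real mapping varies per company
print(resolve_homerun_url("https://careers.example.com/"))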
The sitemap shows base URLs without language codes. Jobs may appear in multiple languages at /en, /nl, etc. Deduplicate by extracting the job slug from the URL and storing only one language variant.
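A deduplication sketch under the assumption that language variants differ only by an /en or /nl path prefix, keeping the first URL seen per job slug:

from urllib.parse import urlparse

LANG_CODES = {"en", "nl"}  # Assumed language path prefixes

def job_slug(url):
    """Strip any language prefix and return the trailing job slug."""
    parts = [p for p in urlparse(url).path.split("/") if p]
    if parts and parts[0] in LANG_CODES:
        parts = parts[1:]
    return parts[-1] if parts else ""

def dedupe_language_variants(urls):
    """Keep one URL per job slug, preferring the variant seen first."""
    seen = {}
    for url in urls:
        seen.setdefault(job_slug(url), url)
    return list(seen.values())

unique_jobs = dedupe_language_variants(job_urls)
print(f"{len(unique_jobs)} unique jobs after dedup")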
Not all companies have Atom feeds enabled. Always check the response status code before parsing. Fall back to sitemap + HTML parsing if the feed is unavailable.
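Tying the two sources together, a fallback sketch that tries the Atom feed first and drops back to sitemap discovery plus HTML parsing when the feed is unavailable. It reuses fetch_atom_feed, parse_job_details, and scrape_all_jobs from above; get_sitemap_job_urls is a hypothetical wrapper around the earlier sitemap code:

def get_jobs(company_slug):
    """Prefer the Atom feed; fall back to sitemap + HTML parsing."""
    jobs = fetch_atom_feed(company_slug)  # Returns [] when the feed is missing
    if jobs:
        return jobs
    # Fallback path: discover job URLs via the sitemap, then parse each page
    job_urls = get_sitemap_job_urls(company_slug)  # Hypothetical wrapper around the sitemap code above
    return scrape_all_jobs(job_urls)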
1. Use sitemap.xml as the primary discovery method - a single request finds all jobs
2. Check the Atom feed at feed.homerun.co first for richer structured data
3. Implement 1-2 second delays between requests to be respectful
4. Use sitemap lastmod dates for incremental updates instead of re-scraping (see the sketch after this list)
5. Handle language variants by specifying a preferred language (en, nl, etc.)
6. Verify company slugs exist before bulk scraping to avoid 404 errors
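As referenced in tip 4, an incremental-update sketch that compares sitemap lastmod values against timestamps saved from the previous run; last_run_lastmods is an assumed persisted mapping of url to lastmod string (e.g., loaded from a JSON file):

def urls_to_rescrape(job_data, last_run_lastmods):
    """Return only URLs whose sitemap lastmod changed since the last run."""
    changed = []
    for item in job_data:  # job_data built from the sitemap code above
        previous = last_run_lastmods.get(item["url"])
        if item["lastmod"] is None or item["lastmod"] != previous:
            changed.append(item["url"])
    return changed

# Assumed persisted state from the previous run
last_run_lastmods = {}
stale = urls_to_rescrape(job_data, last_run_lastmods)
print(f"{len(stale)} jobs need re-scraping")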
One endpoint. All Homerun jobs. No scraping, no sessions, no maintenance.
Get API access

curl "https://enterprise.jobo.world/api/jobs?sources=homerun" \
  -H "X-Api-Key: YOUR_KEY"

Access Homerun job data today.
One API call. Structured data. No scraping infrastructure to build or maintain — start with the free tier and scale as you grow.