Breezy Jobs API.
Modern recruiting software for small to mid-sized companies with a clean public JSON API.
Try the API.
Test Jobs, Feed, and Auto-Apply endpoints against https://connect.jobo.world with live request/response examples, then copy ready-to-use curl commands.
What's in every response.
Data fields, real-world applications, and the companies already running on Breezy.
- SMB coverage
- Clean JSON API
- Full job descriptions
- Salary data
- Remote work detection
- Multi-location support
- Single endpoint scraping
- SMB job tracking
- Startup recruiting data
- Remote job aggregation
- Salary benchmarking
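Remote job aggregation, one of the use cases above, can be sketched directly against a company's public Breezy endpoint. The `fetch_jobs` and `remote_only` helpers below are illustrative, not part of any official client library:

```python
import requests

def fetch_jobs(slug: str) -> list[dict]:
    """Pull the public job list for one Breezy company (slug is the subdomain)."""
    resp = requests.get(
        f"https://{slug}.breezy.hr/json",
        params={"verbose": "true"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

def remote_only(jobs: list[dict]) -> list[dict]:
    """Keep only jobs whose location object is flagged as remote."""
    return [job for job in jobs if (job.get("location") or {}).get("is_remote")]
```

The `or {}` guard matters because `location` can be null for some postings, as covered in the troubleshooting notes below.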
How to scrape Breezy.
Step-by-step guide to extracting jobs from Breezy-powered career pages—endpoints, authentication, and working code.
import re

def extract_company_slug(careers_url: str) -> str | None:
    """Extract the company slug from a Breezy HR URL."""
    pattern = r"https?://([^.]+)\.breezy\.hr"
    match = re.search(pattern, careers_url)
    return match.group(1) if match else None

# Example usage
url = "https://new-incentives.breezy.hr/"
slug = extract_company_slug(url)
print(f"Company slug: {slug}")  # Output: new-incentives

import requests
def fetch_breezy_jobs(company_slug: str) -> list[dict]:
    """Fetch all jobs from a Breezy HR company."""
    url = f"https://{company_slug}.breezy.hr/json"
    params = {"verbose": "true"}
    response = requests.get(url, params=params, timeout=10)
    response.raise_for_status()
    return response.json()

# Example usage
jobs = fetch_breezy_jobs("new-incentives")
print(f"Found {len(jobs)} active jobs")

def parse_job(job: dict) -> dict:
"""Parse a Breezy job object into a clean format."""
location = job.get("location", {}) or {}
return {
"id": job.get("id"),
"title": job.get("name"),
"department": job.get("department"),
"location": location.get("name", "Not specified"),
"is_remote": location.get("is_remote", False),
"employment_type": job.get("type", {}).get("name"),
"salary": job.get("salary"),
"description_html": job.get("description", ""),
"url": job.get("url"),
"published_date": job.get("published_date"),
"company_name": job.get("company", {}).get("name"),
}
# Parse all jobs
parsed_jobs = [parse_job(job) for job in jobs]
for job in parsed_jobs[:3]:
print(f"{job['title']} - {job['location']}")import requests
def validate_company(company_slug: str) -> dict | None:
    """Validate that a Breezy HR company exists."""
    url = f"https://{company_slug}.breezy.hr/json"
    params = {"verbose": "false"}
    try:
        response = requests.get(url, params=params, timeout=10)
        if response.status_code == 404:
            return None
        response.raise_for_status()
        jobs = response.json()
        # Get the company name from the first job if available
        if jobs:
            return {"name": jobs[0].get("company", {}).get("name"), "job_count": len(jobs)}
        return {"name": None, "job_count": 0}
    except requests.RequestException:
        return None

# Example usage
result = validate_company("new-incentives")
if result:
    print(f"Valid company with {result['job_count']} jobs")
else:
    print("Company not found")

import time
import requests
def fetch_with_retry(company_slug: str, max_retries: int = 3) -> list[dict]:
    """Fetch jobs with retry logic and rate limiting."""
    url = f"https://{company_slug}.breezy.hr/json"
    params = {"verbose": "true"}
    for attempt in range(max_retries):
        try:
            response = requests.get(url, params=params, timeout=10)
            if response.status_code == 404:
                print(f"Company '{company_slug}' not found")
                return []
            if response.status_code == 429:
                wait_time = (attempt + 1) * 2
                print(f"Rate limited, waiting {wait_time}s...")
                time.sleep(wait_time)
                continue
            response.raise_for_status()
            return response.json()
        except requests.RequestException as e:
            if attempt == max_retries - 1:
                print(f"Failed after {max_retries} attempts: {e}")
                return []
            time.sleep(attempt + 1)
    return []

# Rate limit between companies
companies = ["new-incentives", "duolingo"]
for company in companies:
    jobs = fetch_with_retry(company)
    print(f"{company}: {len(jobs)} jobs")
    time.sleep(1)  # Be respectful between requests

Always include the verbose=true parameter in your request URL. Without it, Breezy returns minimal job data without descriptions. Use: ?verbose=true
The company slug may be incorrect or the company may have migrated to another ATS. Verify the slug by checking the actual careers page URL. Some companies use custom domains that redirect to Breezy.
Breezy doesn't publish rate limits. Implement exponential backoff starting with 1-2 second delays between requests. Consider caching results to reduce API calls.
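A minimal in-memory cache along these lines can cut repeat calls; the day-long default TTL is an assumption based on how often job boards typically update, not a documented Breezy refresh interval:

```python
import time

class TTLCache:
    """Minimal in-memory cache with per-entry expiry (sketch)."""

    def __init__(self, ttl_seconds: float = 86400):
        self.ttl = ttl_seconds
        self._store: dict[str, tuple[float, object]] = {}

    def get(self, key: str):
        """Return the cached value, or None if missing or expired."""
        entry = self._store.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.monotonic() > expires_at:
            del self._store[key]
            return None
        return value

    def set(self, key: str, value) -> None:
        """Store a value with an expiry self.ttl seconds from now."""
        self._store[key] = (time.monotonic() + self.ttl, value)
```

In practice you would check the cache before calling `fetch_with_retry` and store the result afterwards, keyed by company slug.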
Breezy fields are optional and depend on what each company fills out. Always use null checks and provide fallback values. Use .get() with defaults when accessing nested properties.
Location can be null, a simple object, or contain nested country/remote_details. Check if location exists before accessing nested properties and handle the is_remote field for remote work detection.
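A defensive normalizer for just the location field might look like the sketch below; the nested `country` handling is hedged because the exact shape varies by company:

```python
def parse_location(job: dict) -> dict:
    """Normalize Breezy's location field, which may be missing, null, or nested."""
    location = job.get("location") or {}
    country = location.get("country") or {}
    return {
        "name": location.get("name") or "Not specified",
        # country is assumed to be either a nested object or a plain string
        "country": country.get("name") if isinstance(country, dict) else country,
        "is_remote": bool(location.get("is_remote")),
    }
```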
Breezy has no company directory API. Maintain your own list of company slugs discovered through web crawling, job board scraping, or manual entry. Use Wayback Machine or Common Crawl for discovery.
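For discovery from crawled pages (Wayback Machine, Common Crawl, or your own crawls), a simple pattern scan over raw HTML is often enough to harvest candidate slugs; this regex is a sketch and may need tightening for your corpus:

```python
import re

# Matches links like https://acme-co.breezy.hr/... and captures the subdomain
BREEZY_LINK = re.compile(r"https?://([a-z0-9-]+)\.breezy\.hr", re.IGNORECASE)

def discover_slugs(html: str) -> set[str]:
    """Collect candidate Breezy slugs from any blob of HTML or text."""
    return {m.group(1).lower() for m in BREEZY_LINK.finditer(html)}
```

Candidates still need validation (e.g. with `validate_company` above) before they go into your slug list.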
1. Always use verbose=true to get complete job descriptions
2. Use verbose=false for lightweight company validation
3. Add 1-2 second delays between requests to avoid rate limiting
4. Cache results, as job boards typically update daily
5. Handle null values gracefully for optional fields like department and salary
6. Extract the company slug from the subdomain pattern for URL construction
One endpoint. All Breezy jobs. No scraping, no sessions, no maintenance.

curl "https://enterprise.jobo.world/api/jobs?sources=breezy" \
  -H "X-Api-Key: YOUR_KEY"

Access Breezy job data today.
One API call. Structured data. No scraping infrastructure to build or maintain — start with the free tier and scale as you grow.