
Breezy Jobs API.

Modern recruiting software for small to mid-sized companies with a clean public JSON API.

Breezy
Live
50K+ jobs indexed monthly
<3h average discovery time
1h refresh interval
Companies using Breezy
New Incentives · Duolingo · Small & medium businesses
Developer tools

Try the API.

Test Jobs, Feed, and Auto-Apply endpoints against https://connect.jobo.world with live request/response examples, then copy ready-to-use curl commands.

What's in every response.

Data fields, real-world applications, and the companies already running on Breezy.

Data fields
  • SMB coverage
  • Clean JSON API
  • Full job descriptions
  • Salary data
  • Remote work detection
  • Multi-location support
  • Single endpoint scraping
Use cases
  1. SMB job tracking
  2. Startup recruiting data
  3. Remote job aggregation
  4. Salary benchmarking
Trusted by
New Incentives · Duolingo · Small & medium businesses
DIY GUIDE

How to scrape Breezy.

Step-by-step guide to extracting jobs from Breezy-powered career pages—endpoints, authentication, and working code.

REST · Beginner · Rate limit: unspecified (1-2 second delays between requests recommended) · No auth required

Identify the company slug

Breezy HR uses subdomain-based routing. Each company has a unique slug that forms their subdomain (e.g., company.breezy.hr). Extract the slug from the company's careers page URL.

Step 1: Identify the company slug
import re

def extract_company_slug(careers_url: str) -> str | None:
    """Extract company slug from a Breezy HR URL."""
    pattern = r"https?://([^.]+)\.breezy\.hr"
    match = re.search(pattern, careers_url)
    return match.group(1) if match else None

# Example usage
url = "https://new-incentives.breezy.hr/"
slug = extract_company_slug(url)
print(f"Company slug: {slug}")  # Output: new-incentives

Fetch all job listings

Use the JSON endpoint with verbose=true to retrieve all active jobs with full descriptions in a single request. This is the most efficient way to get complete job data.

Step 2: Fetch all job listings
import requests

def fetch_breezy_jobs(company_slug: str) -> list[dict]:
    """Fetch all jobs from a Breezy HR company."""
    url = f"https://{company_slug}.breezy.hr/json"
    params = {"verbose": "true"}

    response = requests.get(url, params=params, timeout=10)
    response.raise_for_status()

    return response.json()

# Example usage
jobs = fetch_breezy_jobs("new-incentives")
print(f"Found {len(jobs)} active jobs")

Parse job details from response

Extract the fields you need from each job object. The API returns full HTML descriptions, location data, salary information, and employment type in a well-structured format.

Step 3: Parse job details from response
def parse_job(job: dict) -> dict:
    """Parse a Breezy job object into a clean format."""
    location = job.get("location", {}) or {}

    return {
        "id": job.get("id"),
        "title": job.get("name"),
        "department": job.get("department"),
        "location": location.get("name", "Not specified"),
        "is_remote": location.get("is_remote", False),
        "employment_type": job.get("type", {}).get("name"),
        "salary": job.get("salary"),
        "description_html": job.get("description", ""),
        "url": job.get("url"),
        "published_date": job.get("published_date"),
        "company_name": job.get("company", {}).get("name"),
    }

# Parse all jobs
parsed_jobs = [parse_job(job) for job in jobs]
for job in parsed_jobs[:3]:
    print(f"{job['title']} - {job['location']}")

Validate company slug before scraping

Use the verbose=false parameter for a lightweight validation check. This returns basic info without descriptions, useful for verifying a company exists before doing a full scrape.

Step 4: Validate company slug before scraping
import requests

def validate_company(company_slug: str) -> dict | None:
    """Validate if a Breezy HR company exists."""
    url = f"https://{company_slug}.breezy.hr/json"
    params = {"verbose": "false"}

    try:
        response = requests.get(url, params=params, timeout=10)
        if response.status_code == 404:
            return None
        response.raise_for_status()
        jobs = response.json()
        # Get company name from first job if available
        if jobs:
            return {"name": jobs[0].get("company", {}).get("name"), "job_count": len(jobs)}
        return {"name": None, "job_count": 0}
    except requests.RequestException:
        return None

# Example usage
result = validate_company("new-incentives")
if result:
    print(f"Valid company with {result['job_count']} jobs")
else:
    print("Company not found")

Handle rate limiting and errors

Breezy doesn't publish rate limits but may block aggressive requests. Implement retry logic with exponential backoff and handle common HTTP errors gracefully.

Step 5: Handle rate limiting and errors
import time
import requests

def fetch_with_retry(company_slug: str, max_retries: int = 3) -> list[dict]:
    """Fetch jobs with retry logic and rate limiting."""
    url = f"https://{company_slug}.breezy.hr/json"
    params = {"verbose": "true"}

    for attempt in range(max_retries):
        try:
            response = requests.get(url, params=params, timeout=10)

            if response.status_code == 404:
                print(f"Company '{company_slug}' not found")
                return []

            if response.status_code == 429:
                wait_time = (attempt + 1) * 2
                print(f"Rate limited, waiting {wait_time}s...")
                time.sleep(wait_time)
                continue

            response.raise_for_status()
            return response.json()

        except requests.RequestException as e:
            if attempt == max_retries - 1:
                print(f"Failed after {max_retries} attempts: {e}")
                return []
            time.sleep(attempt + 1)

    return []

# Rate limit between companies
companies = ["new-incentives", "duolingo"]
for company in companies:
    jobs = fetch_with_retry(company)
    print(f"{company}: {len(jobs)} jobs")
    time.sleep(1)  # Be respectful between requests
Common issues
Critical: Missing job descriptions in API response

Always include the verbose=true parameter in your request URL. Without it, Breezy returns minimal job data without descriptions. Use: ?verbose=true

High: 404 Not Found when accessing the JSON endpoint

The company slug may be incorrect or the company may have migrated to another ATS. Verify the slug by checking the actual careers page URL. Some companies use custom domains that redirect to Breezy.

Medium: Getting blocked with 429 Too Many Requests

Breezy doesn't publish rate limits. Implement exponential backoff starting with 1-2 second delays between requests. Consider caching results to reduce API calls.

Low: Missing optional fields like department or salary

Breezy fields are optional and depend on what each company fills out. Always use null checks and provide fallback values. Use .get() with defaults when accessing nested properties.

Medium: Location data structure varies between jobs

Location can be null, a simple object, or contain nested country/remote_details. Check if location exists before accessing nested properties and handle the is_remote field for remote work detection.
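A defensive parser covering the shapes described above might look like this (a sketch: the exact nesting of `country` is an assumption based on the variations listed):

```python
def parse_location(job: dict) -> dict:
    """Normalize Breezy's variable location field into a flat dict."""
    location = job.get("location") or {}  # handles null/missing location

    country = location.get("country") or {}
    if isinstance(country, str):  # tolerate a plain-string country value
        country = {"name": country}

    return {
        "location_name": location.get("name", "Not specified"),
        "country": country.get("name"),
        "is_remote": bool(location.get("is_remote", False)),
    }


# Each of the shapes the API may return parses without raising
print(parse_location({"location": None}))
print(parse_location({"location": {"name": "Berlin"}}))
print(parse_location({"location": {"name": "Remote", "is_remote": True,
                                   "country": {"name": "US"}}}))
```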

Medium: Cannot discover all Breezy companies programmatically

Breezy has no company directory API. Maintain your own list of company slugs discovered through web crawling, job board scraping, or manual entry. Use Wayback Machine or Common Crawl for discovery.
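Since there is no directory endpoint, a small registry you maintain yourself does the job. One possible shape (the class and file name are made up for illustration):

```python
import json
from pathlib import Path


class SlugRegistry:
    """Persist discovered Breezy company slugs in a local JSON file."""

    def __init__(self, path: str = "breezy_slugs.json"):
        self.path = Path(path)
        self.slugs: set[str] = set()
        if self.path.exists():
            self.slugs = set(json.loads(self.path.read_text()))

    def add(self, slug: str) -> bool:
        """Add a slug; return True if it was new."""
        slug = slug.strip().lower()
        if not slug or slug in self.slugs:
            return False
        self.slugs.add(slug)
        self.path.write_text(json.dumps(sorted(self.slugs)))
        return True


# Example usage: seed with slugs found by crawling or manual entry
registry = SlugRegistry()
registry.add("new-incentives")
registry.add("duolingo")
print(sorted(registry.slugs))
```

Pair this with the Step 4 validator to prune slugs that start returning 404.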

Best practices
  1. Always use verbose=true to get complete job descriptions
  2. Use verbose=false for lightweight company validation
  3. Add 1-2 second delays between requests to avoid rate limiting
  4. Cache results as job boards typically update daily
  5. Handle null values gracefully for optional fields like department and salary
  6. Extract the company slug from the subdomain pattern for URL construction
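The caching advice above can be sketched as a thin time-to-live wrapper around the Step 2 fetcher (the 24-hour TTL and cache directory name are arbitrary choices, not part of Breezy's API):

```python
import json
import time
from pathlib import Path

CACHE_DIR = Path(".breezy_cache")
CACHE_TTL = 24 * 60 * 60  # seconds; job boards typically update daily


def cached_fetch(company_slug: str, fetch_fn) -> list[dict]:
    """Return cached jobs if fresh, otherwise call fetch_fn and cache."""
    CACHE_DIR.mkdir(exist_ok=True)
    cache_file = CACHE_DIR / f"{company_slug}.json"

    if cache_file.exists():
        age = time.time() - cache_file.stat().st_mtime
        if age < CACHE_TTL:
            return json.loads(cache_file.read_text())

    jobs = fetch_fn(company_slug)
    cache_file.write_text(json.dumps(jobs))
    return jobs


# Example with a stand-in fetcher; swap in fetch_breezy_jobs from Step 2
fake_fetch = lambda slug: [{"name": "Engineer"}]
print(len(cached_fetch("demo-co", fake_fetch)))  # 1
```

On the second call within the TTL window, the wrapper serves the cached file and never touches the network.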
Or skip the complexity

One endpoint. All Breezy jobs. No scraping, no sessions, no maintenance.

Get API access
cURL
curl "https://enterprise.jobo.world/api/jobs?sources=breezy" \
  -H "X-Api-Key: YOUR_KEY"
Ready to integrate

Access Breezy job data today.

One API call. Structured data. No scraping infrastructure to build or maintain — start with the free tier and scale as you grow.

99.9% API uptime
<200ms Avg response
50M+ Jobs processed