All platforms

Polymer Jobs API.

Modern ATS platform with clean JSON APIs, popular with Y Combinator startups and tech companies.

Polymer
Live
50K+ jobs indexed monthly
<3h average discovery time
1h refresh interval
Companies using Polymer
Violet Labs · Y Combinator startups
Developer tools

Try the API.

Test Jobs, Feed, and Auto-Apply endpoints against https://connect.jobo.world with live request/response examples, then copy ready-to-use curl commands.

What's in every response.

Data fields, real-world applications, and the companies already running on Polymer.

Data fields
  • Clean REST API
  • Salary information
  • Remote work details
  • Job category data
  • Application questions
  • Custom domain support
Use cases
  1. Startup job tracking
  2. Tech talent sourcing
  3. Remote job monitoring
  4. Salary data extraction
Trusted by
Violet Labs · Y Combinator startups
DIY GUIDE

How to scrape Polymer.

Step-by-step guide to extracting jobs from Polymer-powered career pages—endpoints, authentication, and working code.

REST · Intermediate · No explicit limits observed · No auth

Identify the API endpoint

Polymer uses two types of domains: the main jobs.polymer.co (Cloudflare protected) and custom domains like jobs.company.com (direct access). Prefer custom domains when available: they respond directly, so scraping is faster and no Cloudflare bypass is needed.

Step 1: Identify the API endpoint
# Custom domain (recommended - no Cloudflare)
org_slug = "violet-labs"
base_url = "https://jobs.violetlabs.com"

# Main domain (requires FlareSolverr for Cloudflare bypass)
# base_url = "https://jobs.polymer.co"

listings_url = f"{base_url}/api/v1/public/organizations/{org_slug}/jobs"
print(f"Listings endpoint: {listings_url}")

Fetch job listings

Call the listings API to retrieve all job metadata. Note that this endpoint does NOT include job descriptions - only title, location, salary, and other metadata fields.

Step 2: Fetch job listings
import requests

org_slug = "violet-labs"
base_url = "https://jobs.violetlabs.com"
url = f"{base_url}/api/v1/public/organizations/{org_slug}/jobs"

params = {"page": 1}
headers = {"Accept": "application/json"}

response = requests.get(url, params=params, headers=headers)
data = response.json()

jobs = data.get("items", [])
meta = data.get("meta", {})

print(f"Found {meta.get('total', 0)} jobs across {meta.get('count', 0)} pages")
print(f"Is last page: {meta.get('is_last', True)}")

Parse job metadata

Extract the available fields from the listings response. Key fields include salary_pretty, remoteness_pretty, and job_category_name.

Step 3: Parse job metadata
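# "jobs" is the list of items returned by the Step 2 listings call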
for job in jobs:
    print({
        "id": job.get("id"),
        "title": job.get("title"),
        "location": job.get("display_location"),
        "remote": job.get("remoteness_pretty"),
        "employment_type": job.get("kind_pretty"),
        "salary": job.get("salary_pretty"),
        "category": job.get("job_category_name"),
        "url": job.get("job_post_url"),
        "published_at": job.get("published_at"),
    })

Fetch job descriptions

Since the listings API doesn't include descriptions, you must make a separate API call for each job to get the full HTML description and application questions.

Step 4: Fetch job descriptions
import requests
import time

base_url = "https://jobs.violetlabs.com"
org_slug = "violet-labs"

def get_job_details(job_id: int) -> dict:
    url = f"{base_url}/api/v1/public/organizations/{org_slug}/jobs/{job_id}"
    response = requests.get(url)
    return response.json()

# Fetch details for each job
for job in jobs[:5]:  # Limit for example
    details = get_job_details(job["id"])
    print(f"Title: {details.get('title')}")
    print(f"Description length: {len(details.get('description', ''))}")
    print(f"Questions: {len(details.get('questions', []))}")
    time.sleep(0.5)  # Be respectful

Handle pagination

Use the meta.is_last field to check if more pages exist. Increment the page parameter until you've fetched all jobs.

Step 5: Handle pagination
import requests

def fetch_all_jobs(base_url: str, org_slug: str) -> list:
    all_jobs = []
    page = 1

    while True:
        url = f"{base_url}/api/v1/public/organizations/{org_slug}/jobs"
        params = {"page": page}

        response = requests.get(url, params=params)
        data = response.json()

        jobs = data.get("items", [])
        meta = data.get("meta", {})

        all_jobs.extend(jobs)

        if meta.get("is_last", True):
            break

        page += 1

    return all_jobs

jobs = fetch_all_jobs("https://jobs.violetlabs.com", "violet-labs")
print(f"Total jobs fetched: {len(jobs)}")
Common issues
Critical: Cloudflare Turnstile protection on jobs.polymer.co

Use FlareSolverr or a similar Cloudflare bypass tool for the main domain. Alternatively, find and use the company's custom domain (e.g., jobs.company.com) which typically has no protection.
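
A minimal sketch of the first option, assuming a FlareSolverr instance running locally on its default port (8191): solve the challenge once against the career page, then reuse the returned cookies and user agent in a plain requests session.

import requests

# Assumes FlareSolverr is running locally on its default port
FLARESOLVERR_URL = "http://localhost:8191/v1"

def solve_cloudflare(target_url: str) -> requests.Session:
    """Solve the Turnstile challenge once and reuse cookies + user agent."""
    payload = {"cmd": "request.get", "url": target_url, "maxTimeout": 60000}
    resp = requests.post(FLARESOLVERR_URL, json=payload, timeout=70)
    resp.raise_for_status()
    solution = resp.json()["solution"]

    session = requests.Session()
    session.headers["User-Agent"] = solution["userAgent"]
    for cookie in solution["cookies"]:
        session.cookies.set(cookie["name"], cookie["value"])
    return session

# Reuse the solved session for main-domain API calls
session = solve_cloudflare("https://jobs.polymer.co/violet-labs")
# session.get("https://jobs.polymer.co/api/v1/public/organizations/violet-labs/jobs")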

High: Missing job descriptions in listings response

The listings API only returns metadata. You must make a separate API call to the job details endpoint for each job to get the full description HTML.

Medium: Custom domain detection and URL construction

Check if the URL uses jobs.polymer.co (main domain) or a custom domain like jobs.company.com. Extract the org-slug from the URL path and construct API URLs accordingly.
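
A sketch of that branching logic; the helper name is illustrative, and custom domains are assumed to need the slug supplied separately (for example, taken from a job_post_url field).

from urllib.parse import urlparse

def build_listings_url(page_url: str, org_slug: str | None = None) -> str:
    """Derive the listings endpoint from a Polymer career-page URL."""
    parsed = urlparse(page_url)
    if parsed.netloc == "jobs.polymer.co":
        # Main domain: the slug is the first path segment
        org_slug = parsed.path.strip("/").split("/")[0]
        base = "https://jobs.polymer.co"
    else:
        # Custom domain: the slug must be known or discovered separately
        if not org_slug:
            raise ValueError("org_slug is required for custom domains")
        base = f"{parsed.scheme}://{parsed.netloc}"
    return f"{base}/api/v1/public/organizations/{org_slug}/jobs"

print(build_listings_url("https://jobs.polymer.co/violet-labs"))
print(build_listings_url("https://jobs.violetlabs.com", "violet-labs"))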

Low: Organization slug mismatch between URL and API

The org-slug in the page URL may differ from the slug the API expects. Always verify by checking the meta.organization_name field in the API response.
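
A quick sanity check along those lines, reusing the example endpoint from Step 2:

import requests

url = "https://jobs.violetlabs.com/api/v1/public/organizations/violet-labs/jobs"
meta = requests.get(url, params={"page": 1}, timeout=30).json().get("meta", {})

# The name the API reports is authoritative; the URL slug may drift
print(f"API reports organization: {meta.get('organization_name')}")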

Medium: Rate limiting on high-volume requests

Although no explicit rate limits are documented, add delays between requests (500ms-1s) and implement exponential backoff on errors.
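
One way to implement that backoff, as a sketch wrapping requests.get; the retried status codes and delays are conservative assumptions, not documented Polymer behavior.

import time
import requests

def get_with_backoff(url: str, max_retries: int = 5, **kwargs) -> requests.Response:
    """GET with exponential backoff on throttling or server errors."""
    delay = 1.0
    response = None
    for _ in range(max_retries):
        response = requests.get(url, timeout=30, **kwargs)
        if response.status_code not in (429, 500, 502, 503):
            return response
        time.sleep(delay)
        delay *= 2  # 1s, 2s, 4s, ...
    response.raise_for_status()  # surface the final error
    return response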

Best practices
  1. Prefer custom domains over jobs.polymer.co to avoid Cloudflare bypass overhead
  2. Cache job listings and only fetch descriptions for new or updated jobs (see the sketch after this list)
  3. Use the meta.is_last field for pagination rather than guessing total pages
  4. Add 500ms-1s delay between detail requests to be respectful
  5. Extract org-slug from job_post_url field if the API slug differs from URL slug
  6. Handle missing salary_pretty gracefully as not all jobs include compensation data
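
A minimal caching sketch for practice 2; the cache file name and the use of published_at as a change marker are assumptions.

import json
from pathlib import Path

CACHE_FILE = Path("polymer_seen_jobs.json")  # hypothetical cache location

def load_seen() -> dict:
    return json.loads(CACHE_FILE.read_text()) if CACHE_FILE.exists() else {}

# "jobs" is the list returned by fetch_all_jobs in Step 5
seen = load_seen()
new_or_updated = [
    job for job in jobs
    if seen.get(str(job["id"])) != job.get("published_at")
]

# Only these jobs need a detail fetch; mark them seen once processed
for job in new_or_updated:
    seen[str(job["id"])] = job.get("published_at")
CACHE_FILE.write_text(json.dumps(seen))
print(f"{len(new_or_updated)} of {len(jobs)} jobs need a detail fetch")
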
Or skip the complexity

One endpoint. All Polymer jobs. No scraping, no sessions, no maintenance.

Get API access
cURL
curl "https://enterprise.jobo.world/api/jobs?sources=polymer" \
  -H "X-Api-Key: YOUR_KEY"
Ready to integrate

Access Polymer job data today.

One API call. Structured data. No scraping infrastructure to build or maintain — start with the free tier and scale as you grow.

99.9% API uptime
<200ms Avg response
50M+ Jobs processed