Polymer Jobs API.
Modern ATS platform with clean JSON APIs, popular with Y Combinator startups and tech companies.
Try the API.
Test Jobs, Feed, and Auto-Apply endpoints against https://connect.jobo.world with live request/response examples, then copy ready-to-use curl commands.
What's in every response.
Data fields, real-world applications, and the companies already running on Polymer.
- Clean REST API
- Salary information
- Remote work details
- Job category data
- Application questions
- Custom domain support
- Startup job tracking
- Tech talent sourcing
- Remote job monitoring
- Salary data extraction
How to scrape Polymer.
Step-by-step guide to extracting jobs from Polymer-powered career pages—endpoints, authentication, and working code.
import requests
# Custom domain (recommended - no Cloudflare)
org_slug = "violet-labs"
base_url = "https://jobs.violetlabs.com"
# Main domain (requires FlareSolverr for Cloudflare bypass)
# base_url = f"https://jobs.polymer.co/{org_slug}"
listings_url = f"{base_url}/api/v1/public/organizations/{org_slug}/jobs"
print(f"Listings endpoint: {listings_url}")import requests
org_slug = "violet-labs"
base_url = "https://jobs.violetlabs.com"
url = f"{base_url}/api/v1/public/organizations/{org_slug}/jobs"
params = {"page": 1}
headers = {"Accept": "application/json"}
response = requests.get(url, params=params, headers=headers)
data = response.json()
jobs = data.get("items", [])
meta = data.get("meta", {})
print(f"Found {meta.get('total', 0)} jobs across {meta.get('count', 0)} pages")
print(f"Is last page: {meta.get('is_last', True)}")for job in jobs:
print({
"id": job.get("id"),
"title": job.get("title"),
"location": job.get("display_location"),
"remote": job.get("remoteness_pretty"),
"employment_type": job.get("kind_pretty"),
"salary": job.get("salary_pretty"),
"category": job.get("job_category_name"),
"url": job.get("job_post_url"),
"published_at": job.get("published_at"),
})import requests
import time
base_url = "https://jobs.violetlabs.com"
org_slug = "violet-labs"
def get_job_details(job_id: int) -> dict:
    url = f"{base_url}/api/v1/public/organizations/{org_slug}/jobs/{job_id}"
    response = requests.get(url)
    return response.json()

# Fetch details for each job
for job in jobs[:5]:  # Limit for example
    details = get_job_details(job["id"])
    print(f"Title: {details.get('title')}")
    print(f"Description length: {len(details.get('description', ''))}")
    print(f"Questions: {len(details.get('questions', []))}")
    time.sleep(0.5)  # Be respectful

import requests

def fetch_all_jobs(base_url: str, org_slug: str) -> list:
    all_jobs = []
    page = 1
    while True:
        url = f"{base_url}/api/v1/public/organizations/{org_slug}/jobs"
        params = {"page": page}
        response = requests.get(url, params=params)
        data = response.json()
        jobs = data.get("items", [])
        meta = data.get("meta", {})
        all_jobs.extend(jobs)
        if meta.get("is_last", True):
            break
        page += 1
    return all_jobs

jobs = fetch_all_jobs("https://jobs.violetlabs.com", "violet-labs")
print(f"Total jobs fetched: {len(jobs)}")

Use FlareSolverr or a similar Cloudflare bypass tool for the main domain. Alternatively, find and use the company's custom domain (e.g., jobs.company.com), which typically has no protection.
The listings API only returns metadata. You must make a separate API call to the job details endpoint for each job to get the full description HTML.
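For instance, a sketch that fetches the first listed job's details and flattens the HTML description to plain text; the _TextExtractor helper is our own, not part of the Polymer API, and the snippet assumes at least one job is listed.

import requests
from html.parser import HTMLParser

# Sketch: fetch one job's details, then reduce the HTML description to plain text.
# _TextExtractor is a local helper, not something Polymer provides.
class _TextExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)

base_url = "https://jobs.violetlabs.com"
org_slug = "violet-labs"
listings = requests.get(
    f"{base_url}/api/v1/public/organizations/{org_slug}/jobs", params={"page": 1}
).json()
first_job = listings.get("items", [{}])[0]  # assumes at least one job exists

details = requests.get(
    f"{base_url}/api/v1/public/organizations/{org_slug}/jobs/{first_job.get('id')}"
).json()

extractor = _TextExtractor()
extractor.feed(details.get("description", ""))
plain_text = " ".join(" ".join(extractor.chunks).split())
print(plain_text[:200])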
Check if the URL uses jobs.polymer.co (main domain) or a custom domain like jobs.company.com. Extract the org-slug from the URL path and construct API URLs accordingly.
The org-slug shown in the page URL may differ from the slug the API endpoint expects. Always verify it by checking the meta.organization_name field in the API response.
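A sketch covering both points, assuming the slug for custom domains is already known; parse_polymer_url is a hypothetical helper, not part of Polymer.

import requests
from urllib.parse import urlparse

# Sketch: work out base_url and org_slug from a careers URL, then confirm the slug
# against meta.organization_name. parse_polymer_url is a hypothetical helper.
def parse_polymer_url(url: str, org_slug: str | None = None) -> tuple[str, str]:
    parsed = urlparse(url)
    if parsed.netloc == "jobs.polymer.co":
        # Main domain: the org slug is the first path segment.
        org_slug = parsed.path.strip("/").split("/")[0]
        base_url = f"https://jobs.polymer.co/{org_slug}"
    else:
        # Custom domain: the slug is not in the URL, so it must be supplied.
        if org_slug is None:
            raise ValueError("org_slug must be supplied for custom domains")
        base_url = f"https://{parsed.netloc}"
    return base_url, org_slug

base_url, org_slug = parse_polymer_url("https://jobs.violetlabs.com", org_slug="violet-labs")
listings_url = f"{base_url}/api/v1/public/organizations/{org_slug}/jobs"
meta = requests.get(listings_url, params={"page": 1}).json().get("meta", {})
print(f"API reports organization: {meta.get('organization_name')}")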
Although no explicit rate limits are documented, add delays between requests (500ms-1s) and implement exponential backoff on errors.
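A sketch of that pattern; fetch_with_backoff, the retry count, and the delay values are our own choices.

import time
import requests

# Sketch: polite fetching with a base delay and exponential backoff on errors.
def fetch_with_backoff(url: str, params: dict | None = None, max_retries: int = 5) -> dict:
    delay = 1.0
    for attempt in range(max_retries):
        try:
            response = requests.get(url, params=params, timeout=30)
            response.raise_for_status()
            return response.json()
        except requests.RequestException:
            if attempt == max_retries - 1:
                raise
            time.sleep(delay)
            delay *= 2  # back off exponentially: 1s, 2s, 4s, ...
    return {}

data = fetch_with_backoff(
    "https://jobs.violetlabs.com/api/v1/public/organizations/violet-labs/jobs",
    params={"page": 1},
)
time.sleep(0.5)  # keep 500ms-1s between consecutive requests
print(len(data.get("items", [])))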
- Prefer custom domains over jobs.polymer.co to avoid Cloudflare bypass overhead
- Cache job listings and only fetch descriptions for new or updated jobs (see the sketch after this list)
- Use the meta.is_last field for pagination rather than guessing total pages
- Add a 500ms-1s delay between detail requests to be respectful
- Extract the org-slug from the job_post_url field if the API slug differs from the URL slug
- Handle missing salary_pretty gracefully, as not all jobs include compensation data
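The caching idea from the list above could look like the following sketch; the polymer_seen_jobs.json file name and the id-based change detection are assumptions.

import json
import pathlib
import requests

# Sketch: remember which job ids have been seen and only fetch details for new ones.
cache_file = pathlib.Path("polymer_seen_jobs.json")
seen = set(json.loads(cache_file.read_text())) if cache_file.exists() else set()

base_url = "https://jobs.violetlabs.com"
org_slug = "violet-labs"
listings = requests.get(
    f"{base_url}/api/v1/public/organizations/{org_slug}/jobs", params={"page": 1}
).json()

new_jobs = [job for job in listings.get("items", []) if job.get("id") not in seen]
for job in new_jobs:
    detail_url = f"{base_url}/api/v1/public/organizations/{org_slug}/jobs/{job['id']}"
    details = requests.get(detail_url).json()  # full description, questions, etc.
    seen.add(job["id"])

cache_file.write_text(json.dumps(sorted(seen)))
print(f"Fetched {len(new_jobs)} new job descriptions")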
One endpoint. All Polymer jobs. No scraping, no sessions, no maintenance.
Get API access.
curl "https://enterprise.jobo.world/api/jobs?sources=polymer" \
  -H "X-Api-Key: YOUR_KEY"
Access Polymer job data today.
One API call. Structured data. No scraping infrastructure to build or maintain — start with the free tier and scale as you grow.
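The same request as the curl example above, from Python; the response shape isn't documented here, so this sketch only reports the status code.

import requests

# Equivalent of the curl command above; replace YOUR_KEY with your API key.
response = requests.get(
    "https://enterprise.jobo.world/api/jobs",
    params={"sources": "polymer"},
    headers={"X-Api-Key": "YOUR_KEY"},
)
print(response.status_code)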