JOIN Jobs API.
A modern ATS platform built on Next.js, with public REST APIs for job listings and detailed job information.
Try the API.
Test Jobs, Feed, and Auto-Apply endpoints against https://connect.jobo.world with live request/response examples, then copy ready-to-use curl commands.
What's in every response.
Data fields, real-world applications, and the companies already running on JOIN.
- Hiring automation
- Candidate sourcing
- Pipeline management
- Interview scheduling
- Analytics
- Multilingual support
- Markdown job descriptions
1. European job board monitoring
2. Multilingual job aggregation
3. Startup talent sourcing
4. Salary data extraction
How to scrape JOIN.
Step-by-step guide to extracting jobs from JOIN-powered career pages—endpoints, authentication, and working code.
import requests
import json
from bs4 import BeautifulSoup
company_slug = "marswalk"
url = f"https://join.com/companies/{company_slug}"
response = requests.get(url)
soup = BeautifulSoup(response.text, "html.parser")
# Find the __NEXT_DATA__ script tag
script_tag = soup.find("script", id="__NEXT_DATA__")
next_data = json.loads(script_tag.string)
# Extract the company ID
company_id = next_data["props"]["pageProps"]["initialState"]["company"]["id"]
print(f"Company ID: {company_id}")import requests
company_id = 98520 # From step 1
url = f"https://join.com/api/public/companies/{company_id}/jobs"
params = {
    "locale": "en-us",
    "page": 1,
    "pageSize": 100,  # Max items per page
}
response = requests.get(url, params=params)
data = response.json()
jobs = data["items"]
pagination = data["pagination"]
print(f"Found {pagination['rowCount']} total jobs across {pagination['pageCount']} pages")for job in jobs:
# Convert salary from cents to actual amount
salary_from = job.get("salaryAmountFrom", {}).get("amount", 0) / 100
salary_to = job.get("salaryAmountTo", {}).get("amount", 0) / 100
currency = job.get("salaryAmountFrom", {}).get("currency", "EUR")
job_info = {
"id": job["id"],
"idParam": job.get("idParam"),
"title": job["title"],
"location": job.get("city", {}).get("cityName"),
"country": job.get("city", {}).get("countryName"),
"workplace_type": job.get("workplaceType"), # ONSITE, REMOTE, HYBRID
"category": job.get("category", {}).get("name"),
"employment_type": job.get("employmentType", {}).get("name"),
"salary_range": f"{salary_from:,.0f} - {salary_to:,.0f} {currency}" if salary_from else None,
"created_at": job.get("createdAt"),
}
print(job_info)import requests
job_id = 15565770 # From step 2
url = f"https://join.com/api/public/jobs/{job_id}"
response = requests.get(url)
job = response.json()
# Extract full job details
full_job = {
    "id": job["id"],
    "title": job["title"],
    "description": job.get("description"),  # Full markdown description
    "intro": job.get("intro"),
    "tasks": job.get("tasks"),
    "requirements": job.get("requirements"),
    "benefits": job.get("benefits"),
    "outro": job.get("outro"),
    "contact_name": job.get("contactName"),
    "contact_email": job.get("contactEmail"),
    "status": job.get("status"),  # ONLINE, OFFLINE, ARCHIVED
}
print(f"Job: {full_job['title']}")
print(f"Status: {full_job['status']}")import requests
def fetch_all_jobs(company_id: int, page_size: int = 100) -> list:
    all_jobs = []
    page = 1
    while True:
        url = f"https://join.com/api/public/companies/{company_id}/jobs"
        params = {"locale": "en-us", "page": page, "pageSize": page_size}
        response = requests.get(url, params=params)
        data = response.json()
        all_jobs.extend(data["items"])
        # Check if we've fetched all pages
        if page >= data["pagination"]["pageCount"]:
            break
        page += 1
    return all_jobs
# Fetch all jobs for a company
jobs = fetch_all_jobs(98520)
print(f"Retrieved {len(jobs)} total jobs")The numeric company ID must be extracted from the __NEXT_DATA__ script tag on the company page. It cannot be derived from the URL slug alone.
All salary values in the API are in cents. Divide by 100 to get the actual amount in the specified currency.
The description, tasks, requirements, and benefits fields are returned in markdown format. Use a markdown parser if you need HTML output.
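If you need HTML, any Markdown converter will do. The sketch below uses the third-party markdown package and assumes the description value comes from the job detail response above.

import markdown  # third-party package: pip install markdown

description_md = full_job.get("description") or ""
description_html = markdown.markdown(description_md)  # convert Markdown to an HTML fragment
print(description_html[:200])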
Filter jobs by checking that status === 'ONLINE'. Jobs with status 'OFFLINE' or 'ARCHIVED' should be excluded from active listings.
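A minimal sketch of that filter, assuming the status field is present on the job dictionaries you have collected (it is confirmed for the single-job endpoint above):

# Keep only active listings; OFFLINE and ARCHIVED jobs are dropped
active_jobs = [job for job in jobs if job.get("status") == "ONLINE"]
print(f"{len(active_jobs)} of {len(jobs)} jobs are currently online")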
Always include the locale parameter (e.g., 'en-us') in API requests to ensure consistent job listings and descriptions in the expected language.
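For example, the same listing can be requested in another language by changing only the locale value. The 'de-de' code below is an assumed example for German and is not confirmed by the endpoints shown above.

# Assumed example: request German-language listings (locale code is illustrative)
params = {"locale": "de-de", "page": 1, "pageSize": 100}
response = requests.get(f"https://join.com/api/public/companies/{company_id}/jobs", params=params)
german_jobs = response.json()["items"]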
1. Extract the company ID from __NEXT_DATA__ before fetching job listings
2. Use pageSize=100 to minimize the number of API calls for pagination
3. Divide salary amounts by 100 to convert from cents to actual currency
4. Filter jobs by status === 'ONLINE' to exclude inactive listings
5. Include the locale parameter for consistent multilingual results
6. Cache company IDs to avoid repeated page parsing for the same company (see the sketch after this list)
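A minimal sketch of the caching idea from point 6, assuming a local JSON file is an acceptable store. The file name and the extract_company_id helper (a wrapper around the first code example) are illustrative, not part of the JOIN API.

import json
import os

CACHE_FILE = "join_company_ids.json"  # illustrative cache location

def get_company_id(company_slug: str) -> int:
    """Return the cached numeric ID for a slug, parsing the company page only on a cache miss."""
    cache = {}
    if os.path.exists(CACHE_FILE):
        with open(CACHE_FILE) as f:
            cache = json.load(f)
    if company_slug in cache:
        return cache[company_slug]
    # Cache miss: fall back to the __NEXT_DATA__ extraction shown in the first code example
    company_id = extract_company_id(company_slug)  # hypothetical wrapper around that code
    cache[company_slug] = company_id
    with open(CACHE_FILE, "w") as f:
        json.dump(cache, f)
    return company_id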
One endpoint. All JOIN jobs. No scraping, no sessions, no maintenance.
Get API access.
curl "https://enterprise.jobo.world/api/jobs?sources=join" \
  -H "X-Api-Key: YOUR_KEY"
Access JOIN job data today.
One API call. Structured data. No scraping infrastructure to build or maintain — start with the free tier and scale as you grow.