CareerPlug Jobs API.
Hiring software for small businesses with applicant tracking and onboarding tools.
Try the API.
Test Jobs, Feed, and Auto-Apply endpoints against https://connect.jobo.world with live request/response examples, then copy ready-to-use curl commands.
What's in every response.
Data fields, real-world applications, and the companies already running on CareerPlug.
- SMB-focused ATS
- Applicant tracking
- Onboarding tools
- Mobile-friendly job boards
- Indeed integration
- Small business hiring
- Retail staffing
- Franchise recruitment
- Hourly worker sourcing
How to scrape CareerPlug.
Step-by-step guide to extracting jobs from CareerPlug-powered career pages: endpoints, authentication, and working code.
import requests
from bs4 import BeautifulSoup

company_slug = "bemobile"
url = f"https://{company_slug}.careerplug.com/jobs"
headers = {
    "User-Agent": "Mozilla/5.0 (compatible; JobScraper/1.0)",
    "Accept": "text/html",
}

response = requests.get(url, headers=headers, timeout=10)
response.raise_for_status()
soup = BeautifulSoup(response.text, "html.parser")
print(f"Fetched page: {response.url}")

# Find the job container (supports both modern and legacy layouts)
job_container = soup.select_one("#job_table, #job-list")
jobs = []  # defined up front so later steps work even if no container is found
if job_container:
    job_links = job_container.select("div > a[href^='/jobs/']")
    for link in job_links:
        title_elem = link.select_one(".job-title .name")
        location_elem = link.select_one(".job-location")
        jobs.append({
            "title": title_elem.get_text(strip=True) if title_elem else None,
            "location": location_elem.get_text(strip=True) if location_elem else None,
            "url": f"https://{company_slug}.careerplug.com{link['href']}",
            "job_id": link['href'].split('/')[-1],
        })

print(f"Found {len(jobs)} jobs")
for job in jobs[:3]:
    print(f"  - {job['title']} @ {job['location']}")

def fetch_job_details(job_url: str) -> dict:
"""Fetch full job details from a CareerPlug job page."""
response = requests.get(job_url, headers=headers, timeout=10)
response.raise_for_status()
soup = BeautifulSoup(response.text, "html.parser")
# Extract job details
title = soup.select_one("h1")
description = soup.select_one(".trix-content, .job-description, [class*='description']")
location_info = soup.select_one("h1 + p, [class*='location']")
return {
"title": title.get_text(strip=True) if title else None,
"description": description.get_text(strip=True) if description else None,
"description_html": str(description) if description else None,
"location": location_info.get_text(strip=True) if location_info else None,
"url": job_url,
}
# Fetch details for the first job
if jobs:
details = fetch_job_details(jobs[0]["url"])
print(f"Title: {details['title']}")
print(f"Location: {details['location']}")
print(f"Description length: {len(details['description'] or '')} chars")import time
def fetch_all_jobs(company_slug: str, max_pages: int = 10) -> list:
    """Fetch all jobs from a CareerPlug company with pagination."""
    all_jobs = []
    base_url = f"https://{company_slug}.careerplug.com/jobs"
    for page in range(1, max_pages + 1):
        url = f"{base_url}?page={page}" if page > 1 else base_url
        response = requests.get(url, headers=headers, timeout=10)
        if response.status_code == 404:
            break
        soup = BeautifulSoup(response.text, "html.parser")
        job_container = soup.select_one("#job_table, #job-list")
        if not job_container:
            break
        job_links = job_container.select("div > a[href^='/jobs/']")
        if not job_links:
            break
        for link in job_links:
            title_elem = link.select_one(".job-title .name")
            location_elem = link.select_one(".job-location")
            all_jobs.append({
                "title": title_elem.get_text(strip=True) if title_elem else None,
                "location": location_elem.get_text(strip=True) if location_elem else None,
                "url": f"https://{company_slug}.careerplug.com{link['href']}",
            })
        print(f"Page {page}: Found {len(job_links)} jobs (total: {len(all_jobs)})")
        time.sleep(1)  # Be respectful
    return all_jobs

jobs = fetch_all_jobs("bemobile")
print(f"Total jobs collected: {len(jobs)}")

CareerPlug uses subdomain-based company identification. Verify the correct subdomain by checking the company's careers page URL or searching for patterns like '{company}.careerplug.com'.
Do not use the /jobs.js endpoint as it is explicitly disallowed in robots.txt. Use the standard HTML /jobs page instead and parse the content with BeautifulSoup.
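You can verify this rule programmatically before crawling. A short sketch using Python's standard-library urllib.robotparser (the exact allow/disallow results depend on the live robots.txt):

from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://bemobile.careerplug.com/robots.txt")
rp.read()

# Per the note above, /jobs should be allowed and /jobs.js disallowed
print(rp.can_fetch("JobScraper", "https://bemobile.careerplug.com/jobs"))
print(rp.can_fetch("JobScraper", "https://bemobile.careerplug.com/jobs.js"))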
Direct job URLs (/jobs/{id}) redirect to application pages (/jobs/{id}/apps/new). The full job description is embedded in the HTML of the application page, so parse it from there.
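requests follows these redirects automatically, so no special handling is needed beyond reading response.url for the final address. A brief sketch (the job ID is a placeholder; company_slug and headers come from the first example):

resp = requests.get(
    f"https://{company_slug}.careerplug.com/jobs/12345",  # placeholder job ID
    headers=headers,
    timeout=10,
)
print(resp.url)      # final URL after redirects, e.g. .../jobs/12345/apps/new
print(resp.history)  # the intermediate 3xx responses, if any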
CareerPlug formats locations as '{State}-{City}-{Zip}' (e.g., 'NH-Keene-03431'). Split the string by hyphens to extract individual components.
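Because city names can themselves contain hyphens (e.g., Winston-Salem), it is safer to split from the outer edges than on every hyphen. A small sketch:

def parse_location(raw: str) -> dict:
    """Parse CareerPlug's '{State}-{City}-{Zip}' location format."""
    state, _, rest = raw.partition("-")       # peel the state off the front
    city, _, zip_code = rest.rpartition("-")  # peel the zip off the back
    return {"state": state, "city": city, "zip": zip_code}

print(parse_location("NH-Keene-03431"))
# {'state': 'NH', 'city': 'Keene', 'zip': '03431'}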
CareerPlug supports multiple themes. Use fallback selectors like '#job_table, #job-list' for containers and multiple description selectors to handle variations.
CareerPlug has no REST or GraphQL API. All job data must be extracted via HTML parsing. The /jobs.js endpoint returns JavaScript code, not JSON.
1. Use the HTML /jobs endpoint instead of the blocked /jobs.js endpoint
2. Implement fallback selectors to handle different CareerPlug themes (#job_table, #job-list)
3. Add 1-2 second delays between requests to be respectful
4. Parse the location format '{State}-{City}-{Zip}' for structured data
5. Handle URL redirects when fetching individual job details
6. Cache results, since job boards typically update daily (see the sketch after this list)
7. Use S3 sitemaps for company discovery if needed
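A minimal on-disk cache covers point 6. A sketch, with an arbitrary 24-hour TTL and a local JSON file (neither is a CareerPlug requirement):

import json
import time
from pathlib import Path

CACHE_TTL = 24 * 60 * 60  # seconds; job boards typically update daily

def cached_fetch_all_jobs(company_slug: str) -> list:
    """Serve jobs from a local JSON cache, refetching at most once per TTL."""
    cache_file = Path(f"careerplug_{company_slug}.json")
    if cache_file.exists() and time.time() - cache_file.stat().st_mtime < CACHE_TTL:
        return json.loads(cache_file.read_text())
    jobs = fetch_all_jobs(company_slug)  # from the pagination example above
    cache_file.write_text(json.dumps(jobs))
    return jobs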
One endpoint. All CareerPlug jobs. No scraping, no sessions, no maintenance.
Get API access

curl "https://enterprise.jobo.world/api/jobs?sources=careerplug" \
  -H "X-Api-Key: YOUR_KEY"

Access CareerPlug job data today.
One API call. Structured data. No scraping infrastructure to build or maintain — start with the free tier and scale as you grow.
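The same call from Python, assuming only what the curl command above shows (the endpoint, the sources parameter, and the X-Api-Key header; the response is presumably JSON, but check the API docs for the exact schema):

import requests

response = requests.get(
    "https://enterprise.jobo.world/api/jobs",
    params={"sources": "careerplug"},
    headers={"X-Api-Key": "YOUR_KEY"},  # replace with your real key
    timeout=30,
)
response.raise_for_status()
print(response.json())  # assumed JSON payload of structured job records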