
CareerPlug Jobs API.

Hiring software for small businesses with applicant tracking and onboarding tools.

CareerPlug
Live
25K+ jobs indexed monthly
<3h average discovery time
1h refresh interval
Companies using CareerPlug
BeMobile, Retail franchises, Service industry businesses
Developer tools

Try the API.

Test Jobs, Feed, and Auto-Apply endpoints against https://connect.jobo.world with live request/response examples, then copy ready-to-use curl commands.

What's in every response.

Data fields, real-world applications, and the companies already running on CareerPlug.

Data fields
  • SMB-focused ATS
  • Applicant tracking
  • Onboarding tools
  • Mobile-friendly job boards
  • Indeed integration
Use cases
  1. Small business hiring
  2. Retail staffing
  3. Franchise recruitment
  4. Hourly worker sourcing
Trusted by
BeMobile, Retail franchises, Service industry businesses
DIY GUIDE

How to scrape CareerPlug.

Step-by-step guide to extracting jobs from CareerPlug-powered career pages—endpoints, authentication, and working code.

Format: HTML · Difficulty: beginner · Auth: none · Rate limits: no official limits; use 1-2 second delays between requests

Fetch the job listings page

Request the main jobs page from the company's CareerPlug subdomain. The page contains all job listings rendered as HTML.

Step 1: Fetch the job listings page
import requests
from bs4 import BeautifulSoup

company_slug = "bemobile"
url = f"https://{company_slug}.careerplug.com/jobs"

headers = {
    "User-Agent": "Mozilla/5.0 (compatible; JobScraper/1.0)",
    "Accept": "text/html",
}

response = requests.get(url, headers=headers, timeout=10)
response.raise_for_status()
soup = BeautifulSoup(response.text, "html.parser")
print(f"Fetched page: {response.url}")

Parse job listings from HTML

Extract job titles, locations, and URLs from the job container. CareerPlug uses either #job_table or #job-list containers depending on the theme.

Step 2: Parse job listings from HTML
# Find the job container (supports both modern and legacy layouts)
job_container = soup.select_one("#job_table, #job-list")

if job_container:
    job_links = job_container.select("div > a[href^='/jobs/']")
    jobs = []

    for link in job_links:
        title_elem = link.select_one(".job-title .name")
        location_elem = link.select_one(".job-location")

        jobs.append({
            "title": title_elem.get_text(strip=True) if title_elem else None,
            "location": location_elem.get_text(strip=True) if location_elem else None,
            "url": f"https://{company_slug}.careerplug.com{link['href']}",
            "job_id": link['href'].split('/')[-1],
        })

    print(f"Found {len(jobs)} jobs")
    for job in jobs[:3]:
        print(f"  - {job['title']} @ {job['location']}")

Fetch job details

Request individual job pages to get the full description. Job URLs redirect to application pages that contain the complete job description in HTML.

Step 3: Fetch job details
def fetch_job_details(job_url: str) -> dict:
    """Fetch full job details from a CareerPlug job page."""
    response = requests.get(job_url, headers=headers, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")

    # Extract job details
    title = soup.select_one("h1")
    description = soup.select_one(".trix-content, .job-description, [class*='description']")
    location_info = soup.select_one("h1 + p, [class*='location']")

    return {
        "title": title.get_text(strip=True) if title else None,
        "description": description.get_text(strip=True) if description else None,
        "description_html": str(description) if description else None,
        "location": location_info.get_text(strip=True) if location_info else None,
        "url": job_url,
    }

# Fetch details for the first job
if jobs:
    details = fetch_job_details(jobs[0]["url"])
    print(f"Title: {details['title']}")
    print(f"Location: {details['location']}")
    print(f"Description length: {len(details['description'] or '')} chars")

Handle pagination

CareerPlug uses standard query parameter pagination. Iterate through pages until no more jobs are found.

Step 4: Handle pagination
import time

def fetch_all_jobs(company_slug: str, max_pages: int = 10) -> list:
    """Fetch all jobs from a CareerPlug company with pagination."""
    all_jobs = []
    base_url = f"https://{company_slug}.careerplug.com/jobs"

    for page in range(1, max_pages + 1):
        url = f"{base_url}?page={page}" if page > 1 else base_url
        response = requests.get(url, headers=headers, timeout=10)

        if response.status_code != 200:  # 404 past the last page; also bail on server errors
            break

        soup = BeautifulSoup(response.text, "html.parser")
        job_container = soup.select_one("#job_table, #job-list")

        if not job_container:
            break

        job_links = job_container.select("div > a[href^='/jobs/']")

        if not job_links:
            break

        for link in job_links:
            title_elem = link.select_one(".job-title .name")
            location_elem = link.select_one(".job-location")

            all_jobs.append({
                "title": title_elem.get_text(strip=True) if title_elem else None,
                "location": location_elem.get_text(strip=True) if location_elem else None,
                "url": f"https://{company_slug}.careerplug.com{link['href']}",
            })

        print(f"Page {page}: Found {len(job_links)} jobs (total: {len(all_jobs)})")
        time.sleep(1)  # Be respectful

    return all_jobs

jobs = fetch_all_jobs("bemobile")
print(f"Total jobs collected: {len(jobs)}")
Common issues
High: Company subdomain not found (404 error)

CareerPlug uses subdomain-based company identification. Verify the correct subdomain by checking the company's careers page URL or searching for patterns like '{company}.careerplug.com'.
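A quick way to validate a slug before scraping is a HEAD request against the canonical URL. This is a minimal sketch; the idea of probing with HEAD is a convenience choice, not an official validation method:

```python
import requests

def careers_url(company_slug: str) -> str:
    """Build the canonical CareerPlug jobs URL for a company slug."""
    return f"https://{company_slug}.careerplug.com/jobs"

def subdomain_exists(company_slug: str, timeout: int = 10) -> bool:
    """Check whether a CareerPlug subdomain resolves to a live jobs page.

    A 4xx/5xx status or a connection error suggests the slug is wrong;
    try variations such as the company name without spaces.
    """
    try:
        response = requests.head(
            careers_url(company_slug), allow_redirects=True, timeout=timeout
        )
        return response.status_code < 400
    except requests.RequestException:
        return False
```

Call `subdomain_exists("bemobile")` once up front and fail fast instead of discovering a bad slug mid-crawl.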

Critical: /jobs.js endpoint is blocked by robots.txt

Do not use the /jobs.js endpoint as it is explicitly disallowed in robots.txt. Use the standard HTML /jobs page instead and parse the content with BeautifulSoup.

Medium: Job URLs redirect to application pages

Direct job URLs (/jobs/{id}) redirect to application pages (/jobs/{id}/apps/new). The full job description is embedded in the HTML of the application page, so parse it from there.
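Because the final path is predictable, one option is to build the application-page URL up front and skip the redirect hop. This is a sketch assuming the /jobs/{id}/apps/new pattern described above holds for every listing:

```python
def application_url(job_url: str) -> str:
    """Build the application-page URL a CareerPlug job URL redirects to.

    Direct job URLs (/jobs/{id}) redirect to /jobs/{id}/apps/new, so
    requesting the final URL directly saves one round trip. requests
    follows the redirect automatically either way.
    """
    return job_url.rstrip("/") + "/apps/new"

print(application_url("https://bemobile.careerplug.com/jobs/12345"))
# https://bemobile.careerplug.com/jobs/12345/apps/new
```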

Low: Location format requires parsing

CareerPlug formats locations as '{State}-{City}-{Zip}' (e.g., 'NH-Keene-03431'). Split the string by hyphens to extract individual components.
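A small parser can take the state from the front and the ZIP from the back, so hyphenated city names survive intact. The 'Winston-Salem' case is an assumption about how such cities are encoded:

```python
def parse_location(raw: str) -> dict:
    """Split a CareerPlug '{State}-{City}-{Zip}' location string.

    City names can themselves contain hyphens, so take the state from
    the front, the ZIP from the back, and join whatever is left.
    """
    parts = raw.split("-")
    if len(parts) < 3:
        # Unexpected format; return the raw string rather than guess
        return {"state": None, "city": raw, "zip": None}
    return {
        "state": parts[0],
        "city": "-".join(parts[1:-1]),
        "zip": parts[-1],
    }

print(parse_location("NH-Keene-03431"))
# {'state': 'NH', 'city': 'Keene', 'zip': '03431'}
```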

Medium: Different HTML layouts per company

CareerPlug supports multiple themes. Use fallback selectors like '#job_table, #job-list' for containers and multiple description selectors to handle variations.

Medium: No JSON API available

CareerPlug has no REST or GraphQL API. All job data must be extracted via HTML parsing. The /jobs.js endpoint returns JavaScript code, not JSON.

Best practices
  1. Use the HTML /jobs endpoint instead of the blocked /jobs.js endpoint
  2. Implement fallback selectors to handle different CareerPlug themes (#job_table, #job-list)
  3. Add 1-2 second delays between requests to be respectful
  4. Parse the location format '{State}-{City}-{Zip}' into structured data
  5. Handle URL redirects when fetching individual job details
  6. Cache results, as job boards typically update daily
  7. Use S3 sitemaps for company discovery if needed
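The caching practice above can be sketched as a thin file-based layer around the Step 4 fetcher. The 24-hour TTL and cache directory name are arbitrary choices:

```python
import json
import time
from pathlib import Path

CACHE_DIR = Path(".careerplug_cache")
CACHE_TTL = 24 * 3600  # seconds; job boards typically update daily

def cached_jobs(company_slug: str, fetch_fn, ttl: int = CACHE_TTL) -> list:
    """Return cached jobs for a company, refetching once per TTL window.

    fetch_fn is any callable taking a slug and returning a list of job
    dicts, e.g. the fetch_all_jobs() function from Step 4.
    """
    CACHE_DIR.mkdir(exist_ok=True)
    cache_file = CACHE_DIR / f"{company_slug}.json"

    # Serve from cache while the file is younger than the TTL
    if cache_file.exists() and time.time() - cache_file.stat().st_mtime < ttl:
        return json.loads(cache_file.read_text())

    jobs = fetch_fn(company_slug)
    cache_file.write_text(json.dumps(jobs))
    return jobs
```

With this in place, repeated runs within a day hit the disk cache instead of re-crawling every company page.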
Or skip the complexity

One endpoint. All CareerPlug jobs. No scraping, no sessions, no maintenance.

Get API access
cURL
curl "https://enterprise.jobo.world/api/jobs?sources=careerplug" \
  -H "X-Api-Key: YOUR_KEY"
Ready to integrate

Access CareerPlug job data today.

One API call. Structured data. No scraping infrastructure to build or maintain — start with the free tier and scale as you grow.

99.9% API uptime
<200ms Avg response
50M+ Jobs processed