All platforms

Jobvite Jobs API.

End-to-end talent acquisition suite used by mid-market and enterprise companies.

Jobvite
Live
150K+ jobs indexed monthly
<3h average discovery time
1h refresh interval
Companies using Jobvite
Nutanix, Schneider Electric, Logitech, Zillow, Ingram Micro
Developer tools

Try the API.

Test Jobs, Feed, and Auto-Apply endpoints against https://connect.jobo.world with live request/response examples, then copy ready-to-use curl commands.

What's in every response.

Data fields, real-world applications, and the companies already running on Jobvite.

Data fields
  • Mid-market coverage
  • Detailed listings
  • Company data
  • Requirements info
  • Application details
Use cases
  1. Enterprise job monitoring
  2. Mid-market company tracking
  3. Competitive talent intelligence
  4. Career page aggregation
Trusted by
Nutanix, Schneider Electric, Logitech, Zillow, Ingram Micro
DIY GUIDE

How to scrape Jobvite.

Step-by-step guide to extracting jobs from Jobvite-powered career pages—endpoints, authentication, and working code.

Format: HTML
Difficulty: intermediate
Rate limit: no official limit; be respectful with request frequency and add delays
Auth: none required

Fetch the job listings page

Request the company's main job listings page from Jobvite. All job data is server-side rendered HTML with no JSON API available.

Step 1: Fetch the job listings page
import requests
from bs4 import BeautifulSoup

company_slug = "nutanix"
url = f"https://jobs.jobvite.com/{company_slug}"

response = requests.get(url, timeout=10)
response.raise_for_status()

soup = BeautifulSoup(response.text, "html.parser")
print(f"Page loaded: {len(response.text)} bytes")

Extract job links from the listings page

Parse the HTML to find all job links. Jobs are organized by category with links following the pattern /{company}/job/{jobId} where jobId is a 9-character alphanumeric string.

Step 2: Extract job links from the listings page
import re

# Find all job links on the page
job_links = soup.find_all("a", href=re.compile(r"/job/[a-z0-9]{9}"))

jobs = []
for link in job_links:
    href = link.get("href", "")
    # Extract job ID from URL like "/nutanix/job/ow6lzfwg"
    match = re.search(r"/job/([a-z0-9]{9})", href)
    if match:
        job_id = match.group(1)
        jobs.append({
            "id": job_id,
            "title": link.get_text(strip=True),
            "url": f"https://jobs.jobvite.com{href}"
        })

print(f"Found {len(jobs)} jobs")
for job in jobs[:5]:
    print(f"  - {job['title']} ({job['id']})")

Handle pagination for large job boards

Companies with many jobs use paginated search results. Navigate through pages until no more jobs are found.

Step 3: Handle pagination for large job boards
import time

def get_all_jobs(company_slug: str) -> list:
    base_url = f"https://jobs.jobvite.com/{company_slug}/search"
    all_jobs = []
    page = 0

    while True:
        url = f"{base_url}/?p={page}"
        response = requests.get(url, timeout=10)
        response.raise_for_status()
        soup = BeautifulSoup(response.text, "html.parser")

        # Find job links on this page
        job_links = soup.find_all("a", href=re.compile(r"/job/[a-z0-9]{9}"))

        if not job_links:
            break

        for link in job_links:
            href = link.get("href", "")
            match = re.search(r"/job/([a-z0-9]{9})", href)
            if match:
                all_jobs.append({
                    "id": match.group(1),
                    "title": link.get_text(strip=True),
                    "url": f"https://jobs.jobvite.com{href}"
                })

        print(f"Page {page}: found {len(job_links)} jobs")
        page += 1
        time.sleep(0.5)  # Be respectful

    return all_jobs

Extract job details from individual pages

Visit each job detail page to get the full description, category, location, and requisition number using the consistent HTML structure.

Step 4: Extract job details from individual pages
def get_job_details(company_slug: str, job_id: str) -> dict:
    url = f"https://jobs.jobvite.com/{company_slug}/job/{job_id}"
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")

    # Extract structured data from known CSS classes
    title_elem = soup.find("h2", class_="jv-header")
    meta_elem = soup.find("p", class_="jv-job-detail-meta")
    desc_elem = soup.find("div", class_="jv-job-detail-description")

    job = {
        "id": job_id,
        "url": url,
        "title": title_elem.get_text(strip=True) if title_elem else None,
        "description_html": str(desc_elem) if desc_elem else None,
    }

    # Parse metadata (category, location, req number)
    if meta_elem:
        meta_text = meta_elem.get_text(separator="|", strip=True)
        parts = [p.strip() for p in meta_text.split("|") if p.strip()]
        if len(parts) >= 1:
            job["category"] = parts[0]
        if len(parts) >= 2:
            job["location"] = parts[1]
        if len(parts) >= 3:
            job["requisition_number"] = parts[2]

    return job

# Example usage
job = get_job_details("nutanix", "ow6lzfwg")
print(f"Title: {job['title']}")
print(f"Location: {job.get('location')}")

Use the facets API for filter options

While Jobvite has no job listings API, the facets endpoint provides filter options like locations, categories, and departments for building search interfaces.

Step 5: Use the facets API for filter options
def get_facets(company_slug: str, location: str = None) -> dict:
    url = f"https://jobs.jobvite.com/{company_slug}/search/facets"
    params = {"nl": 1}

    if location:
        params["l"] = location

    response = requests.get(url, params=params, timeout=10)
    response.raise_for_status()

    data = response.json()
    return data.get("facets", {})

# Get all available filters
facets = get_facets("nutanix")
print("Available locations:", len(facets.get("locations", [])))
print("Available categories:", len(facets.get("categories", [])))
print("Available departments:", len(facets.get("departments", [])))
Common issues
Critical: No JSON API available for job listings

Jobvite uses server-side HTML rendering only. You must parse HTML responses using BeautifulSoup or similar libraries. The /search/facets endpoint provides filter options but not job data.

Medium: Custom domains have different URL patterns

Some companies use custom domains (e.g., careers.company.com) instead of jobs.jobvite.com. Check the actual career page URL and adapt your scraper accordingly. The HTML structure is usually consistent.
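One way to handle this is to check whether a career page URL is Jobvite-hosted before parsing. The sketch below is illustrative; the function name and the assumption that the company slug is the first path segment are mine, not part of Jobvite:

```python
from urllib.parse import urlparse

def extract_jobvite_slug(career_url: str):
    """Return the company slug if the URL is hosted on jobs.jobvite.com,
    else None (custom domains need their own handling)."""
    parsed = urlparse(career_url)
    if parsed.netloc != "jobs.jobvite.com":
        return None
    # Assumes the slug is the first path segment, e.g. /nutanix/search
    parts = [p for p in parsed.path.split("/") if p]
    return parts[0] if parts else None

print(extract_jobvite_slug("https://jobs.jobvite.com/nutanix/search"))  # nutanix
print(extract_jobvite_slug("https://careers.example.com/jobs"))         # None
```

If the slug comes back as None, inspect the custom domain's HTML manually before reusing the selectors from the steps above.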

Low: Featured jobs appear at the top of listings

Featured jobs are included in the main listing but appear first. Deduplicate by job ID if you need a clean list, or track featured status separately.
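The dedup-by-ID suggestion can be sketched as follows; the sample job dicts are illustrative, matching the shape built in Step 2:

```python
def dedupe_jobs(jobs: list) -> list:
    """Drop duplicate entries by job ID, keeping the first occurrence
    (featured copies appear first in Jobvite listings)."""
    seen = set()
    unique = []
    for job in jobs:
        if job["id"] not in seen:
            seen.add(job["id"])
            unique.append(job)
    return unique

sample = [
    {"id": "a1b2c3d4e", "title": "Engineer"},  # featured copy
    {"id": "f5g6h7i8j", "title": "Designer"},
    {"id": "a1b2c3d4e", "title": "Engineer"},  # duplicate in its category
]
print(len(dedupe_jobs(sample)))  # 2
```

Keeping the first occurrence also lets you record featured status by noting which IDs were seen before the first category header, if you track that separately.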

Low: Job IDs are case-sensitive and lowercase

Job IDs are always 9-character lowercase alphanumeric strings. Ensure your regex pattern matches only lowercase: [a-z0-9]{9}
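A small validation helper can catch malformed IDs before you build detail-page URLs from them (the helper name is mine; the pattern follows the guide's stated format):

```python
import re

# Lowercase-only, exactly 9 characters, per the format described above
JOB_ID_RE = re.compile(r"[a-z0-9]{9}")

def is_valid_job_id(job_id: str) -> bool:
    """Check a candidate ID against the lowercase 9-char pattern."""
    return JOB_ID_RE.fullmatch(job_id) is not None

print(is_valid_job_id("a1b2c3d4e"))  # True
print(is_valid_job_id("A1B2C3D4E"))  # False: uppercase
print(is_valid_job_id("short"))      # False: wrong length
```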

Medium: 'Show More' links require navigation

Categories with many jobs show a 'Show More' link that navigates to the search page. Use the paginated search endpoint (/search/?p={page}) to get all jobs in a category.

Best practices
  1. Use the paginated search endpoint for companies with 50+ jobs
  2. Add delays between requests (500ms-1s) to be respectful
  3. Cache job listings - they typically update daily at most
  4. Parse the job detail meta section for category, location, and requisition number
  5. Use the facets API to discover available filter options before scraping
  6. Handle custom domains by detecting the URL structure before parsing
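The caching practice above can be sketched as a simple file-based cache with a daily TTL. The cache directory, file naming, and TTL value are illustrative assumptions, not part of any Jobvite convention:

```python
import json
import time
from pathlib import Path

CACHE_TTL = 24 * 3600  # listings typically update daily at most

def load_cached_jobs(company_slug: str, cache_dir: str = ".cache"):
    """Return cached listings if fresher than CACHE_TTL, else None."""
    path = Path(cache_dir) / f"{company_slug}.json"
    if path.exists() and time.time() - path.stat().st_mtime < CACHE_TTL:
        return json.loads(path.read_text())
    return None

def save_cached_jobs(company_slug: str, jobs: list, cache_dir: str = ".cache"):
    """Write listings to a per-company JSON file."""
    path = Path(cache_dir) / f"{company_slug}.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(jobs))

# Usage: check the cache before re-scraping
save_cached_jobs("nutanix", [{"id": "a1b2c3d4e"}], cache_dir="/tmp/jv_cache")
print(load_cached_jobs("nutanix", cache_dir="/tmp/jv_cache"))
```

Checking `load_cached_jobs` before calling `get_all_jobs` from Step 3 avoids re-fetching a board that was scraped within the last day.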
Or skip the complexity

One endpoint. All Jobvite jobs. No scraping, no sessions, no maintenance.

Get API access
cURL
curl "https://enterprise.jobo.world/api/jobs?sources=jobvite" \
  -H "X-Api-Key: YOUR_KEY"
Ready to integrate

Access Jobvite job data today.

One API call. Structured data. No scraping infrastructure to build or maintain — start with the free tier and scale as you grow.

99.9% API uptime
<200ms avg response
50M+ jobs processed