
iCIMS Jobs API.

Enterprise talent acquisition platform serving large organizations across industries.

iCIMS · Live

350K+ jobs indexed monthly
<3h average discovery time
1h refresh interval

Companies using iCIMS: UPS, Target, Lowe's, Comcast, CVS Health
Developer tools

Try the API.

Test Jobs, Feed, and Auto-Apply endpoints against https://connect.jobo.world with live request/response examples, then copy ready-to-use curl commands.

What's in every response.

Data fields, real-world applications, and the companies already running on iCIMS.

Data fields
  • Multi-industry coverage
  • Enterprise job data
  • Detailed requirements
  • Location data
  • Job category classification
  • Sitemap discovery
Use cases
  1. Enterprise job boards
  2. Industry-specific aggregation
  3. Large employer tracking
Trusted by
UPS, Target, Lowe's, Comcast, CVS Health, Bridge Core
DIY GUIDE

How to scrape iCIMS.

Step-by-step guide to extracting jobs from iCIMS-powered career pages—endpoints, authentication, and working code.

HTML · advanced · 1 request per 2-3 seconds recommended · No auth

Discover jobs via sitemap.xml

The most efficient way to discover all iCIMS jobs is through the sitemap.xml file. It contains all job URLs with lastmod timestamps in a single request.

Step 1: Discover jobs via sitemap.xml
import requests
import xml.etree.ElementTree as ET

company = "careers-bcore"
sitemap_url = f"https://{company}.icims.com/sitemap.xml"

response = requests.get(sitemap_url, timeout=30)
response.raise_for_status()

root = ET.fromstring(response.content)
namespaces = {'ns': 'http://www.sitemaps.org/schemas/sitemap/0.9'}

job_urls = []
for url in root.findall('ns:url', namespaces):
    loc = url.find('ns:loc', namespaces)
    lastmod = url.find('ns:lastmod', namespaces)
    if loc is not None and loc.text and '/jobs/' in loc.text:
        job_urls.append({
            'url': loc.text,
            'lastmod': lastmod.text if lastmod is not None else None
        })

print(f"Found {len(job_urls)} job URLs in sitemap")

Parse job listings from HTML

If the sitemap is unavailable, parse the paginated job listings page. Use the in_iframe=1 parameter to get cleaner HTML without the wrapper page.

Step 2: Parse job listings from HTML
import requests
from bs4 import BeautifulSoup
import re

company = "careers-bcore"
listings_url = f"https://{company}.icims.com/jobs/search?in_iframe=1"

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36',
    'Accept': 'text/html',
}

response = requests.get(listings_url, headers=headers, timeout=30)
soup = BeautifulSoup(response.content, 'html.parser')

jobs = []
for link in soup.select('a.iCIMS_Anchor[href*="/jobs/"]'):
    href = link.get('href', '')
    title = link.get('title') or link.get_text(strip=True)

    # Extract job ID from URL pattern /jobs/{id}/
    job_id_match = re.search(r'/jobs/(\d+)/', href)
    job_id = job_id_match.group(1) if job_id_match else None

    if job_id and title:
        jobs.append({
            'id': job_id,
            'title': title,
            'url': href
        })

print(f"Found {len(jobs)} jobs on listing page")

Fetch and parse job details

Each job has a detail page with a full description. Add the in_iframe=1 parameter for cleaner HTML. Note: iCIMS does NOT provide JSON-LD data, so you must parse the HTML directly.

Step 3: Fetch and parse job details
import requests
from bs4 import BeautifulSoup

job_url = "https://careers-bcore.icims.com/jobs/2931/applications-developer/job"
detail_url = f"{job_url}?in_iframe=1"

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36',
}

response = requests.get(detail_url, headers=headers, timeout=30)
soup = BeautifulSoup(response.content, 'html.parser')

# Extract job details - iCIMS uses specific field labels
job_details = {
    'title': soup.find('h1').get_text(strip=True) if soup.find('h1') else None,
    'location': None,
    'job_id': None,
    'job_type': None,
    'description': None,
    'apply_url': None,
}

# Parse field labels (iCIMS uses text labels like "Job Locations:", "Job ID:", "Type:")
for element in soup.find_all(string=True):
    text = element.strip()
    if text.startswith('Job Locations'):
        next_text = element.find_next(string=True)
        if next_text:
            job_details['location'] = next_text.strip()
    elif text.startswith('Job ID'):
        next_text = element.find_next(string=True)
        if next_text:
            job_details['job_id'] = next_text.strip()
    elif text == 'Type':
        next_text = element.find_next(string=True)
        if next_text:
            job_details['job_type'] = next_text.strip()

# Extract full description from expandable containers
description_container = soup.select_one('.iCIMS_Expandable_Container')
if description_container:
    job_details['description'] = description_container.get_text(separator='\n', strip=True)

# Get apply URL
apply_link = soup.select_one('a[href*="mode=apply"]')
job_details['apply_url'] = apply_link.get('href') if apply_link else None

print(job_details)

Handle pagination for HTML scraping

When scraping HTML instead of using the sitemap, handle pagination with the 'pr' (page result) parameter. Pages start at 0.

Step 4: Handle pagination for HTML scraping
import requests
from bs4 import BeautifulSoup
import re
import time

company = "careers-bcore"
all_jobs = []
page = 0

while True:
    url = f"https://{company}.icims.com/jobs/search?pr={page}&in_iframe=1"

    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36',
    }

    response = requests.get(url, headers=headers, timeout=30)
    soup = BeautifulSoup(response.content, 'html.parser')

    # Find job links on this page
    job_links = soup.select('a.iCIMS_Anchor[href*="/jobs/"]')

    if not job_links:
        break

    for link in job_links:
        href = link.get('href', '')
        title = link.get('title') or link.get_text(strip=True)
        job_id_match = re.search(r'/jobs/(\d+)/', href)

        if job_id_match:
            all_jobs.append({
                'id': job_id_match.group(1),
                'title': title,
                'url': href
            })

    print(f"Page {page}: {len(job_links)} jobs (total: {len(all_jobs)})")
    page += 1

    # Be respectful - add delay between requests
    time.sleep(2)

    # Safety limit
    if page > 100:
        break

print(f"Total jobs found: {len(all_jobs)}")

Check robots.txt for allowed paths

iCIMS robots.txt specifies which paths are disallowed. Always check it before scraping to identify the sitemap location and any restrictions.

Step 5: Check robots.txt for allowed paths
import requests

company = "careers-bcore"
robots_url = f"https://{company}.icims.com/robots.txt"

response = requests.get(robots_url, timeout=30)
robots_content = response.text

print("Robots.txt content:")
print(robots_content)

# Parse sitemap URL from robots.txt
sitemap_url = None
for line in robots_content.split('\n'):
    if line.lower().startswith('sitemap:'):
        sitemap_url = line.split(':', 1)[1].strip()
        print(f"\nSitemap found: {sitemap_url}")
        break

# Note disallowed paths
disallowed = []
for line in robots_content.split('\n'):
    if line.lower().startswith('disallow:'):
        path = line.split(':', 1)[1].strip()
        if path:
            disallowed.append(path)

print(f"\nDisallowed paths: {disallowed}")
Common issues
critical: No JSON API or embedded data available

iCIMS is a traditional server-side rendered application with no public JSON API. Job detail pages do NOT contain JSON-LD or window.__INITIAL_STATE__. You must parse HTML directly using BeautifulSoup.

high: Company subdomain varies between deployments

iCIMS URLs use patterns like careers-{company}.icims.com, careers.{company}.icims.com, or custom domains. Detect the correct URL format by checking redirects or inspecting the company's careers page.
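One way to narrow this down is to probe the common hostname patterns in order. A minimal sketch, where `portal_candidates` and `resolve_portal` are hypothetical helpers and the `probe` callable stands in for a real HTTP check:

```python
def portal_candidates(company_slug):
    # Common iCIMS hostname patterns, in rough order of frequency (assumption)
    return [
        f"https://careers-{company_slug}.icims.com",
        f"https://careers.{company_slug}.icims.com",
        f"https://{company_slug}.icims.com",
    ]

def resolve_portal(company_slug, probe):
    # `probe` is any callable returning True for a reachable portal, e.g.
    # lambda url: requests.head(url, timeout=10, allow_redirects=True).ok
    for url in portal_candidates(company_slug):
        if probe(url):
            return url
    return None

# Demo with a stand-in probe; swap in a real HTTP check in practice
live = {"https://careers-bcore.icims.com"}
print(resolve_portal("bcore", lambda u: u in live))
```

Injecting the probe keeps the pattern logic testable without network access; custom domains still need manual inspection of the company's careers page.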

medium: Iframe wrapper requires in_iframe=1 parameter

The main careers page loads content in an iframe. Always add ?in_iframe=1 to URLs to get just the job content HTML without the wrapper.

high: Cloudflare or bot detection blocks requests

Use proper headers (User-Agent, Accept), add delays between requests (2-3 seconds), and consider residential proxies for large-scale scraping. Some iCIMS deployments use Cloudflare protection.
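Beyond a fixed delay, backing off on throttling responses helps avoid blocks. A sketch under the assumption that retrying on 429/5xx is acceptable for your use case; `polite_get` and `FakeResp` are hypothetical, and the injected `get` callable would normally be a preconfigured `requests.Session().get`:

```python
import time

def polite_get(url, get, max_retries=3, base_delay=2.0):
    # Retry with exponential backoff on throttling or server errors.
    # `get` is any callable returning an object with .status_code.
    resp = None
    for attempt in range(max_retries):
        resp = get(url)
        if resp.status_code in (429, 500, 502, 503):
            time.sleep(base_delay * (2 ** attempt))
            continue
        return resp
    return resp  # last response if all retries were exhausted

class FakeResp:
    def __init__(self, code):
        self.status_code = code

# Demo: first call throttled, second succeeds (short delay for illustration)
responses = iter([FakeResp(429), FakeResp(200)])
result = polite_get("https://careers-bcore.icims.com/jobs/search",
                    lambda u: next(responses), base_delay=0.1)
print(result.status_code)  # 200
```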

medium: HTML structure varies by company portal

CSS class names use iCIMS_ prefix but structure can vary. Use multiple selector fallbacks and test against your specific target company's portal.
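The fallback idea can be sketched as a small helper that tries selectors in priority order. The `first_match` helper and the selector list are illustrative assumptions; verify both against your target portal's markup:

```python
from bs4 import BeautifulSoup

def first_match(soup, selectors):
    # Return the first element matched by any selector, in priority order
    for sel in selectors:
        el = soup.select_one(sel)
        if el is not None:
            return el
    return None

# Minimal sample markup standing in for a real listings page
html = '<div class="iCIMS_JobsTable"><a class="iCIMS_Anchor" href="/jobs/2931/x/job">Dev</a></div>'
soup = BeautifulSoup(html, "html.parser")

link = first_match(soup, [
    'a.iCIMS_Anchor[href*="/jobs/"]',  # preferred: iCIMS-prefixed class
    'a[href*="/jobs/"]',               # fallback: any job link
])
print(link.get_text())  # Dev
```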

low: Mobile redirect based on user agent

Use a standard desktop User-Agent string to avoid being redirected to the mobile version which may have different HTML structure.

Best practices
  1. Use sitemap.xml for job discovery - it's the most efficient method
  2. Always add ?in_iframe=1 parameter for cleaner HTML responses
  3. Rate limit to 1 request per 2-3 seconds to avoid bot detection
  4. Use desktop User-Agent to avoid mobile redirect issues
  5. Check robots.txt first to find the sitemap URL and respect disallowed paths
  6. Cache sitemap results - job boards typically update daily
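The caching advice above amounts to diffing lastmod timestamps against the previous run, so only new or updated jobs are re-fetched. A sketch with a hypothetical `changed_urls` helper and made-up sample data:

```python
def changed_urls(entries, cache):
    # Keep entries whose lastmod is missing from or differs from the cache
    return [e for e in entries if cache.get(e["url"]) != e["lastmod"]]

# cache maps job URL -> lastmod seen on the previous run (illustrative data)
cache = {"https://careers-bcore.icims.com/jobs/2931/a/job": "2024-05-01"}
entries = [
    {"url": "https://careers-bcore.icims.com/jobs/2931/a/job", "lastmod": "2024-05-01"},  # unchanged
    {"url": "https://careers-bcore.icims.com/jobs/3010/b/job", "lastmod": "2024-05-03"},  # new
]

fresh = changed_urls(entries, cache)
print(len(fresh))  # 1
```

After fetching, write the new lastmod values back to the cache so the next run only touches what changed.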
Or skip the complexity

One endpoint. All iCIMS jobs. No scraping, no sessions, no maintenance.

Get API access
cURL
curl "https://enterprise.jobo.world/api/jobs?sources=icims" \
  -H "X-Api-Key: YOUR_KEY"
Ready to integrate

Access iCIMS job data today.

One API call. Structured data. No scraping infrastructure to build or maintain — start with the free tier and scale as you grow.

99.9% API uptime
<200ms avg response
50M+ jobs processed