All platforms

Paylocity Jobs API.

Cloud-based HR and payroll platform with integrated recruiting for mid-market employers.

Paylocity
Live
50K+ jobs indexed monthly
<3h average discovery time
1h refresh interval
Companies using Paylocity
St Michael The Archangel High School
Mid-market employers using Paylocity Recruiting
Developer tools

Try the API.

Test Jobs, Feed, and Auto-Apply endpoints against https://connect.jobo.world with live request/response examples, then copy ready-to-use curl commands.

What's in every response.

Data fields, real-world applications, and the companies already running on Paylocity.

Data fields
  • HR and payroll integration
  • Embedded JSON job data
  • Server-side rendering
  • Applicant tracking
  • Multi-location support
  • GUID-based company identification
Use cases
  1. Mid-market employer tracking
  2. HR platform job monitoring
  3. Payroll-integrated recruiting analysis
  4. Multi-location workforce monitoring
DIY GUIDE

How to scrape Paylocity.

Step-by-step guide to extracting jobs from Paylocity-powered career pages—endpoints, authentication, and working code.

Format: HTML
Difficulty: beginner
Rate limits: undocumented - use conservative delays (1-2 seconds between requests)
Auth: none

Understand the URL structure

Paylocity uses company-specific GUIDs in URLs. The listings page follows this pattern: /recruiting/jobs/All/{tenantId}/Available-Positions. The tenantId is a UUID that uniquely identifies each company.

Step 1: Understand the URL structure
import re

# Paylocity URL patterns
listing_url = "https://recruiting.paylocity.com/recruiting/jobs/All/b181f77f-0432-453f-b229-869d786bb46c/Available-Positions"
detail_url = "https://recruiting.paylocity.com/Recruiting/Jobs/Details/3898273"

# Extract tenant GUID from URL
guid_pattern = r'[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}'
guid_match = re.search(guid_pattern, listing_url)
tenant_guid = guid_match.group(0) if guid_match else None
print(f"Tenant GUID: {tenant_guid}")

Fetch the listings page and extract embedded JSON

Paylocity embeds complete job listings as JSON in window.pageData. This is the recommended approach as it provides structured data without HTML parsing. Use regex to extract the JSON from the script tag.

Step 2: Fetch the listings page and extract embedded JSON
import requests
import re
import json

tenant_guid = "b181f77f-0432-453f-b229-869d786bb46c"
listing_url = f"https://recruiting.paylocity.com/recruiting/jobs/All/{tenant_guid}/Available-Positions"

response = requests.get(listing_url, timeout=30)
html = response.text

# Extract window.pageData JSON from the HTML
pattern = r'window\.pageData\s*=\s*(\{.*?\});\s*</script>'
match = re.search(pattern, html, re.DOTALL)

if match:
    json_str = match.group(1)
    page_data = json.loads(json_str)
    jobs = page_data.get("Jobs", [])
    print(f"Found {len(jobs)} jobs in embedded JSON")
else:
    print("No embedded pageData found - may need HTML parsing fallback")

Parse job data from window.pageData

The embedded JSON contains structured job objects with JobId, JobTitle, LocationName, PublishedDate, and full JobLocation details including address, city, state, and country.

Step 3: Parse job data from window.pageData
import json

# Assuming page_data was extracted from the previous step
jobs = page_data.get("Jobs", [])

for job in jobs[:3]:  # Show first 3 jobs
    job_info = {
        "job_id": job.get("JobId"),
        "title": job.get("JobTitle"),
        "location": job.get("LocationName"),
        "is_remote": job.get("IsRemote", False),
        "published_date": job.get("PublishedDate"),
        "department": job.get("HiringDepartment"),
        "is_internal": job.get("IsInternal", False),
    }

    # Full location details
    location = job.get("JobLocation", {})
    if location:
        job_info["address"] = location.get("Address")
        job_info["city"] = location.get("City")
        job_info["state"] = location.get("State")
        job_info["zip"] = location.get("Zip")
        job_info["country"] = location.get("Country")

    print(json.dumps(job_info, indent=2))

Fetch individual job details

For complete job descriptions, fetch the detail page for each job. The details page is server-rendered HTML without embedded JSON, so you will need to parse the HTML content.

Step 4: Fetch individual job details
import requests
from bs4 import BeautifulSoup

job_id = 3898273
detail_url = f"https://recruiting.paylocity.com/Recruiting/Jobs/Details/{job_id}"

response = requests.get(detail_url, timeout=30)
soup = BeautifulSoup(response.text, 'html.parser')

# Extract job details from HTML
title = soup.find('h1')
description = soup.find('main') or soup.find(class_='description')

# Look for location link (Google Maps)
location_link = soup.find('a', href=lambda x: x and 'maps.google.com' in x)

# Look for apply button
apply_link = soup.find('a', href=lambda x: x and '/Recruiting/Jobs/Apply/' in x)

job_details = {
    "job_id": job_id,
    "title": title.get_text(strip=True) if title else None,
    "location": location_link.get_text(strip=True) if location_link else None,
    "apply_url": apply_link['href'] if apply_link else None,
    "description_length": len(description.get_text()) if description else 0,
}

print(job_details)

Handle rate limiting and errors

Paylocity does not document rate limits. Use conservative delays between requests (1-2 seconds) and implement proper error handling with retries for failed requests.

Step 5: Handle rate limiting and errors
import requests
import time
import json
import re

def fetch_paylocity_jobs(tenant_guid: str, max_retries: int = 3) -> list:
    listing_url = f"https://recruiting.paylocity.com/recruiting/jobs/All/{tenant_guid}/Available-Positions"

    for attempt in range(max_retries):
        try:
            response = requests.get(
                listing_url,
                timeout=30,
                headers={"User-Agent": "Mozilla/5.0 (compatible; JobBot/1.0)"}
            )
            response.raise_for_status()

            # Extract embedded JSON
            pattern = r'window\.pageData\s*=\s*(\{.*?\});\s*</script>'
            match = re.search(pattern, response.text, re.DOTALL)

            if match:
                page_data = json.loads(match.group(1))
                return page_data.get("Jobs", [])
            return []

        except requests.RequestException as e:
            print(f"Attempt {attempt + 1} failed: {e}")
            if attempt < max_retries - 1:
                time.sleep(2 ** attempt)  # Exponential backoff
            else:
                raise

    return []

# Usage with rate limiting
tenant_guids = ["b181f77f-0432-453f-b229-869d786bb46c"]
for guid in tenant_guids:
    jobs = fetch_paylocity_jobs(guid)
    print(f"Retrieved {len(jobs)} jobs for tenant {guid}")
    time.sleep(1.5)  # Conservative delay between requests
Common issues
High: Cannot find company GUID from job detail URL

Job detail URLs only contain numeric job IDs. You must discover listing URLs through search engines (site:recruiting.paylocity.com), third-party databases like TheirStack, or existing company records.

Medium: window.pageData JSON extraction fails

The regex pattern may vary across Paylocity versions. Try alternative patterns like window.pageData = ({.*?}); or fall back to HTML parsing using CSS selectors for job links.
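One way to make the extraction resilient is to try several candidate patterns in order before giving up. A minimal sketch, assuming the pattern list below (the variant patterns and the `extract_page_data` helper name are illustrative, not a documented Paylocity contract):

```python
import json
import re

# Candidate patterns, tried in order; later entries are looser fallbacks.
PATTERNS = [
    r'window\.pageData\s*=\s*(\{.*?\});\s*</script>',
    r'window\.pageData\s*=\s*(\{.*?\});',
]

def extract_page_data(html: str):
    """Return the parsed pageData dict, or None if no pattern matches valid JSON."""
    for pattern in PATTERNS:
        match = re.search(pattern, html, re.DOTALL)
        if match:
            try:
                return json.loads(match.group(1))
            except json.JSONDecodeError:
                continue  # matched text was not valid JSON; try the next pattern
    return None
```

If every pattern fails, fall back to HTML parsing with CSS selectors for job links, as described above.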

Medium: Job descriptions require separate detail page fetch

The embedded JSON only contains basic job metadata. For full descriptions, fetch each /Recruiting/Jobs/Details/{jobId} page and parse the HTML content.

Medium: Rate limiting or 429 errors

Paylocity rate limits are undocumented. Add 1-2 second delays between requests and implement exponential backoff for retries. Use a consistent session with cookies.

Low: No pagination - all jobs on single page

Paylocity shows all jobs on one page (up to 99+). No pagination handling is needed, but large job boards may take longer to render.

High: Invalid or expired tenant GUID

Tenant GUIDs may change if a company migrates or reconfigures their Paylocity account. Verify the GUID is current by checking if the listing URL returns a 200 status.
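A quick pre-flight check can catch stale GUIDs before a full crawl. A sketch, assuming a `check_tenant_guid` helper (hypothetical name) that validates the GUID format first, then confirms the listing URL responds with HTTP 200:

```python
import re
import requests

# Lowercase hex GUID in the 8-4-4-4-12 shape Paylocity uses in listing URLs
GUID_RE = re.compile(r'^[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}$')

def check_tenant_guid(tenant_guid: str) -> bool:
    """True if the GUID is well-formed and its listing page returns 200."""
    if not GUID_RE.match(tenant_guid.lower()):
        return False  # malformed GUID; skip the network round-trip entirely
    url = f"https://recruiting.paylocity.com/recruiting/jobs/All/{tenant_guid}/Available-Positions"
    try:
        response = requests.get(url, timeout=15)
        return response.status_code == 200
    except requests.RequestException:
        return False
```

Running this once per stored GUID before scraping avoids wasting retries on tenants that have migrated or reconfigured their account.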

Best practices
  1. Extract window.pageData JSON for structured job data instead of HTML parsing
  2. Cache company GUIDs - they are essential for accessing job listings
  3. Use 1-2 second delays between requests to avoid rate limiting
  4. Fetch detail pages only when you need full job descriptions
  5. Maintain session cookies across requests for consistent behavior
  6. Use search engines or TheirStack for company discovery - no public directory exists
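The delay and session-cookie practices above can be combined in one small helper. A sketch under those assumptions (`polite_session` and `crawl_tenants` are illustrative names, not part of any API):

```python
import time
import requests

def polite_session() -> requests.Session:
    """Session that reuses cookies and a stable User-Agent across requests."""
    session = requests.Session()
    session.headers.update({"User-Agent": "Mozilla/5.0 (compatible; JobBot/1.0)"})
    return session

def crawl_tenants(tenant_guids: list[str], delay: float = 1.5) -> dict:
    """Fetch each tenant's listing page with a conservative delay between requests."""
    session = polite_session()
    statuses = {}
    for guid in tenant_guids:
        url = f"https://recruiting.paylocity.com/recruiting/jobs/All/{guid}/Available-Positions"
        response = session.get(url, timeout=30)
        statuses[guid] = response.status_code
        time.sleep(delay)  # best practice 3: 1-2 second gap between requests
    return statuses
```

Because `requests.Session` persists cookies automatically, every tenant in the loop is fetched with consistent session state, which is what best practice 5 asks for.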
Or skip the complexity

One endpoint. All Paylocity jobs. No scraping, no sessions, no maintenance.

Get API access
cURL
curl "https://enterprise.jobo.world/api/jobs?sources=paylocity" \
  -H "X-Api-Key: YOUR_KEY"
Ready to integrate

Access Paylocity
job data today.

One API call. Structured data. No scraping infrastructure to build or maintain — start with the free tier and scale as you grow.

99.9% API uptime
<200ms Avg response
50M+ Jobs processed