
Pinpoint Jobs API.

Modern ATS with employer branding and candidate assessment tools, providing comprehensive job data via a simple JSON API.

Pinpoint · Live
25K+ jobs indexed monthly
<3h average discovery time
1h refresh interval
Companies using Pinpoint
Tripledot Studios · YNAB · SKIMS
Developer tools

Try the API.

Test Jobs, Feed, and Auto-Apply endpoints against https://connect.jobo.world with live request/response examples, then copy ready-to-use curl commands.

What's in every response.

Data fields, real-world applications, and the companies already running on Pinpoint.

Data fields
  • Employer branding
  • Candidate assessments
  • Analytics
  • Team collaboration
  • Workflow automation
  • Full descriptions in API
  • Compensation data
Use cases
  1. Gaming company job tracking
  2. Startup job monitoring
  3. Full description extraction
  4. Compensation data analysis
Trusted by
Tripledot Studios · YNAB · SKIMS
DIY GUIDE

How to scrape Pinpoint.

Step-by-step guide to extracting jobs from Pinpoint-powered career pages—endpoints, authentication, and working code.

REST · Beginner · No rate limiting observed · No auth

Fetch all job listings from the API

Use the postings.json endpoint to retrieve all active jobs with full descriptions in a single API call. No authentication is required.

Step 1: Fetch all job listings from the API
import requests

company_slug = "tripledotstudios"
url = f"https://{company_slug}.pinpointhq.com/postings.json"

headers = {
    "Accept": "application/json, text/javascript, */*; q=0.01",
    "X-Requested-With": "XMLHttpRequest"
}

response = requests.get(url, headers=headers)
data = response.json()

jobs = data.get("data", [])
print(f"Found {len(jobs)} active jobs")

Parse job details from the response

Extract the fields you need from each job object. The API returns full HTML descriptions, compensation data, department info, and location details.

Step 2: Parse job details from the response
for job in jobs:
    # Extract location from nested structure; nested keys can be
    # present but null (e.g. remote roles), so guard each level with "or {}"
    location = (job.get("job") or {}).get("location") or {}
    city = (location.get("city") or {}).get("name", "")
    province = (location.get("province") or {}).get("name", "")
    location_str = f"{city}, {province}" if city else province

    # Extract compensation if visible (the field itself may be null)
    comp = job.get("compensation") or {}
    salary = None
    if comp.get("visible"):
        salary = f"{comp.get('currency')} {comp.get('minimum')}-{comp.get('maximum')} {comp.get('frequency')}"

    print({
        "id": job["id"],
        "title": job["title"],
        "department": job.get("job", {}).get("department", {}).get("name"),
        "location": location_str,
        "workplace_type": job.get("workplace_type_text"),
        "employment_type": job.get("employment_type_text"),
        "url": job.get("url"),
        "salary": salary,
        "deadline": job.get("deadline_at"),
    })

Assemble the complete job description

Pinpoint separates description content into multiple HTML fields. Combine them to get the full job posting content.

Step 3: Assemble the complete job description
def build_full_description(job: dict) -> str:
    """Combine all description sections into full HTML."""
    sections = [
        job.get("description", ""),
        job.get("key_responsibilities", ""),
        job.get("skills_knowledge_expertise", ""),
        job.get("benefits", ""),
    ]
    return "\n".join(s for s in sections if s)

for job in jobs:
    full_description = build_full_description(job)
    print(f"{job['title']}: {len(full_description)} chars")

Handle errors and validate responses

Add error handling for network issues and invalid company slugs. The API returns an empty data array for companies without active postings.

Step 4: Handle errors and validate responses
import requests
from requests.exceptions import RequestException

def fetch_pinpoint_jobs(company_slug: str) -> list[dict]:
    url = f"https://{company_slug}.pinpointhq.com/postings.json"
    headers = {
        "Accept": "application/json",
        "X-Requested-With": "XMLHttpRequest"
    }

    try:
        response = requests.get(url, headers=headers, timeout=10)
        response.raise_for_status()
        data = response.json()
        return data.get("data", [])
    except RequestException as e:
        print(f"Error fetching jobs for {company_slug}: {e}")
        return []
    except ValueError as e:
        print(f"Invalid JSON response for {company_slug}: {e}")
        return []

# Usage
jobs = fetch_pinpoint_jobs("tripledotstudios")
print(f"Retrieved {len(jobs)} jobs")
Common issues
High: Company subdomain not found (404 error)

Verify the exact company slug from the careers page URL. Some companies may use different subdomain formats than expected.
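
One way to catch a bad slug before a full scrape is a quick probe of the feed URL. A minimal sketch, assuming the endpoint answers HEAD requests (fall back to a plain GET if it does not); a missing subdomain usually fails DNS resolution and surfaces as a connection error:

import requests

def slug_exists(company_slug: str) -> bool:
    """Probe the postings feed to confirm the subdomain is live."""
    url = f"https://{company_slug}.pinpointhq.com/postings.json"
    try:
        # HEAD avoids downloading the whole feed just to check the slug
        response = requests.head(url, timeout=10, allow_redirects=True)
        return response.status_code == 200
    except requests.exceptions.RequestException:
        # Covers DNS failures for non-existent subdomains, timeouts, etc.
        return False

print(slug_exists("tripledotstudios"))    # known-good slug from this guide
print(slug_exists("definitely-not-real"))  # hypothetical bad slug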

Low: Missing compensation data

Not all companies expose salary information. Check the compensation.visible field before accessing min/max values and handle null gracefully.
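
A null-safe variant of the salary formatting from Step 2; the field names match the response shown above, while the fallback of treating a lone minimum or maximum as a single-value range is an assumption:

def format_salary(job: dict) -> str | None:
    """Return a display string only when compensation is visible and usable."""
    comp = job.get("compensation") or {}
    if not comp.get("visible"):
        return None
    minimum, maximum = comp.get("minimum"), comp.get("maximum")
    if minimum is None and maximum is None:
        return None  # visible flag set but no values published
    # Assumption: fall back to a single-value range if one bound is missing
    low = minimum if minimum is not None else maximum
    high = maximum if maximum is not None else minimum
    return f"{comp.get('currency', '')} {low}-{high} {comp.get('frequency', '')}".strip()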

Low: Empty data array returned

The company may have no active job postings. This is valid: check whether the data array is empty rather than treating it as an error.
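
Reusing fetch_pinpoint_jobs from Step 4, the check is a one-liner; note that the fetcher above also returns an empty list on errors, so log inside it if you need to tell the two cases apart:

jobs = fetch_pinpoint_jobs("tripledotstudios")
if not jobs:
    # A normal state, not a failure: the company simply has no openings
    print("No active postings right now")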

Low: Description HTML contains relative URLs

When parsing description HTML, convert any relative URLs to absolute using the base URL of the job board.
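
A sketch of that conversion using BeautifulSoup (an extra dependency) and the standard library's urljoin, applied to the full_description built in Step 3; using the company's Pinpoint subdomain as the base URL is an assumption about where the assets live:

from urllib.parse import urljoin
from bs4 import BeautifulSoup  # pip install beautifulsoup4

def absolutize_links(html: str, base_url: str) -> str:
    """Rewrite relative href/src attributes against the job board's base URL."""
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup.find_all(href=True):
        # urljoin leaves already-absolute URLs unchanged
        tag["href"] = urljoin(base_url, tag["href"])
    for tag in soup.find_all(src=True):
        tag["src"] = urljoin(base_url, tag["src"])
    return str(soup)

base = "https://tripledotstudios.pinpointhq.com/"  # assumed base for assets
clean_html = absolutize_links(full_description, base)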

Medium: Location data structure varies

Location fields (city, province) may be null for remote positions. Always use .get() with defaults when accessing nested location objects.
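
The Step 2 loop uses this guard already; in isolation the pattern looks like the sketch below. Defaulting to "Remote" when both fields are empty is an assumption; the API does not signal it:

# The "or {}" fallback guards keys that are present but null (common for remote roles)
location = (job.get("job") or {}).get("location") or {}
city = (location.get("city") or {}).get("name", "")
province = (location.get("province") or {}).get("name", "")
location_str = ", ".join(part for part in (city, province) if part) or "Remote"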

Best practices
  1. Use the postings.json endpoint for simple, single-request job extraction
  2. Combine description, responsibilities, skills, and benefits fields for complete content
  3. Add cache-busting query parameters to avoid stale responses (see the sketch after this list)
  4. Always check compensation.visible before displaying salary data
  5. Handle nested location objects gracefully with null checks
  6. Use the sitemap.xml for discovering new job postings on a schedule (see the sketch after this list)
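
A combined sketch for the cache-busting and sitemap practices above. It assumes the board publishes a standard sitemap.xml at the subdomain root using the usual sitemap namespace; filtering the URL list down to individual postings is left to the caller, since the path structure isn't documented here:

import time
import requests
import xml.etree.ElementTree as ET

def discover_posting_urls(company_slug: str) -> list[str]:
    """Pull URLs from the board's sitemap.xml, e.g. on an hourly schedule."""
    base = f"https://{company_slug}.pinpointhq.com"
    # Cache-busting timestamp parameter so intermediaries don't serve stale XML
    response = requests.get(
        f"{base}/sitemap.xml", params={"_": int(time.time())}, timeout=10
    )
    response.raise_for_status()
    root = ET.fromstring(response.content)
    ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
    return [loc.text for loc in root.findall(".//sm:loc", ns) if loc.text]

urls = discover_posting_urls("tripledotstudios")
print(f"{len(urls)} URLs in sitemap")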
Or skip the complexity

One endpoint. All Pinpoint jobs. No scraping, no sessions, no maintenance.

Get API access
cURL
curl "https://enterprise.jobo.world/api/jobs?sources=pinpoint" \
  -H "X-Api-Key: YOUR_KEY"
Ready to integrate

Access Pinpoint job data today.

One API call. Structured data. No scraping infrastructure to build or maintain — start with the free tier and scale as you grow.

99.9% API uptime
<200ms avg response
50M+ jobs processed