All platforms

Teamtailor Jobs API.

Employer branding and ATS platform popular with European companies and modern startups.

Teamtailor
Live
80K+ jobs indexed monthly
<3h average discovery time
1h refresh interval
Companies using Teamtailor
Polestar · Klarna · Spotify · Skype · Truecaller
Developer tools

Try the API.

Test Jobs, Feed, and Auto-Apply endpoints against https://connect.jobo.world with live request/response examples, then copy ready-to-use curl commands.

What's in every response.

Data fields, real-world applications, and the companies already running on Teamtailor.

Data fields
  • Employer branding data
  • Culture information
  • Modern companies
  • European focus
  • Detailed listings
  • RSS feed support
  • Rich location data
Use cases
  1. European job market tracking
  2. Modern employer monitoring
  3. Startup ecosystem scraping
  4. Multi-location job aggregation
Trusted by
Polestar · Klarna · Spotify · Skype · Truecaller
DIY GUIDE

How to scrape Teamtailor.

Step-by-step guide to extracting jobs from Teamtailor-powered career pages—endpoints, authentication, and working code.

REST · beginner · ~60 requests/minute (be respectful, 5-minute cache observed) · No auth

Extract the company subdomain

Identify the company subdomain from the Teamtailor careers page URL. The subdomain is the first part before .teamtailor.com and is used to construct the RSS feed URL.

Step 1: Extract the company subdomain
from urllib.parse import urlparse

def extract_company_subdomain(url: str) -> str:
    """Extract the company subdomain from a Teamtailor URL."""
    parsed = urlparse(url)
    hostname = parsed.hostname or ""

    # Handle regional subdomains like .na.teamtailor.com
    if ".na.teamtailor.com" in hostname:
        return hostname.replace(".na.teamtailor.com", "")
    elif ".teamtailor.com" in hostname:
        return hostname.replace(".teamtailor.com", "")
    return hostname

# Examples
print(extract_company_subdomain("https://polestar.teamtailor.com/jobs"))  # polestar
print(extract_company_subdomain("https://klarna.teamtailor.com"))  # klarna

Fetch the RSS feed

Request the public RSS feed which contains all job listings with full descriptions, locations, and departments in a single request.

Step 2: Fetch the RSS feed
import requests

company = "polestar"
rss_url = f"https://{company}.teamtailor.com/jobs.rss"

headers = {
    "Accept": "application/rss+xml, application/xml, text/xml",
    "User-Agent": "JobScraper/1.0"
}

response = requests.get(rss_url, headers=headers, timeout=30)
response.raise_for_status()

rss_content = response.text

# Verify it's valid RSS
if "<rss" not in rss_content:
    print("Warning: RSS feed not available, may need HTML fallback")
else:
    print(f"Fetched {len(rss_content)} bytes of RSS data")

Parse the RSS XML feed

Parse the RSS 2.0 XML format to extract job details including title, description, URL, publication date, and remote status.

Step 3: Parse the RSS XML feed
import xml.etree.ElementTree as ET
from html import unescape
import re

def parse_rss_feed(rss_content: str) -> list[dict]:
    """Parse Teamtailor RSS feed and extract job listings."""
    root = ET.fromstring(rss_content)
    channel = root.find("channel")

    jobs = []
    for item in channel.findall("item"):
        # Extract basic fields
        title = item.findtext("title", "")
        description = item.findtext("description", "")
        link = item.findtext("link", "")
        pub_date = item.findtext("pubDate")

        # Custom Teamtailor fields
        remote_status = item.findtext("remoteStatus", "none")
        company_name = item.findtext("company_name", "")

        # Extract job ID from URL pattern: /jobs/{id}-{slug}
        id_match = re.search(r"/jobs/(\d+)", link)
        job_id = id_match.group(1) if id_match else None

        # Decode HTML entities in description
        description = unescape(description)

        jobs.append({
            "id": job_id,
            "title": title,
            "url": link,
            "description_html": description,
            "published_at": pub_date,
            "remote_status": remote_status,
            "company": company_name,
        })

    return jobs

jobs = parse_rss_feed(rss_content)
print(f"Found {len(jobs)} jobs")

Extract locations and departments from namespace

Teamtailor uses a custom XML namespace for extended location and department data. Extract structured location information including city, country, and full address.

Step 4: Extract locations and departments from namespace
def extract_namespaced_data(item: ET.Element) -> dict:
    """Extract location and department data from Teamtailor namespace."""
    TT_NS = "{https://teamtailor.com/locations}"

    # Extract department
    department_elem = item.find(f"{TT_NS}department")
    department = department_elem.text.strip() if department_elem is not None and department_elem.text else None

    # Extract locations
    locations = []
    locations_elem = item.find(f"{TT_NS}locations")
    if locations_elem is not None:
        for loc in locations_elem.findall(f"{TT_NS}location"):
            name_elem = loc.find(f"{TT_NS}name")
            city_elem = loc.find(f"{TT_NS}city")
            country_elem = loc.find(f"{TT_NS}country")

            if name_elem is not None and name_elem.text:
                locations.append(name_elem.text.strip())
            elif city_elem is not None or country_elem is not None:
                city = city_elem.text.strip() if city_elem is not None and city_elem.text else ""
                country = country_elem.text.strip() if country_elem is not None and country_elem.text else ""
                if city or country:
                    locations.append(f"{city}, {country}".strip(", "))

    return {
        "department": department,
        "locations": locations,
    }

# Usage in parsing loop (re-parse here so `channel` is in scope;
# it was local to parse_rss_feed in Step 3)
root = ET.fromstring(rss_content)
channel = root.find("channel")
for job in jobs:
    item = channel.find(f".//item[link='{job['url']}']")
    if item is not None:
        extra_data = extract_namespaced_data(item)
        job["department"] = extra_data["department"]
        job["locations"] = extra_data["locations"]

Handle rate limiting and errors

Add proper error handling and rate limiting to ensure reliable scraping. RSS feeds are typically cached, so delays between requests are recommended.

Step 5: Handle rate limiting and errors
import time
from typing import Optional

def fetch_teamtailor_jobs(company: str, max_retries: int = 3) -> Optional[list[dict]]:
    """Fetch all jobs from a Teamtailor company with error handling."""
    rss_url = f"https://{company}.teamtailor.com/jobs.rss"

    for attempt in range(max_retries):
        try:
            response = requests.get(
                rss_url,
                headers={"User-Agent": "JobScraper/1.0"},
                timeout=30
            )
            response.raise_for_status()

            if "<rss" in response.text:
                return parse_rss_feed(response.text)
            else:
                print(f"Invalid RSS format for {company}")
                return None

        except requests.HTTPError as e:
            if e.response.status_code == 404:
                print(f"Company '{company}' not found")
                return None
            elif e.response.status_code == 429:
                wait_time = 2 ** attempt
                print(f"Rate limited, waiting {wait_time}s...")
                time.sleep(wait_time)
            else:
                print(f"HTTP error: {e}")
                return None

        except requests.RequestException as e:
            print(f"Request failed: {e}")
            if attempt < max_retries - 1:
                time.sleep(1)

    return None

# Batch fetch multiple companies with rate limiting
companies = ["polestar", "klarna", "spotify"]
for company in companies:
    jobs = fetch_teamtailor_jobs(company)
    if jobs:
        print(f"{company}: {len(jobs)} jobs")
    time.sleep(0.5)  # Be respectful between requests
Common issues
medium · RSS feed returns HTML instead of XML

Some companies disable the RSS feed. Implement HTML parsing as a fallback by fetching /jobs and parsing the job listing cards from the HTML using BeautifulSoup.
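The fallback can also be sketched with the standard library alone, with no BeautifulSoup dependency. The sketch below matches on the `/jobs/{id}-{slug}` href pattern rather than theme-specific CSS classes, which vary between career sites; treat the exact markup as an assumption and verify it against the pages you target.

```python
import re
from html.parser import HTMLParser
from urllib.parse import urljoin


class TeamtailorJobParser(HTMLParser):
    """Collect links matching the /jobs/{id}-{slug} pattern from a careers page."""

    def __init__(self, base_url: str):
        super().__init__()
        self.base_url = base_url
        self.jobs = []
        self._seen = set()
        self._current = None  # job dict being built while inside an <a> tag

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href", "")
        match = re.search(r"/jobs/(\d+)", href)
        if match and match.group(1) not in self._seen:
            self._seen.add(match.group(1))
            self._current = {
                "id": match.group(1),
                "url": urljoin(self.base_url, href),
                "title": "",
            }

    def handle_data(self, data):
        if self._current is not None:
            self._current["title"] += data.strip()

    def handle_endtag(self, tag):
        if tag == "a" and self._current is not None:
            self.jobs.append(self._current)
            self._current = None


def parse_jobs_html(html: str, base_url: str) -> list[dict]:
    """Parse job links out of a Teamtailor /jobs page as an RSS fallback."""
    parser = TeamtailorJobParser(base_url)
    parser.feed(html)
    return parser.jobs
```

Feed it the body of a `GET https://{company}.teamtailor.com/jobs` response; deduplication on the numeric ID guards against the same job appearing in multiple page sections.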

critical · API endpoint /api/v1/jobs returns 404

The JSON API endpoint is not publicly accessible. Always use the RSS feed at /jobs.rss which provides all job data including full descriptions.

medium · Company subdomain not found (404 error)

Verify the company URL is correct. Some companies use custom domains that redirect to Teamtailor. Check the actual careers page URL and extract the subdomain from the redirect.
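One way to handle the redirect case is to fetch the custom careers URL, let the HTTP client follow redirects, and inspect the final hostname. This is a sketch under the assumption that the custom domain redirects (rather than reverse-proxies) to Teamtailor; the function and variable names are illustrative.

```python
from typing import Optional
from urllib.parse import urlparse

import requests


def subdomain_from_final_url(final_url: str) -> Optional[str]:
    """Extract the Teamtailor subdomain from a post-redirect URL, if any."""
    hostname = urlparse(final_url).hostname or ""
    if hostname.endswith(".teamtailor.com"):
        # Same suffix-stripping logic as Step 1, including regional hosts
        return hostname.replace(".na.teamtailor.com", "").replace(".teamtailor.com", "")
    return None  # not a Teamtailor host: the domain proxies rather than redirects


def resolve_teamtailor_subdomain(careers_url: str) -> Optional[str]:
    """Follow redirects from a custom careers domain and inspect the final host."""
    response = requests.get(
        careers_url,
        headers={"User-Agent": "JobScraper/1.0"},
        timeout=30,
        allow_redirects=True,  # requests follows redirects by default; explicit here
    )
    return subdomain_from_final_url(response.url)
```

If `resolve_teamtailor_subdomain` returns `None`, fall back to checking the page source or the careers site's RSS link manually.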

low · Missing location or department data

Location data uses the Teamtailor XML namespace (https://teamtailor.com/locations). Ensure your XML parser handles namespaces correctly by using the full namespace URI in queries.

low · Double-encoded HTML in descriptions

Teamtailor escapes HTML entities like &lt;p&gt; in RSS. Use Python's html.unescape() function to decode the HTML entities properly.

low · Regional subdomains not recognized

Some companies use regional URLs like .na.teamtailor.com (North America). Handle these by checking for both .na.teamtailor.com and .teamtailor.com patterns when extracting the company slug.

Best practices
  1. Always try the RSS feed first - it's the most reliable and complete data source
  2. Implement HTML parsing as a fallback for companies with disabled RSS feeds
  3. Use html.unescape() to decode HTML entities in description fields
  4. Add a 500ms-1s delay between requests to avoid rate limiting
  5. Cache results - job boards typically update daily at most
  6. Extract job IDs from the URL pattern /jobs/{id}-{slug} for deduplication
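Practices 5 and 6 combine naturally: keyed on the numeric job ID, a small "seen" store lets repeated polls surface only new listings. A minimal sketch, using an in-memory set where production code might persist IDs to a file or database:

```python
import re
from typing import Iterable


def dedupe_jobs(jobs: Iterable[dict], seen_ids: set) -> list[dict]:
    """Return only jobs not seen before, keyed on the numeric ID in the URL."""
    new_jobs = []
    for job in jobs:
        match = re.search(r"/jobs/(\d+)", job.get("url", ""))
        if not match:
            continue  # skip entries without the expected /jobs/{id}-{slug} URL
        job_id = match.group(1)
        if job_id not in seen_ids:
            seen_ids.add(job_id)
            new_jobs.append(job)
    return new_jobs


# First poll: everything is new; second poll: duplicates are dropped
seen: set = set()
batch = [
    {"url": "https://polestar.teamtailor.com/jobs/123-engineer"},
    {"url": "https://polestar.teamtailor.com/jobs/456-designer"},
]
print(len(dedupe_jobs(batch, seen)))  # 2
print(len(dedupe_jobs(batch, seen)))  # 0
```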
Or skip the complexity

One endpoint. All Teamtailor jobs. No scraping, no sessions, no maintenance.

Get API access
cURL
curl "https://enterprise.jobo.world/api/jobs?sources=teamtailor" \
  -H "X-Api-Key: YOUR_KEY"
Ready to integrate

Access Teamtailor job data today.

One API call. Structured data. No scraping infrastructure to build or maintain — start with the free tier and scale as you grow.

99.9% API uptime
<200ms avg response
50M+ jobs processed