All platforms

Kula Jobs API.

A recruiting platform designed for high-growth companies, with a comprehensive REST API that returns full job data in a single request.

Kula
Live
20K+ jobs indexed monthly
<3h average discovery time
1h refresh interval
Companies using Kula
WizCommerce · CleverTap
Developer tools

Try the API.

Test Jobs, Feed, and Auto-Apply endpoints against https://connect.jobo.world with live request/response examples, then copy ready-to-use curl commands.

What's in every response.

Data fields, real-world applications, and the companies already running on Kula.

Data fields
  • High-growth focus
  • Full descriptions in single API call
  • Structured location data
  • Workplace type indicators
  • Department information
  • No authentication required
Use cases
  1. High-growth company tracking
  2. Startup job monitoring
  3. Tech talent sourcing
  4. API-based job extraction
Trusted by
WizCommerce · CleverTap
DIY GUIDE

How to scrape Kula.

Step-by-step guide to extracting jobs from Kula-powered career pages—endpoints, authentication, and working code.

REST · Beginner · No rate limiting observed during testing · No auth

Extract the account name from the URL

Parse the company's Kula career page URL to extract the accountName parameter needed for API calls. All Kula job boards follow the pattern careers.kula.ai/{accountName}.

Step 1: Extract the account name from the URL
import re

def extract_account_name(url: str) -> str:
    """Extract account name from Kula career page URL."""
    pattern = r'careers\.kula\.ai/([^/]+)'
    match = re.search(pattern, url)
    if match:
        return match.group(1)
    raise ValueError(f"Invalid Kula URL: {url}")

# Example usage
url = "https://careers.kula.ai/wizcommerce"
account_name = extract_account_name(url)
print(f"Account name: {account_name}")  # Output: wizcommerce

Fetch all job listings from the API

Call the Kula internal API endpoint with the account name to retrieve all active jobs. The API returns complete job data including full HTML descriptions in a single request.

Step 2: Fetch all job listings from the API
import requests

def fetch_kula_jobs(account_name: str, page: int = 1, items: int = 99) -> dict:
    """Fetch jobs from Kula API."""
    url = "https://careers.kula.ai/api/internal/ats_job_posts"
    params = {
        "accountName": account_name,
        "page": page,
        "type": "ats_job_post.index",
        "items": items,
    }

    # A timeout prevents indefinite hangs if the endpoint stalls
    response = requests.get(url, params=params, timeout=30)
    response.raise_for_status()
    return response.json()

# Fetch jobs for a company
data = fetch_kula_jobs("wizcommerce")
print(f"Found {data['meta']['count']} jobs across {data['meta']['pages']} page(s)")

Parse job details from the response

Extract the relevant fields from each job object. The response includes nested data for department, office locations, employment type, and workplace arrangement.

Step 3: Parse job details from the response
def parse_job(job: dict, account_name: str) -> dict:
    """Parse a single job from Kula API response."""
    ats_job = job.get("ats_job", {})

    # Get primary location from offices array
    offices = ats_job.get("offices", [])
    location = offices[0].get("location", "") if offices else "Remote"

    return {
        "id": job.get("id"),
        "title": job.get("title"),
        "department": ats_job.get("ats_department", {}).get("name"),
        "location": location,
        "workplace_type": ats_job.get("workplace"),  # office, remote, hybrid
        "employment_type": ats_job.get("employment_type"),
        "description_html": ats_job.get("job_description"),
        "is_listed": job.get("listed", False),
        "is_confidential": job.get("is_confidential", False),
        "url": f"https://careers.kula.ai/{account_name}/{job.get('id')}/",
    }

# Parse all jobs
for job in data.get("data", []):
    parsed = parse_job(job, "wizcommerce")
    print(f"{parsed['title']} - {parsed['location']}")

Handle pagination

Check the meta.pages field to determine if additional pages exist. Iterate through all pages to collect the complete job list.

Step 4: Handle pagination
def fetch_all_kula_jobs(account_name: str) -> list:
    """Fetch all jobs across all pages."""
    all_jobs = []
    page = 1

    while True:
        data = fetch_kula_jobs(account_name, page=page)
        jobs = data.get("data", [])
        all_jobs.extend(jobs)

        meta = data.get("meta", {})
        if page >= meta.get("pages", 1):
            break
        page += 1

    return all_jobs

# Fetch all jobs
all_jobs = fetch_all_kula_jobs("wizcommerce")
print(f"Total jobs fetched: {len(all_jobs)}")

Filter active and public jobs

Filter the job list to only include publicly listed, non-confidential positions. This ensures you only process jobs that are meant to be visible to external candidates.

Step 5: Filter active and public jobs
def filter_active_jobs(jobs: list) -> list:
    """Filter to only active, public jobs."""
    return [
        job for job in jobs
        if job.get("listed") is True and job.get("is_confidential") is False
    ]

# Filter jobs
all_jobs = fetch_all_kula_jobs("wizcommerce")
active_jobs = filter_active_jobs(all_jobs)
print(f"Active public jobs: {len(active_jobs)}")
Common issues
High: Account name not found (404 error)

Verify the account name matches the URL path exactly. The account name is case-sensitive and must match the path in careers.kula.ai/{accountName}.
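
To catch this before a full run, a minimal probe can reuse fetch_kula_jobs from Step 2 and treat an HTTP 404 as a missing or misspelled account name (a sketch; how the API signals other failure modes is untested).

import requests

def account_exists(account_name: str) -> bool:
    """Probe the API with a one-item request; a 404 suggests a bad account name."""
    try:
        fetch_kula_jobs(account_name, page=1, items=1)
        return True
    except requests.HTTPError as exc:
        if exc.response is not None and exc.response.status_code == 404:
            return False
        raise  # other HTTP errors are unrelated to the account name

# Catch typos before a full extraction run
if not account_exists("wizcommerce"):
    print("Check the account name casing in the careers.kula.ai URL")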

Medium: API returns empty data array

The company may have no active job listings, or the account name may be incorrect. Check the meta.count field and verify the company's career page URL (the validation sketch under Best practices codifies this check).

Low: Missing job description in response

Some jobs may have empty job_description fields. Always check for null/empty values and handle gracefully in your parsing logic.
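
A small guard keeps downstream parsing safe when the field is null or absent (a hypothetical helper, building on the parse logic from Step 3):

def safe_description(ats_job: dict) -> str:
    """Return the HTML description, or an empty string when it is null or absent."""
    description = ats_job.get("job_description")
    return description if isinstance(description, str) else ""

# Skip jobs with no usable description, if your pipeline needs one
jobs_with_text = [j for j in active_jobs if safe_description(j.get("ats_job", {}))]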

High: Internal API endpoint changes

The /api/internal/ats_job_posts endpoint is undocumented and may change. Monitor for changes and implement error handling for unexpected response structures.
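
One defensive pattern is to validate the response shape before trusting it. This sketch checks for the top-level keys seen in the responses above and fails loudly on anything else:

def fetch_kula_jobs_checked(account_name: str, page: int = 1) -> dict:
    """Fetch one page and raise if the response no longer looks as expected."""
    data = fetch_kula_jobs(account_name, page=page)
    if not isinstance(data.get("data"), list) or not isinstance(data.get("meta"), dict):
        raise RuntimeError(
            f"Unexpected Kula response structure for {account_name!r}: "
            f"top-level keys = {sorted(data)}"
        )
    return data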

Low: Unicode encoding in descriptions

Responses use UTF-8 with Unicode escape sequences (e.g., \u003c for <). Use proper JSON parsing which handles this automatically.
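
For example, json.loads decodes the escape sequences transparently, and requests' response.json() behaves the same way:

import json

raw = '{"job_description": "\\u003cp\\u003eSenior Engineer\\u003c/p\\u003e"}'
decoded = json.loads(raw)
print(decoded["job_description"])  # <p>Senior Engineer</p>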

Best practices
  1. Use the API endpoint instead of HTML scraping for reliable data extraction
  2. Request 99 items per page to minimize pagination requests
  3. Filter out jobs where listed is false or is_confidential is true
  4. Handle multiple office locations by iterating the offices array (see the sketch below)
  5. Cache results, as job boards typically update daily
  6. Validate the account name by checking meta.count > 0 before a full extraction (see the sketch below)
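
Practices 4 and 6 are straightforward to codify. A sketch reusing the helpers from Steps 2 and 4 (the "Remote" fallback mirrors Step 3 and is an assumption about how office-less jobs should be labeled):

def all_locations(job: dict) -> list:
    """Best practice 4: collect every office location, not just the first."""
    offices = job.get("ats_job", {}).get("offices", [])
    return [office.get("location", "") for office in offices] or ["Remote"]

def validate_and_fetch(account_name: str) -> list:
    """Best practice 6: check meta.count before a full paginated extraction."""
    first_page = fetch_kula_jobs(account_name, page=1, items=1)
    if first_page.get("meta", {}).get("count", 0) == 0:
        print(f"No listings for {account_name!r}: wrong name or no active jobs")
        return []
    return fetch_all_kula_jobs(account_name)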
Or skip the complexity

One endpoint. All Kula jobs. No scraping, no sessions, no maintenance.

Get API access
cURL
curl "https://enterprise.jobo.world/api/jobs?sources=kula" \
  -H "X-Api-Key: YOUR_KEY"
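
The equivalent request in Python. The endpoint, query parameter, and X-Api-Key header come from the curl example; the response schema is not shown here, so inspect it before wiring up parsing.

import requests

response = requests.get(
    "https://enterprise.jobo.world/api/jobs",
    params={"sources": "kula"},
    headers={"X-Api-Key": "YOUR_KEY"},  # replace with your actual key
    timeout=30,
)
response.raise_for_status()
jobs = response.json()  # inspect the schema before parsing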
Ready to integrate

Access Kula job data today.

One API call. Structured data. No scraping infrastructure to build or maintain — start with the free tier and scale as you grow.

99.9% API uptime
<200ms avg response
50M+ jobs processed