Kula Jobs API.
A recruiting platform for high-growth companies, backed by a comprehensive REST API that returns full job data in a single request.
Try the API.
Test Jobs, Feed, and Auto-Apply endpoints against https://connect.jobo.world with live request/response examples, then copy ready-to-use curl commands.
What's in every response.
Data fields, real-world applications, and the companies already running on Kula.
- High-growth focus
- Full descriptions in single API call
- Structured location data
- Workplace type indicators
- Department information
- No authentication required
1. High-growth company tracking
2. Startup job monitoring
3. Tech talent sourcing
4. API-based job extraction
How to scrape Kula.
Step-by-step guide to extracting jobs from Kula-powered career pages: endpoints, authentication, and working code.
import re

def extract_account_name(url: str) -> str:
    """Extract the account name from a Kula career page URL."""
    pattern = r'careers\.kula\.ai/([^/]+)'
    match = re.search(pattern, url)
    if match:
        return match.group(1)
    raise ValueError(f"Invalid Kula URL: {url}")

# Example usage
url = "https://careers.kula.ai/wizcommerce"
account_name = extract_account_name(url)
print(f"Account name: {account_name}")  # Output: wizcommerce

import requests
def fetch_kula_jobs(account_name: str, page: int = 1, items: int = 99) -> dict:
    """Fetch jobs from the Kula API."""
    url = "https://careers.kula.ai/api/internal/ats_job_posts"
    params = {
        "accountName": account_name,
        "page": page,
        "type": "ats_job_post.index",
        "items": items,
    }
    response = requests.get(url, params=params)
    response.raise_for_status()
    return response.json()

# Fetch jobs for a company
data = fetch_kula_jobs("wizcommerce")
print(f"Found {data['meta']['count']} jobs across {data['meta']['pages']} page(s)")

def parse_job(job: dict, account_name: str) -> dict:
"""Parse a single job from Kula API response."""
ats_job = job.get("ats_job", {})
# Get primary location from offices array
offices = ats_job.get("offices", [])
location = offices[0].get("location", "") if offices else "Remote"
return {
"id": job.get("id"),
"title": job.get("title"),
"department": ats_job.get("ats_department", {}).get("name"),
"location": location,
"workplace_type": ats_job.get("workplace"), # office, remote, hybrid
"employment_type": ats_job.get("employment_type"),
"description_html": ats_job.get("job_description"),
"is_listed": job.get("listed", False),
"is_confidential": job.get("is_confidential", False),
"url": f"https://careers.kula.ai/{account_name}/{job.get('id')}/",
}
# Parse all jobs
for job in data.get("data", []):
parsed = parse_job(job, "wizcommerce")
print(f"{parsed['title']} - {parsed['location']}")def fetch_all_kula_jobs(account_name: str) -> list:
"""Fetch all jobs across all pages."""
all_jobs = []
page = 1
while True:
data = fetch_kula_jobs(account_name, page=page)
jobs = data.get("data", [])
all_jobs.extend(jobs)
meta = data.get("meta", {})
if page >= meta.get("pages", 1):
break
page += 1
return all_jobs
# Fetch all jobs
all_jobs = fetch_all_kula_jobs("wizcommerce")
print(f"Total jobs fetched: {len(all_jobs)}")def filter_active_jobs(jobs: list) -> list:
"""Filter to only active, public jobs."""
return [
job for job in jobs
if job.get("listed") is True and job.get("is_confidential") is False
]
# Filter jobs
all_jobs = fetch_all_kula_jobs("wizcommerce")
active_jobs = filter_active_jobs(all_jobs)
print(f"Active public jobs: {len(active_jobs)}")Verify the account name matches the URL path exactly. The account name is case-sensitive and must match the path in careers.kula.ai/{accountName}.
The company may have no active job listings, or the account name may be incorrect. Check the meta.count field and verify the company's career page URL.
Some jobs may have empty job_description fields. Always check for null/empty values and handle gracefully in your parsing logic.
The /api/internal/ats_job_posts endpoint is undocumented and may change. Monitor for changes and implement error handling for unexpected response structures.
Responses use UTF-8 with Unicode escape sequences (e.g., \u003c for <). Use proper JSON parsing which handles this automatically.
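For the empty-description case noted above, a small guard keeps parsing robust. A minimal defensive sketch; clean_description is an illustrative helper, not part of the Kula API:

import re
from html import unescape

def clean_description(job: dict) -> str:
    """Return a plain-text description, tolerating null or missing fields."""
    raw = (job.get("ats_job") or {}).get("job_description") or ""
    text = re.sub(r"<[^>]+>", " ", raw)       # strip HTML tags
    return unescape(" ".join(text.split()))   # collapse whitespace, decode entities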
Best practices:
1. Use the API endpoint instead of HTML scraping for reliable data extraction.
2. Request 99 items per page to minimize pagination requests.
3. Filter out jobs where listed is false or is_confidential is true.
4. Handle multiple office locations by iterating the offices array (sketched after this list).
5. Cache results, since job boards typically update daily.
6. Validate the account name by checking that meta.count > 0 before a full extraction (sketched after this list).
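A minimal sketch of points 4 and 6 above, reusing the fetch helpers from earlier; validate_account and all_locations are illustrative names, and the response shapes assumed are the ones shown in this guide:

def validate_account(account_name: str) -> bool:
    """Confirm the account exists and has at least one job (point 6)."""
    data = fetch_kula_jobs(account_name, page=1, items=1)
    return data.get("meta", {}).get("count", 0) > 0

def all_locations(job: dict) -> list:
    """Collect every office location for a job, not just the first (point 4)."""
    offices = job.get("ats_job", {}).get("offices", [])
    return [office.get("location", "") for office in offices]

if validate_account("wizcommerce"):
    for job in fetch_all_kula_jobs("wizcommerce"):
        print(job.get("title"), "|", ", ".join(all_locations(job)) or "Remote")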
One endpoint. All Kula jobs. No scraping, no sessions, no maintenance.
Get API access.

curl "https://enterprise.jobo.world/api/jobs?sources=kula" \
  -H "X-Api-Key: YOUR_KEY"
Access Kula job data today.
One API call. Structured data. No scraping infrastructure to build or maintain. Start with the free tier and scale as you grow.