Push job data directly to your infrastructure.
A continuous pipeline that syncs job listings to your databases and search engines every 15 minutes—with incremental updates, deduplication, and automatic retries.
Three steps to continuous sync.
Choose a destination, configure connection credentials and data filters, activate the feed—and Jobo handles scheduling, retries, and deduplication for you.
Choose a destination.
Pick where you want job data delivered—PostgreSQL, MongoDB, Elasticsearch, Algolia, or Meilisearch.
Configure & save.
Enter connection credentials and optional data filters. Your configuration is saved and can be edited anytime.
Activate & sync.
Activate the feed to start pushing data. Jobo handles scheduling, retries, and deduplication—every 15 minutes, automatically.
Five destinations. Zero maintenance.
Push to databases and search engines used by thousands of engineering teams. Credentials are encrypted with AES-256 at rest—never stored in plain text.
PostgreSQL (Database)
Sync jobs into structured tables with full SQL query support, plus SSL and automatic table creation. See the query sketch below.
MongoDB (Database)
Push job listings as rich JSON documents. Supports MongoDB Atlas and self-hosted instances.
Elasticsearch (Search Engine)
Index job listings for full-text search and analytics. Supports basic auth, API key, or unauthenticated access.
Algolia (Search Engine)
Instant search with typo tolerance, faceting, and geo-search. Auto-creates the index.
Meilisearch (Search Engine)
Lightning-fast open-source search. Works with cloud-hosted and self-hosted instances.
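To show what "full SQL query support" looks like in practice, here is a minimal TypeScript sketch that queries a synced PostgreSQL destination. It assumes the node-postgres ("pg") client and a table named jobs; the actual table name and shape come from your configuration, since Jobo creates the table automatically.

import { Client } from "pg";

// Minimal sketch: query the synced table directly with SQL.
// Assumptions: node-postgres ("pg") and a table named "jobs";
// the real table name depends on your Jobo configuration.
async function remoteSeniorRoles(connectionString: string) {
  const db = new Client({ connectionString });
  await db.connect();
  const { rows } = await db.query(
    `SELECT title, company_name, salary_min, salary_max
       FROM jobs
      WHERE is_remote = true
        AND experience_level = 'Senior'
      ORDER BY date_posted DESC
      LIMIT 20`
  );
  await db.end();
  return rows;
}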
Don't see your destination? Request one
Connect in minutes.
Each destination is configured through the dashboard with validated fields. Credentials are encrypted before storage—never visible in logs.
PostgreSQL fields
host (string, required): Database server hostname or IP address.
port (number, required): Connection port. Defaults to 5432.
database (string, required): Target database name.
schema (string): Target schema. Defaults to "public".
username (string, required): Database user with write access.
password (string, required): User password. Encrypted at rest with AES-256.
ssl_mode (select): One of disable, allow, prefer, require, verify-ca, verify-full.
{
"host": "db.example.com",
"port": 5432,
"database": "jobs_db",
"schema": "public",
"username": "jobo_sync",
"password": "••••••••",
"ssl_mode": "require"
}
Built for reliability.
The sync engine runs on a 15-minute interval with incremental updates, automatic retries, and deduplication—so your data stays fresh without intervention.
Incremental sync
Each cycle processes only new and updated records. No full re-syncs unless you switch destinations or activate a fresh feed.
Deduplication
Jobs are deduplicated by unique ID. Existing records are updated in place (upsert): no duplicates, no stale data. See the upsert sketch below.
Automatic retries
Failed syncs retry with exponential backoff—1 min, 5 min, 15 min, 1 hour. You're notified of persistent failures.
Full backfill
Activating a feed or switching destinations triggers a full backfill, which may take several minutes depending on data volume.
Expiration handling
Expired jobs aren't removed automatically. Use the Expired Jobs API to detect stale records and clean up.
Schema mapping
Every destination gets the same 22-field schema. Records map automatically to tables, documents, or search indices.
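For a PostgreSQL destination, the upsert behavior described above is equivalent in effect to an INSERT ... ON CONFLICT statement keyed on the job ID. The TypeScript sketch below is illustrative only, not Jobo's actual implementation: the table name jobs is an assumption, and the columns are trimmed to three for brevity.

import { Client } from "pg";

// Illustrative upsert keyed on the unique job ID: existing rows are
// updated in place, so re-syncing the same job never creates duplicates.
// "jobs" is an assumed table name; columns are trimmed for brevity.
async function upsertJob(
  db: Client,
  job: { id: string; title: string; updated_at: string }
) {
  await db.query(
    `INSERT INTO jobs (id, title, updated_at)
     VALUES ($1, $2, $3)
     ON CONFLICT (id) DO UPDATE
       SET title = EXCLUDED.title,
           updated_at = EXCLUDED.updated_at`,
    [job.id, job.title, job.updated_at]
  );
}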
Push only the data you need.
Each configuration includes optional filters so only relevant jobs are synced. Narrow by location, ATS source, remote status, or posting date.
Filter parameters
locations (object[]): Filter by location. Each object supports country, region, and city. Multiple locations use OR matching.
sources (string[]): Filter by ATS source (e.g., "greenhouse", "lever"). All sources are included by default.
isRemote (boolean): When enabled, only remote positions are synced.
postedAfter (date): Only sync jobs posted after this date (ISO 8601).
{
"locations": [
{ "country": "US", "region": "California" },
{ "country": "GB", "city": "London" }
],
"sources": ["greenhouse", "lever", "workday"],
"isRemote": true,
"postedAfter": "2025-01-01"
}
22 fields per record.
Every job record pushed to your destination follows a consistent schema—structured for search, analytics, and integration with your existing systems.
Identity
id (string), title (string), company_name (string), company_id (string)
Content
description (string), listing_url (string), apply_url (string)
Location
city (string), state (string), country (string), is_remote (boolean)
Compensation
salary_min (number), salary_max (number), salary_currency (string), salary_period (string)
Classification
employment_type (string), workplace_type (string), experience_level (string), source (string)
Timestamps
date_posted (datetime), created_at (datetime), updated_at (datetime)
{
"id": "abc123-def456",
"title": "Senior Frontend Engineer",
"company_name": "Acme Corp",
"company_id": "acme-corp",
"description": "We're looking for a Senior Frontend Engineer...",
"listing_url": "https://boards.greenhouse.io/acme/jobs/123",
"apply_url": "https://boards.greenhouse.io/acme/jobs/123#app",
"city": "San Francisco",
"state": "California",
"country": "US",
"salary_min": 150000,
"salary_max": 200000,
"salary_currency": "USD",
"salary_period": "Year",
"employment_type": "Full-time",
"workplace_type": "Hybrid",
"experience_level": "Senior",
"is_remote": false,
"source": "greenhouse",
"date_posted": "2025-02-10T08:00:00Z",
"created_at": "2025-02-10T09:15:00Z",
"updated_at": "2025-02-12T14:30:00Z"
}
Enterprise-grade protection.
Credentials are encrypted before storage and data transfers use TLS 1.3. Your connection details never appear in logs or error reports.
AES-256 at rest
All credentials are encrypted with AES-256 before storage. Never stored in plain text or included in logs.
TLS 1.3 in transit
All data transfers between Jobo and your destination use TLS 1.3 with the strongest available cipher suites.
Minimal permissions
We recommend a dedicated user with write-only access to the target table, index, or collection.
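For a PostgreSQL destination, a least-privilege setup might look like the sketch below. The role name (jobo_sync) and table name (jobs) are assumptions for this sketch, not Jobo requirements; if you rely on automatic table creation, the role will also need CREATE on the schema, and PostgreSQL upserts may additionally need SELECT on the conflict key.

import { Client } from "pg";

// Illustrative least-privilege setup, run as an admin. The role name
// ("jobo_sync") and table name ("jobs") are assumptions for this sketch.
async function createSyncRole(admin: Client) {
  await admin.query(`CREATE ROLE jobo_sync LOGIN PASSWORD 'change-me'`);
  await admin.query(`GRANT USAGE ON SCHEMA public TO jobo_sync`);
  // Write access to the target table only. Upserts (ON CONFLICT) may
  // also require SELECT on the conflict key in PostgreSQL.
  await admin.query(`GRANT SELECT, INSERT, UPDATE ON TABLE jobs TO jobo_sync`);
}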
Quotas & rate limits.
Designed for high-volume production use with generous defaults and automatic retry handling.
45+ ATS sources.
Filter your outbound feed by any combination of applicant tracking systems. All sources are included by default.
Frequently asked.
Common questions about the outbound feed, destination configuration, and sync behavior.
Can I have multiple active feeds?
Currently, only one feed can be active at a time. You can save multiple configurations and switch between them instantly. Switching triggers a full backfill to the new destination.
What happens when I switch destinations?
The previous feed is deactivated and the new one is activated. The new destination receives a full backfill on first sync. Data in the previous destination is not deleted.
Are expired jobs removed from my destination?
No. The outbound feed only pushes new and updated jobs. To handle expirations, use the Expired Jobs API endpoint to get recently expired job IDs and remove them from your destination.
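A scheduled cleanup job along these lines could handle expirations. The endpoint URL and response shape below are placeholders, not the documented API, so consult the Expired Jobs API reference for the real contract; the table name jobs is likewise an assumption.

import { Client } from "pg";

// Hypothetical sketch: fetch recently expired job IDs and delete them
// from the synced table. The URL and response shape are placeholders;
// check the Expired Jobs API docs for the actual contract.
async function removeExpiredJobs(db: Client, apiKey: string) {
  const res = await fetch("https://api.jobo.example/v1/jobs/expired", {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  const { ids } = (await res.json()) as { ids: string[] };
  if (ids.length > 0) {
    await db.query(`DELETE FROM jobs WHERE id = ANY($1)`, [ids]);
  }
}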
How do I test my connection before activating?
When you save a configuration, the connection is validated automatically. If it fails, you'll see an error with details. You can save without activating to test later.
What's the difference between Outbound Feed and Export?
Export is a one-time download in CSV, JSON, or Parquet. Outbound Feed is a continuous pipeline that keeps your destination in sync every 15 minutes. Use Export for ad-hoc analysis; use Outbound Feed for production integrations.
Do I need a subscription?
Activating a feed requires an active Jobs Data subscription. You can configure, save, and test destinations without one—activation is the only step that requires a plan.
Ready to pipe your job data?
Configure your first destination in minutes. No code, no cron jobs, no maintenance.