Transform job data before it hits your destination.
A visual node-based editor for reshaping, enriching, and augmenting job listings with Liquid templates, AI models, and JSON merging—per-job, every sync cycle.
Five steps to intelligent data.
Build a transformation graph in the visual editor, test it with real job data, then attach it to an outbound feed. Every job passes through your pipeline before hitting the destination.
Create a pipeline.
Open the visual editor and give your pipeline a name and description.
Add nodes.
Drop Liquid, AI, or JSON Merge nodes onto the canvas and connect them to the Input trigger.
Configure each node.
Write Liquid templates, set AI model and prompts, or choose merge strategy—each node has its own config panel.
Test with real data.
Load a sample job from your feed and run the pipeline. Inspect each node's output and timing.
Attach to a feed.
Select your pipeline from the outbound feed settings. It runs automatically on every sync cycle.
Five node types. Infinite combinations.
Each pipeline is a directed acyclic graph of nodes. Connect them in any order—branch, merge, chain—to build exactly the transformation you need.
Input / Trigger
Entry point: the starting point of every pipeline. Receives the raw job JSON from the outbound feed and passes it downstream to connected nodes.
Liquid Template
Transform: reshape and remap JSON fields using Liquid templating syntax. Fast, deterministic, and free; ideal for field renaming, date formatting, and filtering.
AI (OpenRouter)
Enrich: send data to any LLM (GPT-4o, Claude, Gemini, Llama) via OpenRouter. Extract skills, classify roles, generate SEO metadata, or rewrite descriptions.
JSON Merge
Combine: merge two JSON inputs (base + patch) into a single object. Deep or shallow strategy with configurable conflict resolution.
Output
Exit point: the final node. The JSON from this node is what gets pushed to your destination. Every pipeline ends here.
Deterministic field remapping.
Reshape and remap JSON fields using Liquid templating syntax. The input data is available as {{ data.fieldName }}. Your template must output valid JSON.
Available variables
data (object): The full JSON object from the parent node. Access nested fields with dot notation: data.company.name
data.title (string): Job title.
data.description (string): Full job description (HTML).
data.company (object): Company object with name, url, and logo fields.
data.location (object): Location object with city, state, and country fields.
data.apply_url (string): Direct URL to the job application page.

Example template:
{
"jobTitle": "{{ data.title }}",
"employer": "{{ data.company.name }}",
"location": "{{ data.location.city }}, {{ data.location.state }}",
"applyUrl": "{{ data.apply_url }}",
"postedAt": "{{ data.published_at | date: '%Y-%m-%d' }}"
}

LLM-powered enrichment.
Send job data to any model on OpenRouter—GPT-4o, Claude, Gemini, Llama, and more. Extract skills, classify seniority, generate SEO metadata, or rewrite descriptions in seconds.
Configuration
model (string, required): OpenRouter model ID, e.g. openai/gpt-4o-mini.
systemPrompt (string, required): Instructions for the AI model. Should include "respond with valid JSON".
userPrompt (string, required): The prompt template. Use {{input}} to inject the parent node's output.
outputSchema (string, optional): JSON schema to constrain the AI's output structure.
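For example, an outputSchema matching the skills-extraction prompts below could look like this (a sketch; how strictly the schema is enforced depends on the model):

{
  "type": "object",
  "properties": {
    "technical_skills": { "type": "array", "items": { "type": "string" } },
    "soft_skills": { "type": "array", "items": { "type": "string" } },
    "seniority": { "type": "string", "enum": ["junior", "mid", "senior"] }
  },
  "required": ["technical_skills", "soft_skills", "seniority"]
}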
System prompt:
You are a job data extraction assistant. Always respond with valid JSON. Extract technical skills, soft skills, and seniority level from the job posting.

User prompt:
Extract skills and seniority from this job posting:
{{input}}
Respond with:
{
"technical_skills": ["skill1", "skill2"],
"soft_skills": ["skill1"],
"seniority": "senior" | "mid" | "junior"
}

Combine two inputs into one.
The JSON Merge node has two input handles—base and patch. It merges them into a single JSON object, preserving original job data while adding AI-generated fields.
Configuration
strategy (enum): Deep recursively merges nested objects; Shallow merges only top-level keys. Default: Deep.
conflictResolution (enum): PatchWins means patch values override base values on conflict; BaseWins preserves base values. Default: PatchWins.
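To see how the strategies differ, suppose both inputs contain a company object (values are illustrative):

Base: { "company": { "name": "Acme", "size": 50 } }
Patch: { "company": { "logo": "https://acme.example/logo.png" } }
Deep result: { "company": { "name": "Acme", "size": 50, "logo": "https://acme.example/logo.png" } }
Shallow result: { "company": { "logo": "https://acme.example/logo.png" } }

With Deep, nested objects are combined key by key; with Shallow, the patch's top-level company key replaces the base's entirely.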
Base input:
{ "title": "Software Engineer", "company": { "name": "Acme" }, "location": "San Francisco" }

Patch input:
{ "skills": ["Python", "AWS"], "seniority": "senior", "salary_estimate": "$150k-$180k" }

Merged output:
{ "title": "Software Engineer", "company": { "name": "Acme" }, "location": "San Francisco", "skills": ["Python", "AWS"], "seniority": "senior", "salary_estimate": "$150k-$180k" }

Common patterns, ready to use.
Each uses the same core pattern: Input → Transform → Merge → Output. Start from a template in the pipeline editor or build your own from scratch.
Extract Requirements
Pull structured requirements and responsibilities from free-text job descriptions using AI, then merge them back into the original job object.
Generate SEO Metadata
Generate seoTitle, seoDescription, seoSlug, seoKeywords, and a structuredData snippet for search engine optimization.
Remap for ATS Feed
Rename and restructure fields to match your internal ATS schema using Liquid templates. No AI cost, instant execution.
Classify Seniority
Use an AI node to classify job postings into junior, mid, senior, or lead seniority levels based on title and description.
Rewrite Descriptions
Rewrite job descriptions to match your brand voice, tone, and formatting standards using an AI node.
Salary Normalization
Extract and normalize salary ranges from unstructured descriptions into consistent min/max/currency/period fields.
Pre-built for common tasks.
The template picker offers ready-made pipelines organized by category. Pick one, customize the prompts, and attach it to your feed.
Extract
Pull structured data from unstructured job descriptions. Extract skills, requirements, qualifications, and more.
Rewrite
Transform content with AI. Rewrite descriptions, translate text, adjust tone, or summarize long postings.
Enrich
Add new fields to job records. Generate SEO metadata, salary estimates, seniority classifications, and department tags.
Format
Reshape JSON structure without AI. Rename fields, format dates, filter properties, and restructure nested objects.
Attach to an outbound feed.
Pipelines don't run in isolation—they're attached to outbound feeds. When a feed syncs, each job passes through the pipeline before being written to the destination.
Create your pipeline
Build and test your pipeline in the visual editor. Verify each node's output with real job data.
Open your outbound feed
Navigate to the outbound feed page and select the feed you want to transform.
Select a pipeline
In the feed settings, choose your pipeline from the dropdown. It runs automatically on every sync cycle.
One pipeline per feed
Each outbound feed can have at most one pipeline attached. To apply different transformations, create separate feeds with different pipelines.
DAG-based, per-job processing.
Pipelines execute using topological sort (Kahn's algorithm). Each node runs only after all its input dependencies have completed.
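Conceptually, the scheduler works like the sketch below (the node shape, runner, and "output" id are illustrative assumptions, not the product's actual internals):

type NodeId = string;

interface PipelineNode {
  id: NodeId;
  inputs: NodeId[];                             // parent node ids this node depends on
  run: (inputs: unknown[]) => Promise<unknown>; // Liquid render, AI call, or JSON merge
}

// Kahn's algorithm: repeatedly run nodes whose dependencies are all complete.
async function executePipeline(nodes: PipelineNode[], job: unknown): Promise<unknown> {
  const byId = new Map<NodeId, PipelineNode>();
  const indegree = new Map<NodeId, number>();
  const children = new Map<NodeId, NodeId[]>();
  for (const n of nodes) {
    byId.set(n.id, n);
    indegree.set(n.id, n.inputs.length);
    for (const parent of n.inputs) {
      children.set(parent, [...(children.get(parent) ?? []), n.id]);
    }
  }

  const results = new Map<NodeId, unknown>();
  // Seed the queue with dependency-free nodes (the Input/Trigger node).
  const queue = nodes.filter((n) => n.inputs.length === 0).map((n) => n.id);

  while (queue.length > 0) {
    const id = queue.shift()!;
    const node = byId.get(id)!;
    // The trigger receives the raw job JSON; other nodes receive their parents' outputs.
    const args = node.inputs.length === 0 ? [job] : node.inputs.map((p) => results.get(p));
    results.set(id, await node.run(args)); // a throw here fails the pipeline for this job only

    for (const child of children.get(id) ?? []) {
      const remaining = indegree.get(child)! - 1;
      indegree.set(child, remaining);
      if (remaining === 0) queue.push(child); // all dependencies completed
    }
  }

  return results.get("output"); // the Output node's JSON goes to the destination
}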
Per-job execution
Each job in the feed runs through the pipeline independently. Failures on one job don't affect others.
Node-level results
Each node produces an output JSON and timing info. Use the test panel to inspect individual node results.
Error handling
If a node fails, the pipeline stops for that job and reports the error. The job is skipped in the destination push.
Idempotent
Running the same job through the same pipeline produces the same output, assuming deterministic AI settings.
Best practices.
Follow these guidelines to build reliable, cost-effective transformation pipelines.
Always test with real data.
Use the test panel to load a real job from your feed and verify each node's output before attaching to a live feed.
Use JSON Merge to preserve original data.
Don't pipe the whole job through an AI node and treat its raw output as the final result. Branch from Input to both the AI node and a JSON Merge node so the original fields are always preserved.
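In graph form, the recommended shape (using the same arrow notation as the patterns above) is:

Input → AI → JSON Merge (patch handle)
Input → JSON Merge (base handle)
JSON Merge → Output

The base handle receives the untouched job; the patch handle receives only the AI's additions.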
Keep AI prompts focused.
Ask the AI to extract or transform specific fields rather than processing the entire job object. This reduces cost and improves reliability.
Use Liquid for deterministic transforms.
For simple field renaming, date formatting, or filtering, use Liquid instead of AI. It's faster, cheaper, and fully deterministic.
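For instance, a pure-Liquid template can drop an empty field with no AI call at all (a sketch using the variables documented above; the output is valid JSON whether or not a city is present):

{
  "title": "{{ data.title }}",
  {% if data.location.city %}"city": "{{ data.location.city }}",{% endif %}
  "applyUrl": "{{ data.apply_url }}"
}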
Start from templates.
The template picker offers pre-built pipelines for common use cases. Start there and customize rather than building from scratch.
Frequently asked.
Common questions about transformation pipelines, node types, and execution.
Can I use multiple AI nodes in one pipeline?
Yes. You can chain AI nodes or run them in parallel branches. Each AI node can use a different model and prompt.
What happens if the AI model is slow or times out?
AI nodes have a configurable timeout. If the model doesn't respond in time, the node fails and the job is skipped for that sync cycle. It'll be retried on the next cycle.
Do pipelines add latency to my feed sync?
Liquid and JSON Merge nodes are near-instant. AI nodes add latency proportional to the model's response time (typically 1–5 seconds per job). Jobs are processed in parallel batches.
Can I use pipelines without an outbound feed?
Pipelines are designed to work with outbound feeds. However, you can test them standalone using the test panel in the pipeline editor.
Is there a limit on the number of nodes?
There's no hard limit, but we recommend keeping pipelines under 10 nodes for maintainability. Complex logic can often be simplified with better Liquid templates or more focused AI prompts.
What AI models are supported?
Any model available on OpenRouter—including GPT-4o, GPT-4o-mini, Claude 3.5 Sonnet, Gemini Pro, Llama 3, and hundreds more. You can choose different models for different AI nodes.
Ready to transform your job data?
Build your first pipeline in the visual editor. Start from a template or wire nodes from scratch.