Transformation Pipelines

Transform job data before it hits your destination.

A visual node-based editor for reshaping, enriching, and augmenting job listings with Liquid templates, AI models, and JSON merging—per-job, every sync cycle.

Pipeline Engine: Live
5 node types
4 template categories
DAG execution model
Per-job processing

Five steps to intelligent data.

Build a transformation graph in the visual editor, test it with real job data, then attach it to an outbound feed. Every job passes through your pipeline before hitting the destination.

01

Create a pipeline.

Open the visual editor and give your pipeline a name and description.

Drag-and-drop node canvas
02

Add nodes.

Drop Liquid, AI, or JSON Merge nodes onto the canvas and connect them to the Input trigger.

5 node types available
03

Configure each node.

Write Liquid templates, set AI model and prompts, or choose merge strategy—each node has its own config panel.

Inline configuration panels
04

Test with real data.

Load a sample job from your feed and run the pipeline. Inspect each node's output and timing.

Node-level result inspection
05

Attach to a feed.

Select your pipeline from the outbound feed settings. It runs automatically on every sync cycle.

Zero-maintenance execution

Five node types. Infinite combinations.

Each pipeline is a directed acyclic graph of nodes. Connect them in any order—branch, merge, chain—to build exactly the transformation you need.

Input / Trigger

Entry point

The starting point of every pipeline. Receives the raw job JSON from the outbound feed and passes it downstream to connected nodes.

Liquid Template

Transform

Reshape and remap JSON fields using Liquid templating syntax. Fast, deterministic, and free—ideal for field renaming, date formatting, and filtering.

AI (OpenRouter)

Enrich

Send data to any LLM—GPT-4o, Claude, Gemini, Llama—via OpenRouter. Extract skills, classify roles, generate SEO metadata, or rewrite descriptions.

JSON Merge

Combine

Merge two JSON inputs (base + patch) into a single object. Deep or shallow strategy with configurable conflict resolution.

Output

Exit point

The final node. The JSON from this node is what gets pushed to your destination. Every pipeline ends here.

Liquid Node

Deterministic field remapping.

Reshape and remap JSON fields using Liquid templating syntax. The input data is available as {{ data.fieldName }}. Your template must output valid JSON.

Available variables

data (object)

The full JSON object from the parent node. Access nested fields with dot notation: data.company.name

data.title (string)

Job title.

data.description (string)

Full job description (HTML).

data.company (object)

Company object with name, url, and logo fields.

data.location (object)

Location object with city, state, and country fields.

data.apply_url (string)

Direct URL to the job application page.
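For reference, here is a representative input shaped like the variables above. The template below also references data.published_at, assumed here to be an ISO timestamp.

Sample input job
{
  "title": "Senior Software Engineer",
  "description": "<p>We are hiring a senior engineer...</p>",
  "company": { "name": "Acme", "url": "https://acme.example", "logo": "https://acme.example/logo.png" },
  "location": { "city": "San Francisco", "state": "CA", "country": "US" },
  "apply_url": "https://acme.example/jobs/123/apply",
  "published_at": "2024-05-01T09:00:00Z"
}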

Liquid template
{
  "jobTitle": "{{ data.title }}",
  "employer": "{{ data.company.name }}",
  "location": "{{ data.location.city }}, {{ data.location.state }}",
  "applyUrl": "{{ data.apply_url }}",
  "postedAt": "{{ data.published_at | date: '%Y-%m-%d' }}"
}
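Given the sample input above, this template renders:

Rendered output
{
  "jobTitle": "Senior Software Engineer",
  "employer": "Acme",
  "location": "San Francisco, CA",
  "applyUrl": "https://acme.example/jobs/123/apply",
  "postedAt": "2024-05-01"
}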
AI Node

LLM-powered enrichment.

Send job data to any model on OpenRouter—GPT-4o, Claude, Gemini, Llama, and more. Extract skills, classify seniority, generate SEO metadata, or rewrite descriptions in seconds.

Configuration

model (string, required)

OpenRouter model ID, e.g. openai/gpt-4o-mini.

systemPrompt (string, required)

Instructions for the AI model. Should include "respond with valid JSON".

userPrompt (string, required)

The prompt template. Use {{input}} to inject the parent node's output.

outputSchema (string, optional)

Optional JSON schema to constrain the AI's output structure.

System prompt
You are a job data extraction assistant. Always respond with valid JSON. Extract technical skills, soft skills, and seniority level from the job posting.
User prompt
Extract skills and seniority from this job posting:

{{input}}

Respond with:
{
  "technical_skills": ["skill1", "skill2"],
  "soft_skills": ["skill1"],
  "seniority": "senior" | "mid" | "junior"
}
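A minimal outputSchema matching the prompt above, assuming standard JSON Schema syntax:

Output schema
{
  "type": "object",
  "properties": {
    "technical_skills": { "type": "array", "items": { "type": "string" } },
    "soft_skills": { "type": "array", "items": { "type": "string" } },
    "seniority": { "type": "string", "enum": ["senior", "mid", "junior"] }
  },
  "required": ["technical_skills", "soft_skills", "seniority"]
}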
JSON Merge Node

Combine two inputs into one.

The JSON Merge node has two input handles—base and patch. It merges them into a single JSON object, preserving original job data while adding AI-generated fields.

Configuration

strategy (enum)

Deep — recursively merges nested objects. Shallow — only merges top-level keys. Default: Deep.

conflictResolution (enum)

PatchWins — patch values override base on conflict. BaseWins — base values are preserved. Default: PatchWins.
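To see how the two options interact, consider a nested conflict (values are illustrative):

How the strategies differ
Base:  { "company": { "name": "Acme", "size": 50 } }
Patch: { "company": { "name": "Acme Corp" } }

Deep + PatchWins → { "company": { "name": "Acme Corp", "size": 50 } }
Shallow + PatchWins → { "company": { "name": "Acme Corp" } }
Deep + BaseWins → { "company": { "name": "Acme", "size": 50 } }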

Base (original data)
{ "title": "Software Engineer", "company": { "name": "Acme" }, "location": "San Francisco" }
+
Patch (AI output)
{ "skills": ["Python", "AWS"], "seniority": "senior", "salary_estimate": "$150k-$180k" }
=
Merged result
{ "title": "Software Engineer", "company": { "name": "Acme" }, "location": "San Francisco", "skills": ["Python", "AWS"], "seniority": "senior", "salary_estimate": "$150k-$180k" }
Example Flows

Common patterns, ready to use.

Each uses the same core pattern: Input → Transform → Merge → Output. Start from a template in the pipeline editor or build your own from scratch.

Extract

Extract Requirements

Pull structured requirements and responsibilities from free-text job descriptions using AI, then merge them back into the original job object.

Input → AI Extract → JSON Merge → Output
Enrich

Generate SEO Metadata

Generate seoTitle, seoDescription, seoSlug, seoKeywords, and a structuredData snippet for search engine optimization; a sample of the generated fields appears after this list.

Input → AI SEO → JSON Merge → Output
Format

Remap for ATS Feed

Rename and restructure fields to match your internal ATS schema using Liquid templates. No AI cost, instant execution.

Input → Liquid Remap → Output
Extract

Classify Seniority

Use an AI node to classify job postings into junior, mid, senior, or lead seniority levels based on title and description.

Input → AI Classify → JSON Merge → Output
Rewrite

Rewrite Descriptions

Rewrite job descriptions to match your brand voice, tone, and formatting standards using an AI node.

Input → AI Rewrite → JSON Merge → Output
Enrich

Salary Normalization

Extract and normalize salary ranges from unstructured descriptions into consistent min/max/currency/period fields.

Input → AI Salary → JSON Merge → Output
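For instance, the SEO flow above might emit a patch like this before the merge (all values illustrative):

Sample SEO patch
{
  "seoTitle": "Senior Software Engineer at Acme | San Francisco",
  "seoDescription": "Join Acme as a Senior Software Engineer building cloud infrastructure in San Francisco.",
  "seoSlug": "senior-software-engineer-acme-san-francisco",
  "seoKeywords": ["software engineer", "senior", "san francisco"],
  "structuredData": { "@type": "JobPosting", "title": "Senior Software Engineer" }
}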
Templates

Pre-built for common tasks.

The template picker offers ready-made pipelines organized by category. Pick one, customize the prompts, and attach it to your feed.

Extract

Pull structured data from unstructured job descriptions. Extract skills, requirements, qualifications, and more.

Skills extraction · Requirements parsing · Qualification mapping

Rewrite

Transform content with AI. Rewrite descriptions, translate text, adjust tone, or summarize long postings.

Description rewrite · Tone adjustment · Text translation

Enrich

Add new fields to job records. Generate SEO metadata, salary estimates, seniority classifications, and department tags.

SEO metadata · Salary estimation · Department tagging

Format

Reshape JSON structure without AI. Rename fields, format dates, filter properties, and restructure nested objects.

Field renaming · Date formatting · Schema conversion
Integration

Attach to an outbound feed.

Pipelines don't run in isolation—they're attached to outbound feeds. When a feed syncs, each job passes through the pipeline before being written to the destination.

1

Create your pipeline

Build and test your pipeline in the visual editor. Verify each node's output with real job data.

2

Open your outbound feed

Navigate to the outbound feed page and select the feed you want to transform.

3

Select a pipeline

In the feed settings, choose your pipeline from the dropdown. It runs automatically on every sync cycle.

One pipeline per feed

Each outbound feed can have at most one pipeline attached. To apply different transformations, create separate feeds with different pipelines.

Execution

DAG-based, per-job processing.

Pipelines execute using topological sort (Kahn's algorithm). Each node runs only after all its input dependencies have completed.
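A compact TypeScript sketch of that ordering. Node IDs and the edge-list shape are assumptions made for illustration, not the engine's internal data model.

DAG scheduling sketch (TypeScript)
// Kahn's algorithm: repeatedly run nodes whose dependencies have all completed.
function topologicalOrder(nodes: string[], edges: Array<[string, string]>): string[] {
  const inDegree = new Map<string, number>(nodes.map((n): [string, number] => [n, 0]));
  const children = new Map<string, string[]>(nodes.map((n): [string, string[]] => [n, []]));
  for (const [from, to] of edges) {
    children.get(from)!.push(to);
    inDegree.set(to, inDegree.get(to)! + 1);
  }
  // Seed with dependency-free nodes (the Input trigger).
  const ready = nodes.filter((n) => inDegree.get(n) === 0);
  const order: string[] = [];
  while (ready.length > 0) {
    const node = ready.shift()!;
    order.push(node); // Safe to run: every upstream node has finished.
    for (const child of children.get(node) ?? []) {
      inDegree.set(child, inDegree.get(child)! - 1);
      if (inDegree.get(child) === 0) ready.push(child);
    }
  }
  if (order.length !== nodes.length) throw new Error("Cycle detected: not a DAG");
  return order;
}

// topologicalOrder(["input", "ai", "merge", "output"],
//   [["input", "ai"], ["input", "merge"], ["ai", "merge"], ["merge", "output"]])
// → ["input", "ai", "merge", "output"]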

Per-job execution

Each job in the feed runs through the pipeline independently. Failures on one job don't affect others.

Node-level results

Each node produces an output JSON and timing info. Use the test panel to inspect individual node results.

Error handling

If a node fails, the pipeline stops for that job and reports the error. The job is skipped in the destination push.

Idempotent

Running the same job through the same pipeline produces the same output, assuming deterministic AI settings.
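The usual way to get there, assuming the AI node exposes standard sampling parameters (an assumption; check the node's config panel), is to pin the temperature to 0:

Hypothetical deterministic AI settings
{
  "model": "openai/gpt-4o-mini",
  "temperature": 0
}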

Best practices.

Follow these guidelines to build reliable, cost-effective transformation pipelines.

01

Always test with real data.

Use the test panel to load a real job from your feed and verify each node's output before attaching to a live feed.

02

Use JSON Merge to preserve original data.

Don't route the whole job through an AI node and treat its raw output as the final result. Branch from Input to both the AI node and the JSON Merge node so original fields are always preserved.
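In graph terms, the recommended wiring looks like this:

Branch topology
Input      → AI Extract          (fields to enrich)
Input      → JSON Merge [base]   (original job preserved)
AI Extract → JSON Merge [patch]  (new fields only)
JSON Merge → Output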

03

Keep AI prompts focused.

Ask the AI to extract or transform specific fields rather than processing the entire job object. This reduces cost and improves reliability.

04

Use Liquid for deterministic transforms.

For simple field renaming, date formatting, or filtering—use Liquid instead of AI. It's faster, cheaper, and 100% deterministic.
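For example, standard Liquid filters cover common cleanup without an AI call (an illustrative template):

Liquid cleanup example
{
  "title": "{{ data.title | strip }}",
  "summary": "{{ data.description | strip_html | truncate: 160 }}",
  "posted": "{{ data.published_at | date: '%b %d, %Y' }}"
}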

05

Start from templates.

The template picker offers pre-built pipelines for common use cases. Start there and customize rather than building from scratch.

Frequently asked.

Common questions about transformation pipelines, node types, and execution.

Can I use multiple AI nodes in one pipeline?

Yes. You can chain AI nodes or run them in parallel branches. Each AI node can use a different model and prompt.

What happens if the AI model is slow or times out?

AI nodes have a configurable timeout. If the model doesn't respond in time, the node fails and the job is skipped for that sync cycle. It'll be retried on the next cycle.

Do pipelines add latency to my feed sync?

Liquid and JSON Merge nodes are near-instant. AI nodes add latency proportional to the model's response time (typically 1–5 seconds per job). Jobs are processed in parallel batches.

Can I use pipelines without an outbound feed?

Pipelines are designed to work with outbound feeds. However, you can test them standalone using the test panel in the pipeline editor.

Is there a limit on the number of nodes?

There's no hard limit, but we recommend keeping pipelines under 10 nodes for maintainability. Complex logic can often be simplified with better Liquid templates or more focused AI prompts.

What AI models are supported?

Any model available on OpenRouter—including GPT-4o, GPT-4o-mini, Claude 3.5 Sonnet, Gemini Pro, Llama 3, and hundreds more. You can choose different models for different AI nodes.

Ready to transform your job data?

Build your first pipeline in the visual editor. Start from a template or wire nodes from scratch.