What are Pipelines?
Transformation Pipelines let you reshape, enrich, and transform job data before it's pushed to an external destination via an Outbound Feed. Build visual flows by dragging nodes onto a canvas, connecting them, and configuring each step, with no code required. Pipelines are attached to Outbound Feeds and run automatically on every sync cycle. Each job passes through your pipeline before being written to the destination (PostgreSQL, Elasticsearch, Algolia, etc.).

Note: Pipelines can only be used with Outbound Feeds. They run automatically during each feed sync cycle; you cannot trigger them independently, though you can test them standalone in the editor.
Feature Highlights
Liquid Templates
Rename fields, filter arrays, format dates, and reshape JSON using the
Liquid templating language.
Sub-millisecond execution.
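For example, a template that renames fields, formats a date, and flattens an array might look like the sketch below. The field names and the `json` filter are illustrative; your job schema and the available custom filters may differ.

```liquid
{
  "title": {{ job.title | strip | json }},
  "posted": {{ job.created_at | date: "%Y-%m-%d" | json }},
  "tags": {{ job.categories | map: "name" | json }}
}
```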
AI Transforms
Send data to GPT-4o, Claude, Gemini, Llama, or any
OpenRouter model to extract skills, classify jobs,
or rewrite descriptions.
JSON Merge
Combine AI output with original data using deep or shallow merge with
configurable conflict resolution. Preserves all existing fields.
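A deep merge with configurable conflict resolution can be sketched as follows. This is an illustrative implementation, not the product's actual merge code; the `prefer` parameter stands in for whatever conflict-resolution options the node exposes.

```python
from copy import deepcopy

def deep_merge(base: dict, patch: dict, prefer: str = "patch") -> dict:
    """Recursively merge patch into base.

    prefer: which side wins when both define the same non-dict key
            ("patch" or "base") - a sketch of configurable conflict resolution.
    Nested dicts are merged key by key, so existing fields are preserved.
    """
    out = deepcopy(base)
    for key, value in patch.items():
        if key in out and isinstance(out[key], dict) and isinstance(value, dict):
            out[key] = deep_merge(out[key], value, prefer)  # recurse into nested objects
        elif key in out and prefer == "base":
            continue  # conflict: keep the original field
        else:
            out[key] = value  # new key, or patch wins the conflict
    return out

# Combining AI output with the original job without losing meta.source:
job = {"title": "Engineer", "meta": {"source": "feed", "lang": "en"}}
ai_output = {"skills": ["python"], "meta": {"lang": "de"}}
merged = deep_merge(job, ai_output)
```

A shallow merge would replace `meta` wholesale; the deep variant keeps `meta.source` while letting the AI output override `meta.lang`.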
Node Types
Every pipeline has exactly one Input node and one Output node. In between, you can add any combination of transformation nodes.

Input
Entry point — receives raw job JSON from the Outbound Feed sync engine
Liquid Template
Transform data using Liquid template syntax with custom filters
AI (OpenRouter)
Process data with any LLM via OpenRouter with structured JSON output
JSON Merge
Combine two JSON inputs (base + patch) with configurable strategies
Output
Exit point — final JSON pushed to your destination
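To illustrate the AI node's structured JSON output, here is what a request through OpenRouter's OpenAI-compatible chat completions API could look like. This is a hand-written sketch, not the node's actual payload; the model name, prompt, and schema are placeholders, and `{{ job_json }}` stands in for the job data fed into the node.

```json
{
  "model": "openai/gpt-4o",
  "messages": [
    {"role": "system", "content": "Extract skills from the job description."},
    {"role": "user", "content": "{{ job_json }}"}
  ],
  "response_format": {
    "type": "json_schema",
    "json_schema": {
      "name": "job_skills",
      "strict": true,
      "schema": {
        "type": "object",
        "properties": {
          "skills": {"type": "array", "items": {"type": "string"}}
        },
        "required": ["skills"],
        "additionalProperties": false
      }
    }
  }
}
```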
How It Works
Every pipeline is a directed acyclic graph (DAG) of nodes. Data flows from the Input node through transformation nodes and arrives at the Output node.

Create a Pipeline
Open the visual editor in Data → Pipelines in the dashboard. Start from
a blank canvas or choose a pre-built template (field remapping, AI
enrichment, etc.).
Add & Connect Nodes
Drag Liquid, AI, or JSON Merge nodes from the palette onto the canvas.
Connect outputs to inputs to define the data flow.
Configure Each Node
Click a node to open its config panel. Write Liquid templates, set AI
prompts and output schemas, or choose merge strategies.
Test with Real Data
Use the built-in test panel to run a real job from your feed through the
pipeline and inspect each node’s output before going live.
Attaching to Outbound Feeds
Pipelines don't run in isolation; they're attached to Outbound Feeds. When a feed syncs, each job passes through the attached pipeline before being written to the destination.
- Create your pipeline — build and test it in the Pipeline Editor
- Open your Outbound Feed — navigate to the Outbound Feed page and select the feed you want to transform
- Select a pipeline — in the feed settings, choose your pipeline from the dropdown
One pipeline per feed. Each outbound feed can have at most one pipeline
attached. To apply different transformations, create separate feeds with
different pipelines.
Execution Model
- Nodes execute in topological order using Kahn’s algorithm (respecting dependencies)
- Independent nodes run in parallel for maximum throughput
- Each job runs through the pipeline independently — failures on one job don’t affect others
- Each node produces an output JSON and timing info
- Failed nodes halt downstream execution but don’t affect parallel branches
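The scheduling rules above can be sketched with Kahn's algorithm: nodes run once all their dependencies have produced output, and a failure marks only its downstream nodes as skipped. This is a simplified sequential sketch; in the description above, independent nodes at the same frontier run in parallel.

```python
from collections import deque

def run_pipeline(nodes, edges, run_node):
    """Execute a DAG of nodes in topological order (Kahn's algorithm).

    nodes: iterable of node ids
    edges: list of (src, dst) dependency pairs
    run_node: callable(node_id) -> output; raises on failure
    Returns {node_id: output} for nodes that ran. A failed node halts
    its downstream nodes but leaves independent branches untouched.
    """
    deps = {n: set() for n in nodes}
    children = {n: set() for n in nodes}
    for src, dst in edges:
        deps[dst].add(src)
        children[src].add(dst)

    ready = deque(n for n in nodes if not deps[n])  # nodes with no dependencies
    results, skipped = {}, set()

    while ready:
        node = ready.popleft()
        if node in skipped:
            continue
        try:
            results[node] = run_node(node)
        except Exception:
            # Mark everything reachable from the failed node as skipped.
            stack = [node]
            while stack:
                for child in children[stack.pop()]:
                    if child not in skipped:
                        skipped.add(child)
                        stack.append(child)
            continue
        for child in children[node]:
            deps[child].discard(node)
            if not deps[child]:
                ready.append(child)  # all dependencies satisfied
    return results
```

With nodes `in → {liquid, ai}` and `{liquid, ai} → merge → out`, a failure in `ai` skips `merge` and `out` while `liquid` still completes.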

