Design Principles
Start simple
Begin with a basic Input → Output flow. Add transformation nodes one at a
time and test after each addition.
Always test with real data
Use the test panel to load a real job from your feed and verify each node’s
output before attaching to a live feed.
Preserve original data with JSON Merge
Never route all data through an AI node and use its raw output directly. Instead, branch from
Input to both the AI node and a JSON Merge node, so the original fields are always preserved.
Use output schemas for AI nodes
Always define output schemas for AI nodes to ensure consistent, parseable
results. This makes downstream merge and processing reliable.
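As a sketch of why schemas matter, the check below validates an AI node's JSON output against a minimal hand-rolled schema. The field names and the `validate` helper are illustrative, not part of the product:

```python
import json

# Illustrative schema: required fields and their expected types.
# These field names are hypothetical examples, not a product API.
OUTPUT_SCHEMA = {
    "seniority": str,
    "remote": bool,
    "skills": list,
}

def validate(raw: str, schema: dict) -> dict:
    """Parse the model's raw text and check it against the schema.

    Raises ValueError if the output is missing fields or mistyped,
    so a bad AI response fails loudly instead of corrupting the
    merged job downstream.
    """
    data = json.loads(raw)  # raises on non-JSON output
    for field, expected_type in schema.items():
        if field not in data:
            raise ValueError(f"missing field: {field}")
        if not isinstance(data[field], expected_type):
            raise ValueError(f"wrong type for {field}")
    return data

# A well-formed AI response passes the check:
ok = validate('{"seniority": "senior", "remote": true, "skills": ["python"]}',
              OUTPUT_SCHEMA)
```

With a schema in place, a malformed or truncated model response is rejected before it reaches the merge step.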
Prefer Liquid for deterministic transforms
Use Liquid nodes for field remapping, date formatting, and filtering; they're
faster, cheaper, and fully deterministic, unlike AI nodes.
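For intuition, the kind of remapping and date formatting a Liquid node handles can be sketched in plain Python. The field names below are hypothetical, and in the product you would express this declaratively in a Liquid template rather than code:

```python
from datetime import datetime

def remap_job(job: dict) -> dict:
    """Deterministic transform: the same input always yields the same output.

    Remaps field names, reformats a date, and keeps only the fields the
    destination feed expects (all names here are illustrative).
    """
    posted = datetime.fromisoformat(job["posted_at"])
    return {
        "title": job["job_title"].strip(),
        "location": job.get("city", "Remote"),
        "posted": posted.strftime("%Y-%m-%d"),
    }

job = {"job_title": "  Data Engineer ", "city": "Berlin",
       "posted_at": "2024-06-01T09:30:00+00:00"}
out = remap_job(job)
```

Because nothing here depends on a model, the transform costs nothing, runs in microseconds, and is trivially repeatable.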
Performance & Latency
Understanding the latency profile of each node type helps you design efficient pipelines:

| Node Type | Latency | Cost | Deterministic |
|---|---|---|---|
| Liquid Template | < 1ms (instant) | Free | Yes |
| JSON Merge | < 1ms (instant) | Free | Yes |
| AI (OpenRouter) | 1–5 seconds per job | Uses your OpenRouter credits | Depends on temperature |
AI Timeout & Retry Behavior
- AI nodes have a configurable timeout (default: 30 seconds)
- If the model doesn’t respond in time, the node is retried up to 3 times with exponential backoff
- If all retries fail, the job is skipped for that sync cycle and retried on the next cycle
- Retries are automatic; no configuration is needed beyond the optional timeout
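The retry behavior above can be sketched as follows; the attempt count and delays mirror the documented defaults, and the `call_model` stub stands in for the real OpenRouter request:

```python
import time

def run_with_retries(call_model, max_attempts=3, base_delay=1.0):
    """Retry a flaky call up to max_attempts times with exponential backoff.

    Returns the model output on success; returns None (job skipped until
    the next sync cycle) if every attempt times out.
    """
    for attempt in range(max_attempts):
        try:
            return call_model()
        except TimeoutError:
            if attempt < max_attempts - 1:
                # 1s, 2s, 4s, ... between attempts (illustrative delays)
                time.sleep(base_delay * 2 ** attempt)
    return None

# Stub that times out twice, then succeeds on the third attempt.
attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise TimeoutError
    return {"ok": True}

result = run_with_retries(flaky, base_delay=0)  # base_delay=0 keeps the demo fast
```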
Pipeline Size Recommendations
There is no hard limit on pipeline size, but we recommend keeping pipelines
under 10 nodes for maintainability. Complex logic can often be simplified with
better Liquid templates or more focused AI prompts.
AI Node Tips
- Set `temperature` to `0.1–0.3` for structured extraction (maximizes consistency and idempotency)
- Set `temperature` to `0.7–1.0` for creative content generation (SEO descriptions, rewrites)
- Keep prompts specific: ask the AI to extract or transform specific fields rather than processing the entire job object
- Include examples in the system prompt to improve output quality
- Use the smallest/fastest model that produces acceptable quality (e.g., `gemini-2.0-flash` for simple extraction)
- Always include “respond with valid JSON” in the system prompt
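Putting these tips together, an extraction-style request might look like the sketch below. The prompt wording, example, and payload shape are illustrative assumptions, not product defaults:

```python
# Illustrative request payload for an extraction-style AI node.
system_prompt = (
    "Extract the seniority level and required skills from the job posting. "
    "Always respond with valid JSON.\n"
    # A short few-shot example improves output quality:
    "Example input: 'Senior Python developer, 5+ years'\n"
    'Example output: {"seniority": "senior", "skills": ["python"]}'
)

request = {
    "model": "gemini-2.0-flash",  # small/fast model for simple extraction
    "temperature": 0.2,           # low temperature: consistent, idempotent output
    "messages": [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Job: Junior Go engineer, Kubernetes a plus"},
    ],
}
```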
JSON Merge Tips
- Use Deep merge when enriching nested objects (skills, requirements, metadata)
- Use Shallow merge when adding top-level fields only
- Use Patch Wins when AI output should override original data
- Use Base Wins when original data should take precedence (e.g., preserving human-curated fields)
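The strategies above can be sketched roughly as follows; this is an illustration of the semantics, not the product's actual implementation:

```python
def shallow_merge(base: dict, patch: dict, patch_wins=True) -> dict:
    """Combine top-level fields only; nested objects are replaced wholesale."""
    return {**base, **patch} if patch_wins else {**patch, **base}

def deep_merge(base: dict, patch: dict) -> dict:
    """Recursively merge nested dicts; patch wins on conflicting scalars."""
    out = dict(base)
    for key, value in patch.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = deep_merge(out[key], value)
        else:
            out[key] = value
    return out

base = {"title": "Engineer", "meta": {"source": "feed", "lang": "en"}}
patch = {"meta": {"skills": ["go"]}}

shallow = shallow_merge(base, patch)  # meta is replaced entirely
deep = deep_merge(base, patch)        # meta is enriched; source/lang survive
```

Note how a shallow merge drops `source` and `lang` when the AI patch touches `meta`, which is exactly why deep merge is the safer choice for enriching nested objects.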
Common Patterns
Branch and Merge
The most common pattern: send Input to both an AI node and a JSON Merge node. The AI extracts new fields, and JSON Merge combines them with the original data.
Prepare → Transform → Merge
Use a Liquid node to prepare a focused context for the AI, reducing token usage and improving quality.
Parallel AI Extraction
Run multiple AI nodes in parallel for different extraction tasks, then merge all results.
FAQ
Can I chain multiple AI nodes?
Yes. Each AI node receives the output of its upstream node. This is useful
for multi-step extraction (e.g., extract → classify → summarize). You can
also run AI nodes in parallel on independent branches.
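Multi-step extraction can be pictured as a simple function chain, each stage consuming the previous stage's output. The stages below are stubs standing in for three chained AI nodes:

```python
# Stub stages standing in for chained AI nodes (fields are illustrative).
def extract(job: dict) -> dict:
    return {**job, "skills": ["python", "sql"]}

def classify(job: dict) -> dict:
    return {**job, "category": "data-engineering"}

def summarize(job: dict) -> dict:
    return {**job, "summary": f"{job['title']} ({job['category']})"}

job = {"title": "Data Engineer"}
result = job
for stage in (extract, classify, summarize):  # each node sees upstream output
    result = stage(result)
```

Order matters: `summarize` can only reference `category` because `classify` ran upstream of it.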
What happens if the AI model is slow or times out?
AI nodes have a configurable timeout. If the model doesn’t respond in time,
the node is retried up to 3 times with exponential backoff. If all retries
fail, the job is skipped for that sync cycle and retried on the next cycle.
Do pipelines add latency to my feed sync?
Liquid and JSON Merge nodes are instant (sub-millisecond). AI nodes add
latency proportional to the model’s response time — typically 1–5 seconds
per job. Jobs are processed in parallel batches to maximize throughput.
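The parallel batching mentioned above can be sketched with a thread pool; the worker count, batch size, and per-job work are illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

def process_job(job: dict) -> dict:
    # Stand-in for one pipeline run (the AI call dominates latency).
    return {**job, "enriched": True}

jobs = [{"id": i} for i in range(8)]

# Processing jobs in parallel batches means total wall time approaches the
# slowest single job rather than the sum of all jobs.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(process_job, jobs))
```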
Can I use pipelines without an Outbound Feed?
Pipelines are designed to work exclusively with Outbound Feeds. However, you
can test them standalone using the test panel in the Pipeline Editor by
loading a real job from your data.
Is there a limit on the number of nodes?
There’s no hard limit, but we recommend keeping pipelines under 10 nodes for
maintainability. Complex logic can often be simplified with better Liquid
templates or more focused AI prompts.
How are pipelines billed?
Pipelines are included with your subscription at no extra cost. AI nodes use
your own OpenRouter API key and credits — you pay OpenRouter directly for
LLM usage.
Can I version my pipelines?
Not yet. We recommend exporting your configuration (via the test panel JSON
output) before making changes, so you can revert if needed.
One pipeline per feed — can I work around this?
Each outbound feed supports one pipeline. To apply different
transformations, create separate feeds pointed at the same or different
destinations, each with its own pipeline.

