
Design Principles

1. Start simple

Begin with a basic Input → Output flow. Add transformation nodes one at a time and test after each addition.
2. Always test with real data

Use the test panel to load a real job from your feed and verify each node’s output before attaching to a live feed.
3. Preserve original data with JSON Merge

Never send the whole job through AI and use the raw output as the final result. Instead, branch from Input to both AI and Merge, so original fields are always preserved.
4. Use output schemas for AI nodes

Always define output schemas for AI nodes to ensure consistent, parseable results. This makes downstream merging and processing reliable.
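As an illustration, a schema for a hypothetical extraction node could pin down every field the AI must return. A minimal Python sketch (the field names here are invented for the example, not a required format):

```python
# Hypothetical output schema for an AI extraction node. Declaring the exact
# shape up front keeps AI output parseable and safe to merge downstream.
OUTPUT_SCHEMA = {
    "type": "object",
    "required": ["seniority", "skills", "remote"],
    "properties": {
        "seniority": {"type": "string", "enum": ["junior", "mid", "senior"]},
        "skills": {"type": "array", "items": {"type": "string"}},
        "remote": {"type": "boolean"},
    },
}

def conforms(output: dict) -> bool:
    """Minimal structural check: every required key must be present."""
    return all(key in output for key in OUTPUT_SCHEMA["required"])

print(conforms({"seniority": "senior", "skills": ["python"], "remote": True}))  # True
print(conforms({"skills": ["python"]}))  # False: missing required fields
```

A full validator would also check types and enums; the point is that a fixed, declared shape turns free-form AI text into data the merge step can rely on.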
5. Prefer Liquid for deterministic transforms

Use Liquid nodes for field remapping, date formatting, and filtering: they’re faster and cheaper than AI, and fully deterministic.
6. Start from templates

The template picker offers pre-built pipelines for common use cases. Start there and customize rather than building from scratch.

Performance & Latency

Understanding the latency profile of each node type helps you design efficient pipelines:
| Node Type | Latency | Cost | Deterministic |
| --- | --- | --- | --- |
| Liquid Template | < 1ms (instant) | Free | Yes |
| JSON Merge | < 1ms (instant) | Free | Yes |
| AI (OpenRouter) | 1–5 seconds per job | Uses your OpenRouter credits | Depends on temperature |
AI nodes are the dominant source of both latency and cost. Each AI node adds 1–5 seconds per job depending on the model and prompt complexity. Jobs are processed in parallel batches to mitigate this, but pipelines with multiple AI nodes will be significantly slower.

AI Timeout & Retry Behavior

  • AI nodes have a configurable timeout (default: 30 seconds)
  • If the model doesn’t respond in time, the node is retried up to 3 times with exponential backoff
  • If all retries fail, the job is skipped for that sync cycle and retried on the next cycle
  • Retries and backoff happen automatically; only the timeout is configurable
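The retry loop described above can be sketched as follows (function and parameter names are illustrative; in the product, the timeout is enforced by the platform rather than user code):

```python
import time

def call_with_retries(call, max_retries=3, timeout=30, base_delay=1.0):
    """Sketch of the documented behavior: one initial attempt, then up to
    3 retries with exponential backoff. On total failure the job is skipped
    for this sync cycle (signaled here by returning None) and retried on
    the next cycle. `call` stands in for the model request."""
    for attempt in range(max_retries + 1):
        try:
            return call(timeout=timeout)
        except TimeoutError:
            if attempt == max_retries:
                return None  # skip job this cycle; retried next cycle
            time.sleep(base_delay * (2 ** attempt))  # back off: 1s, 2s, 4s, ...
```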

Pipeline Size Recommendations

Keep pipelines under 10 nodes. While there’s no hard limit, complex pipelines are harder to debug and maintain. If you need more than 10 nodes, consider whether your logic can be simplified with better Liquid templates or more focused AI prompts.

AI Node Tips

  • Set temperature to 0.1–0.3 for structured extraction (maximizes consistency and idempotency)
  • Set temperature to 0.7–1.0 for creative content generation (SEO descriptions, rewrites)
  • Keep prompts specific — ask the AI to extract or transform specific fields rather than processing the entire job object
  • Include examples in the system prompt to improve output quality
  • Use the smallest/fastest model that produces acceptable quality (e.g., gemini-2.0-flash for simple extraction)
  • Always include “respond with valid JSON” in the system prompt

JSON Merge Tips

  • Use Deep merge when enriching nested objects (skills, requirements, metadata)
  • Use Shallow merge when adding top-level fields only
  • Use Patch Wins when AI output should override original data
  • Use Base Wins when original data should take precedence (e.g., preserving human-curated fields)
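The difference between the merge modes can be sketched in Python. This illustrates typical deep/shallow semantics with patch-wins conflict resolution, not necessarily the product’s exact implementation:

```python
def shallow_merge(base, patch):
    """Shallow, patch wins: top-level keys from patch replace base keys."""
    return {**base, **patch}

def deep_merge(base, patch):
    """Deep, patch wins: recurse into nested dicts instead of replacing them."""
    merged = dict(base)
    for key, value in patch.items():
        if isinstance(merged.get(key), dict) and isinstance(value, dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

base = {"title": "Data Engineer", "metadata": {"source": "feed", "id": 7}}
patch = {"metadata": {"remote": True}}

print(shallow_merge(base, patch)["metadata"])  # {'remote': True} — nested fields lost
print(deep_merge(base, patch)["metadata"])     # {'source': 'feed', 'id': 7, 'remote': True}
```

This is why Deep merge is the safer choice when AI output enriches nested objects: a shallow merge would silently drop the original nested fields.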

Common Patterns

Branch and Merge

The most common pattern — send Input to both an AI node and a JSON Merge node. The AI extracts new fields, and JSON Merge combines them with the original data:
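A minimal Python sketch of this flow, with a stubbed function standing in for the AI node:

```python
def ai_extract(job):
    """Stand-in for the AI node; the real pipeline calls the model here."""
    return {"seniority": "senior", "remote": True}

def branch_and_merge(job):
    # Branch 1: AI extracts new fields from the job.
    extracted = ai_extract(job)
    # Branch 2: the original job reaches Merge untouched, so nothing is lost.
    return {**job, **extracted}  # shallow merge, AI output (patch) wins

job = {"title": "Senior Data Engineer", "company": "Acme"}
result = branch_and_merge(job)  # original fields preserved, new fields added
```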

Prepare → Transform → Merge

Use a Liquid node to prepare a focused context for AI, reducing token usage and improving quality:
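In Python terms, the prepare step trims the job down to just the fields the AI needs before the model is called (field names and the truncation length are illustrative):

```python
def prepare_context(job):
    """Liquid-style prep step, sketched in Python: forward only the fields
    the AI needs, truncating long text to cut token usage."""
    return {"title": job["title"], "description": job["description"][:500]}

def ai_classify(context):
    """Stand-in for the AI node."""
    return {"category": "engineering"}

def run(job):
    context = prepare_context(job)   # Prepare (Liquid)
    extracted = ai_classify(context) # Transform (AI) sees a focused context
    return {**job, **extracted}      # Merge: original job + new fields
```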

Parallel AI Extraction

Run multiple AI nodes in parallel for different extraction tasks, then merge all results:
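A Python sketch of the idea, using threads to overlap the (stubbed) AI calls; since each real AI call takes 1–5 seconds, running independent branches concurrently cuts wall-clock latency:

```python
from concurrent.futures import ThreadPoolExecutor

def extract_skills(job):     # stand-in for AI node 1
    return {"skills": ["python", "sql"]}

def extract_seniority(job):  # stand-in for AI node 2
    return {"seniority": "mid"}

def parallel_extract(job, extractors):
    """Run independent extraction branches concurrently, then merge every
    branch's output back into the original job."""
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda fn: fn(job), extractors)
    merged = dict(job)
    for partial in results:
        merged.update(partial)  # each branch contributes its own fields
    return merged

job = {"title": "Data Engineer"}
enriched = parallel_extract(job, [extract_skills, extract_seniority])
```

This works cleanly when the branches extract disjoint fields; if two branches could emit the same key, the later one in the list wins under this merge order.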

FAQ

Can I chain multiple AI nodes?

Yes. Each AI node receives the output of its upstream node. This is useful for multi-step extraction (e.g., extract → classify → summarize). You can also run AI nodes in parallel on independent branches.

What happens if an AI node times out?

AI nodes have a configurable timeout. If the model doesn’t respond in time, the node is retried up to 3 times with exponential backoff. If all retries fail, the job is skipped for that sync cycle and retried on the next cycle.

How much latency does a pipeline add?

Liquid and JSON Merge nodes are instant (sub-millisecond). AI nodes add latency proportional to the model’s response time — typically 1–5 seconds per job. Jobs are processed in parallel batches to maximize throughput.

Can pipelines be used without an Outbound Feed?

Pipelines are designed to work exclusively with Outbound Feeds. However, you can test them standalone using the test panel in the Pipeline Editor by loading a real job from your data.

How many nodes can a pipeline have?

There’s no hard limit, but we recommend keeping pipelines under 10 nodes for maintainability. Complex logic can often be simplified with better Liquid templates or more focused AI prompts.

Do pipelines cost extra?

Pipelines are included with your subscription at no extra cost. AI nodes use your own OpenRouter API key and credits — you pay OpenRouter directly for LLM usage.

Is there version history or rollback for pipelines?

Not yet. We recommend exporting your configuration (via the test panel JSON output) before making changes, so you can revert if needed.

Can one feed have multiple pipelines?

Each outbound feed supports one pipeline. To apply different transformations, create separate feeds pointed at the same or different destinations, each with its own pipeline.