Topological Execution
Pipelines execute nodes in topological order using Kahn's algorithm: each node runs only after all of its upstream dependencies have completed. The pipeline engine analyzes the DAG structure and determines a valid execution order automatically.
In this example:
- Input runs first (no dependencies)
- Liquid and AI run in parallel (both depend only on Input)
- JSON Merge runs after both Liquid and AI complete (depends on both)
- Output runs last (depends on JSON Merge)
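The ordering above can be sketched with Kahn's algorithm. This is a minimal, self-contained Python illustration, not the engine's actual implementation; the `nodes`/`edges` shapes are assumptions for the example.

```python
from collections import deque

def topological_order(nodes, edges):
    """Return a valid execution order for a DAG using Kahn's algorithm.

    `nodes` is a list of node names; `edges` maps each node to its
    downstream dependents (hypothetical shapes, for illustration).
    """
    indegree = {n: 0 for n in nodes}
    for downstreams in edges.values():
        for d in downstreams:
            indegree[d] += 1

    # Nodes with no upstream dependencies are ready immediately.
    ready = deque(n for n in nodes if indegree[n] == 0)
    order = []
    while ready:
        node = ready.popleft()
        order.append(node)
        for d in edges.get(node, ()):
            indegree[d] -= 1
            if indegree[d] == 0:
                ready.append(d)

    if len(order) != len(nodes):
        raise ValueError("cycle detected: not a DAG")
    return order

# The example DAG from this section:
nodes = ["Input", "Liquid", "AI", "JSON Merge", "Output"]
edges = {
    "Input": ["Liquid", "AI"],
    "Liquid": ["JSON Merge"],
    "AI": ["JSON Merge"],
    "JSON Merge": ["Output"],
}
print(topological_order(nodes, edges))
# → ['Input', 'Liquid', 'AI', 'JSON Merge', 'Output']
```

Note that Liquid and AI could appear in either order; both are valid because neither depends on the other.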
Per-Job Execution
Each job in the feed runs through the pipeline independently. This means:
- A failure on one job doesn’t affect processing of other jobs
- Jobs can be processed concurrently across the pipeline
- Each job gets its own execution context and node results
- The pipeline processes jobs in batches matching the feed batch size
All jobs in a batch pass through the entire pipeline before the next batch begins.
This ensures consistent results and makes debugging easier.
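The per-job isolation described above can be sketched as a batched loop where each job's failure is recorded without affecting the rest. This is an illustrative sketch, assuming a `pipeline(job)` callable that returns the transformed result; the field names are hypothetical.

```python
def process_feed(jobs, pipeline, batch_size):
    """Run every job through the pipeline, one batch at a time.

    Sketch only: `pipeline(job)` is assumed to return the job's
    transformed output, and each job gets its own try/except so one
    failure never stops the others.
    """
    results = []
    for start in range(0, len(jobs), batch_size):
        batch = jobs[start:start + batch_size]
        for job in batch:  # each job gets its own execution context
            try:
                results.append({"job": job, "status": "success",
                                "output": pipeline(job)})
            except Exception as exc:
                # A failure on one job is recorded, not propagated.
                results.append({"job": job, "status": "failed",
                                "error": str(exc)})
    return results
```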
Parallel Execution
Independent nodes (no shared dependencies) execute in parallel for maximum throughput. The pipeline engine automatically detects parallelizable branches from the DAG structure.
In this flow, the two AI nodes and the Liquid node all run simultaneously since they only depend on Input.
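A minimal sketch of that fan-out, using Python's `concurrent.futures` for illustration (the node names and callables are assumptions, not the engine's API):

```python
from concurrent.futures import ThreadPoolExecutor

def run_parallel(branches, upstream_result):
    """Execute independent branches concurrently.

    `branches` maps node names to callables that each take the shared
    upstream result (hypothetical shapes, for illustration).
    """
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, upstream_result)
                   for name, fn in branches.items()}
        # Collect each branch's result once it finishes.
        return {name: f.result() for name, f in futures.items()}

# The two AI nodes and the Liquid node all depend only on Input,
# so they can run at the same time:
branches = {
    "AI 1": lambda inp: f"ai1({inp})",
    "AI 2": lambda inp: f"ai2({inp})",
    "Liquid": lambda inp: f"liquid({inp})",
}
print(run_parallel(branches, "input"))
# → {'AI 1': 'ai1(input)', 'AI 2': 'ai2(input)', 'Liquid': 'liquid(input)'}
```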
Node-Level Results
Each node produces:
| Output | Description |
|---|---|
| Result JSON | The transformed JSON output from the node |
| Execution time | How long the node took to process |
| Status | success, failed, or skipped |
| Error details | Error message and stack trace (if failed) |
Use the test panel in the Pipeline Editor to inspect individual node results and verify your pipeline step-by-step.
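The per-node record above could be modeled roughly like this; the field names are illustrative, not the engine's actual schema:

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class NodeResult:
    """One node's execution record (field names are hypothetical)."""
    result_json: Optional[Any]   # transformed JSON output; None if failed
    execution_ms: float          # how long the node took to process
    status: str                  # "success", "failed", or "skipped"
    error: Optional[str] = None  # message and stack trace when failed
```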
Error Handling
| Scenario | Behavior |
|---|---|
| Node fails | Downstream nodes of that branch are skipped; parallel branches continue normally |
| AI timeout | Retried up to 3 times with exponential backoff |
| Invalid output schema | Node marked as failed; error logged with details |
| All retries exhausted | Job marked as partial failure and skipped in the destination push |
| Liquid syntax error | Node fails immediately; error includes line/column info |
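The AI-timeout row above (up to 3 retries with exponential backoff) can be sketched as follows; the delay values and function shape are illustrative assumptions:

```python
import time

def call_with_retries(fn, retries=3, base_delay=1.0):
    """Retry a flaky call with exponential backoff.

    Mirrors the policy above: up to `retries` retries, doubling the
    delay between attempts (delays here are illustrative).
    """
    for attempt in range(retries + 1):
        try:
            return fn()
        except TimeoutError:
            if attempt == retries:
                raise  # all retries exhausted; job becomes a partial failure
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```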
When a node fails, the pipeline uses a skip-on-failure strategy: the
failed node’s downstream path is abandoned, but independent parallel branches
continue executing normally. This prevents a single failure from blocking
unrelated transformations.
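The skip-on-failure strategy can be sketched like this: when a node fails, everything reachable from it is marked skipped, while independent branches keep running. A minimal illustration, assuming the same `edges` shape as the topological-order example:

```python
def downstream_of(node, edges):
    """Collect every node reachable from `node` in the DAG."""
    seen = set()
    stack = [node]
    while stack:
        for nxt in edges.get(stack.pop(), ()):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def run_with_skip_on_failure(order, edges, run_node):
    """Execute nodes in topological order with skip-on-failure.

    When a node raises, its entire downstream path is abandoned, but
    parallel branches that don't depend on it continue normally.
    """
    status = {}
    skipped = set()
    for node in order:
        if node in skipped:
            status[node] = "skipped"
            continue
        try:
            run_node(node)
            status[node] = "success"
        except Exception:
            status[node] = "failed"
            skipped |= downstream_of(node, edges)
    return status
```

With the example DAG from this page, a failure in Liquid marks JSON Merge and Output as skipped while AI still completes.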
Idempotent Execution
Running the same job through the same pipeline produces the same output, assuming deterministic AI settings (low temperature). This property is important because:
- Feed syncs may re-process jobs that haven’t changed
- Failed jobs are retried on the next sync cycle
- You can re-run the test panel repeatedly with confidence
Set AI node temperature to 0.1–0.3 for structured extraction to maximize
idempotency. Higher temperatures introduce randomness that may produce
different results on each run.
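One way to verify idempotency in practice is to fingerprint each run's output with a canonical JSON hash and compare fingerprints across re-runs. This is an illustrative sketch, not a built-in feature:

```python
import hashlib
import json

def output_fingerprint(result):
    """Stable hash of a node's JSON output, for comparing re-runs.

    Sorting keys makes the hash insensitive to JSON key order, so two
    semantically identical outputs always match.
    """
    canonical = json.dumps(result, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

# Two runs of an idempotent pipeline yield identical fingerprints,
# even if key order differs:
run_a = {"title": "Backend Engineer", "remote": True}
run_b = {"remote": True, "title": "Backend Engineer"}
assert output_fingerprint(run_a) == output_fingerprint(run_b)
```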
Performance
| Node Type | Latency | Notes |
|---|---|---|
| Liquid Template | Sub-millisecond | Deterministic, no external calls |
| JSON Merge | Sub-millisecond | In-memory operation |
| AI (OpenRouter) | 1–5 seconds per job | Depends on model, prompt length, and output size |
- AI nodes are the primary source of latency — minimize their count when possible
- Jobs are processed in parallel batches to maximize throughput
- AI nodes use streaming for large responses to reduce time-to-first-byte