
Topological Execution

Pipelines execute nodes in topological order using Kahn’s algorithm — each node runs only after all its upstream dependencies have completed. The pipeline engine analyzes the DAG structure and determines the optimal execution order automatically. In this example:
  1. Input runs first (no dependencies)
  2. Liquid and AI run in parallel (both depend only on Input)
  3. JSON Merge runs after both Liquid and AI complete (depends on both)
  4. Output runs last (depends on JSON Merge)
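The ordering above can be sketched with Kahn's algorithm. The following is an illustrative standalone implementation (not the engine's actual code), using the node names from this example:

```python
from collections import deque

def topo_order(deps):
    """Kahn's algorithm. deps maps each node to the set of its upstream dependencies."""
    indegree = {node: len(upstream) for node, upstream in deps.items()}
    downstream = {node: [] for node in deps}
    for node, upstream in deps.items():
        for u in upstream:
            downstream[u].append(node)
    # Start with nodes that have no dependencies.
    ready = deque(node for node, count in indegree.items() if count == 0)
    order = []
    while ready:
        node = ready.popleft()
        order.append(node)
        # Completing a node may unblock its downstream nodes.
        for child in downstream[node]:
            indegree[child] -= 1
            if indegree[child] == 0:
                ready.append(child)
    if len(order) != len(deps):
        raise ValueError("cycle detected in DAG")
    return order

# The example DAG from this page.
deps = {
    "Input": set(),
    "Liquid": {"Input"},
    "AI": {"Input"},
    "JSON Merge": {"Liquid", "AI"},
    "Output": {"JSON Merge"},
}
print(topo_order(deps))
```

Input comes out first, Output last, and JSON Merge only after both Liquid and AI, matching steps 1–4 above.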

Per-Job Execution

Each job in the feed runs through the pipeline independently. This means:
  • A failure on one job doesn’t affect processing of other jobs
  • Jobs can be processed concurrently across the pipeline
  • Each job gets its own execution context and node results
  • The pipeline processes jobs in batches matching the feed batch size
Every job in a batch passes through the entire pipeline before the next batch begins. This keeps results consistent and makes debugging easier.
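A minimal sketch of this per-job isolation, assuming a hypothetical `run_pipeline` function and job shape (neither is the engine's actual API):

```python
def run_pipeline(job):
    # Stand-in for running one job through the full pipeline.
    if job.get("bad"):
        raise ValueError("transform failed")
    return {"id": job["id"], "ok": True}

def process_feed(jobs, batch_size):
    results = []
    for i in range(0, len(jobs), batch_size):
        batch = jobs[i:i + batch_size]
        for job in batch:  # each job gets its own execution context
            try:
                results.append(run_pipeline(job))
            except Exception as e:
                # One failing job does not stop the rest of the batch.
                results.append({"id": job["id"], "ok": False, "error": str(e)})
    return results

jobs = [{"id": 1}, {"id": 2, "bad": True}, {"id": 3}]
print(process_feed(jobs, batch_size=2))
```

Job 2 fails, but jobs 1 and 3 still produce results, which mirrors the isolation described above.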

Parallel Execution

Independent nodes (nodes with no dependency path between them) execute in parallel for maximum throughput. The pipeline engine automatically detects parallelizable branches from the DAG structure. In this flow, the two AI nodes and the Liquid node all run simultaneously since each depends only on Input.
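One way to derive parallelizable branches from the DAG is to group nodes into levels where every member's dependencies are already satisfied; everything within a level can run concurrently. This sketch (with hypothetical node names, not the engine's internals) mirrors the flow described above:

```python
from concurrent.futures import ThreadPoolExecutor

def execution_levels(deps):
    """Group nodes into levels; all nodes in one level can run in parallel."""
    done, levels = set(), []
    remaining = dict(deps)
    while remaining:
        # A node is ready once all of its dependencies are done.
        ready = [n for n, upstream in remaining.items() if upstream <= done]
        if not ready:
            raise ValueError("cycle detected in DAG")
        levels.append(ready)
        done.update(ready)
        for n in ready:
            del remaining[n]
    return levels

deps = {
    "Input": set(),
    "AI-1": {"Input"},
    "AI-2": {"Input"},
    "Liquid": {"Input"},
    "JSON Merge": {"AI-1", "AI-2", "Liquid"},
}
levels = execution_levels(deps)
# The second level (both AI nodes plus Liquid) can run on a pool:
with ThreadPoolExecutor() as pool:
    results = list(pool.map(lambda n: f"{n} done", levels[1]))
print(levels)
```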

Node-Level Results

Each node produces:
| Output | Description |
| --- | --- |
| Result JSON | The transformed JSON output from the node |
| Execution time | How long the node took to process |
| Status | success, failed, or skipped |
| Error details | Error message and stack trace (if failed) |
Use the test panel in the Pipeline Editor to inspect individual node results and verify your pipeline step-by-step.
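These per-node outputs could be modeled as a record like the following sketch. The field names here are illustrative, not the engine's actual schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class NodeResult:
    # Hypothetical shape of a node-level result; names are illustrative.
    node_id: str
    status: str                       # "success", "failed", or "skipped"
    result_json: Optional[dict] = None  # transformed JSON output
    execution_ms: float = 0.0           # how long the node took
    error: Optional[str] = None         # message/trace when status == "failed"

ok = NodeResult("liquid-1", "success", result_json={"title": "Engineer"}, execution_ms=0.4)
bad = NodeResult("ai-1", "failed", error="Invalid output schema")
print(ok.status, bad.status)
```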

Error Handling

| Scenario | Behavior |
| --- | --- |
| Node fails | Downstream nodes of that branch are skipped; parallel branches continue normally |
| AI timeout | Retried up to 3 times with exponential backoff |
| Invalid output schema | Node marked as failed; error logged with details |
| All retries exhausted | Job marked as partial failure and skipped in the destination push |
| Liquid syntax error | Node fails immediately; error includes line/column info |
When a node fails, the pipeline uses a skip-on-failure strategy: the failed node’s downstream path is abandoned, but independent parallel branches continue executing normally. This prevents a single failure from blocking unrelated transformations.
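Skip-on-failure amounts to a transitive-closure computation over the DAG: every node downstream of the failure is skipped, and everything else still runs. An illustrative sketch, using the example DAG from this page:

```python
def downstream_of(failed, deps):
    """All nodes that transitively depend on the failed node (deps maps node -> upstream set)."""
    skipped = set()
    changed = True
    while changed:
        changed = False
        for node, upstream in deps.items():
            # Skip a node if it depends on the failed node or on an already-skipped node.
            if node not in skipped and (failed in upstream or upstream & skipped):
                skipped.add(node)
                changed = True
    return skipped

deps = {
    "Input": set(),
    "Liquid": {"Input"},
    "AI": {"Input"},
    "JSON Merge": {"Liquid", "AI"},
    "Output": {"JSON Merge"},
}
print(downstream_of("AI", deps))
```

If AI fails, JSON Merge and Output are skipped, but Liquid still executes, so the unrelated branch is unaffected.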

Idempotent Execution

Running the same job through the same pipeline always produces the same output — assuming deterministic AI settings (low temperature). This property is important because:
  • Feed syncs may re-process jobs that haven’t changed
  • Failed jobs are retried on the next sync cycle
  • You can re-run the test panel repeatedly with confidence
Set AI node temperature to 0.1–0.3 for structured extraction to maximize idempotency. Higher temperatures introduce randomness that may produce different results on each run.
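A hypothetical AI node configuration reflecting that recommendation might look like this. The field names are illustrative, not the engine's actual schema:

```python
# Hypothetical AI node config sketch -- field names are illustrative.
ai_node_config = {
    "type": "ai",
    "model": "some-model-id",  # placeholder, not a real model slug
    "temperature": 0.2,        # within the 0.1-0.3 range suggested for structured extraction
    "max_retries": 3,          # matches the retry behavior in the error-handling table
}
print(ai_node_config["temperature"])
```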

Performance Characteristics

| Node Type | Latency | Notes |
| --- | --- | --- |
| Liquid Template | Sub-millisecond | Deterministic, no external calls |
| JSON Merge | Sub-millisecond | In-memory operation |
| AI (OpenRouter) | 1–5 seconds per job | Depends on model, prompt length, and output size |
  • AI nodes are the primary source of latency — minimize their count when possible
  • Jobs are processed in parallel batches to maximize throughput
  • AI nodes use streaming for large responses to reduce time-to-first-byte
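A back-of-envelope estimate shows why AI nodes dominate latency and why parallel branches help. The numbers below are illustrative midpoints drawn from the table above, not measurements:

```python
# Rough per-job latency estimate (illustrative numbers, in milliseconds).
LIQUID_MS = 0.5
MERGE_MS = 0.5
AI_MS = 3000  # midpoint of the 1-5 s range from the table

# Two AI nodes chained serially vs. running as parallel branches:
serial = 2 * AI_MS + LIQUID_MS + MERGE_MS       # AI -> AI -> Liquid -> Merge
parallel = max(AI_MS, LIQUID_MS) + MERGE_MS     # branches overlap, merge waits for slowest
print(serial, parallel)
```

Parallel branches roughly halve the estimate here, and the sub-millisecond Liquid and Merge steps are negligible next to the AI calls.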