
Input Node

The entry point for data. Receives raw job JSON from the Outbound Feed sync engine. Every pipeline starts with exactly one Input node.
| Property | Type | Description |
|---|---|---|
| `source` | enum | `jobs_feed`, `webhook`, `manual` |
| `filters` | object | Optional filters (same as feed filters) |

Liquid Template

Reshape and remap JSON fields using Liquid template syntax. The Liquid node takes the output of its parent node as input and renders a Liquid template against it. Your template must output valid JSON.
| Property | Type | Required | Description |
|---|---|---|---|
| `template` | string | Yes | Liquid template string that outputs JSON |

Available Variables

The input data is available on the `data` variable; access nested fields with dot notation:
| Variable | Type | Description |
|---|---|---|
| `data` | object | The full JSON object from the parent node. Access nested fields with dot notation: `data.company.name` |
| `data.title` | string | Job title |
| `data.description` | string | Full job description (HTML) |
| `data.company` | object | Company object with `name`, `url`, `logo` |
| `data.company.name` | string | Company name |
| `data.company.url` | string | Company website URL |
| `data.company.logo` | string | Company logo URL |
| `data.location` | object | Location object with `city`, `state`, `country` |
| `data.apply_url` | string | Application URL |
| `data.published_at` | string | Publication date |
| `data.tags` | array | Array of tag strings |

Available Filters

All standard Shopify Liquid filters are supported, plus these custom filters:
| Filter | Description | Example |
|---|---|---|
| `slugify` | Converts a string to a URL-safe slug | `{{ data.title \| slugify }}` → `software-engineer` |
| `truncate_words` | Truncates to a given number of words | `{{ data.description \| truncate_words: 50 }}` |
| `json_escape` | Escapes a string for safe JSON embedding | `{{ data.description \| json_escape }}` |
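The custom filters can be approximated in plain Python. This is an illustrative sketch based only on the descriptions above; the platform's actual implementations may differ in edge cases (Unicode handling, consecutive separators, ellipsis on truncation):

```python
import json
import re


def slugify(value: str) -> str:
    """Lowercase the string and collapse non-alphanumeric runs into hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", value.lower()).strip("-")


def truncate_words(value: str, count: int) -> str:
    """Keep only the first `count` whitespace-separated words."""
    return " ".join(value.split()[:count])


def json_escape(value: str) -> str:
    """Escape quotes, backslashes, and control characters for JSON embedding."""
    return json.dumps(value)[1:-1]  # serialize, then strip the surrounding quotes


print(slugify("Software Engineer"))            # software-engineer
print(truncate_words("one two three four", 3)) # one two three
```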

Example: Remap Fields

```liquid
{
  "jobTitle": "{{ data.title }}",
  "employer": "{{ data.company.name }}",
  "location": "{{ data.location.city }}, {{ data.location.state }}",
  "applyUrl": "{{ data.apply_url }}",
  "postedAt": "{{ data.published_at | date: '%Y-%m-%d' }}"
}
```

Example: SEO Slug Generation

```liquid
{
  "seo_title": "{{ data.title }} at {{ data.company.name }} - {{ data.location.city }}",
  "slug": "{{ data.title | slugify }}-{{ data.company.name | slugify }}-{{ data.id | slice: 0, 8 }}",
  "is_engineering": {% if data.tags contains 'engineering' %}true{% else %}false{% endif %}
}
```

AI (OpenRouter)

Use LLM intelligence to extract, classify, enrich, or rewrite job data. The AI node sends data to any model available on OpenRouter (GPT-4o, Claude, Gemini, Llama, etc.). Configure a system prompt for behavior and a user prompt with the `{{input}}` placeholder for the data.

Configuration

| Property | Type | Required | Default | Description |
|---|---|---|---|---|
| `model` | string | Yes | — | OpenRouter model ID (e.g., `google/gemini-2.0-flash-001`, `openai/gpt-4o-mini`, `anthropic/claude-3.5-sonnet`) |
| `systemPrompt` | string | Yes | — | System instructions for the AI. Should include "respond with valid JSON" for structured output. |
| `userPrompt` | string | Yes | — | The prompt template. Use `{{input}}` to inject the parent node's output as context. |
| `outputSchema` | object | No | — | Optional JSON Schema to constrain the AI's output structure. Strongly recommended for reliability. |
| `temperature` | number | No | 0.3 | LLM temperature (0–2). Lower = more consistent, higher = more creative. |

The `{{input}}` placeholder in the user prompt is replaced with the full JSON output from the parent node. This is the primary way to pass data to the AI model.
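Conceptually, the node substitutes the parent's serialized JSON into the placeholder and assembles a chat-style message list. A minimal sketch of that behavior (the `build_messages` helper is illustrative, not part of the platform's API):

```python
import json


def build_messages(system_prompt: str, user_prompt: str, parent_output: dict) -> list:
    """Replace {{input}} with the parent node's JSON and build the message list."""
    rendered = user_prompt.replace("{{input}}", json.dumps(parent_output, indent=2))
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": rendered},
    ]


msgs = build_messages(
    "You are a job data extraction assistant. Always respond with valid JSON.",
    "Extract skills and seniority from this job posting:\n\n{{input}}",
    {"title": "Software Engineer"},
)
```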

Supported Models

Any model available on OpenRouter can be used. Popular choices:
| Model | Best For |
|---|---|
| `google/gemini-2.0-flash-001` | Fast, cost-effective structured extraction |
| `openai/gpt-4o-mini` | Balanced quality and speed |
| `openai/gpt-4o` | Highest quality extraction and generation |
| `anthropic/claude-3.5-sonnet` | Complex reasoning and long descriptions |

Example: Extract Skills

System Prompt:

```
You are a job data extraction assistant. Always respond with valid JSON.
Extract technical skills, soft skills, and seniority level from the job posting.
```

User Prompt:

```
Extract skills and seniority from this job posting:

{{input}}

Respond with:
{
  "technical_skills": ["skill1", "skill2"],
  "soft_skills": ["skill1"],
  "seniority": "senior" | "mid" | "junior" | "lead"
}
```

Full Configuration Example

```json
{
  "model": "google/gemini-2.0-flash-001",
  "systemPrompt": "You are a job data analyst. Extract structured requirements from job descriptions. Always respond with valid JSON.",
  "userPrompt": "Extract requirements and responsibilities from this job:\n\n{{input}}",
  "outputSchema": {
    "type": "object",
    "properties": {
      "requirements": {
        "type": "object",
        "properties": {
          "must_have": { "type": "array", "items": { "type": "string" } },
          "preferred": { "type": "array", "items": { "type": "string" } }
        }
      },
      "responsibilities": {
        "type": "array",
        "items": { "type": "string" }
      }
    }
  },
  "temperature": 0.2
}
```
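A node config like this maps naturally onto OpenRouter's OpenAI-compatible chat-completions request body. The sketch below shows one plausible translation; the `response_format`/`json_schema` shape for constraining output is an assumption (support varies by model), and `to_openrouter_payload` is a hypothetical helper, not platform API:

```python
import json


def to_openrouter_payload(config: dict, parent_output: dict) -> dict:
    """Build a chat-completions request body from an AI-node config.

    Assumes {{input}} substitution as described above; the response_format
    block is a sketch of schema-constrained output, not guaranteed by all models.
    """
    user = config["userPrompt"].replace("{{input}}", json.dumps(parent_output))
    payload = {
        "model": config["model"],
        "messages": [
            {"role": "system", "content": config["systemPrompt"]},
            {"role": "user", "content": user},
        ],
        "temperature": config.get("temperature", 0.3),
    }
    if "outputSchema" in config:
        payload["response_format"] = {
            "type": "json_schema",
            "json_schema": {"name": "output", "schema": config["outputSchema"]},
        }
    return payload


cfg = {
    "model": "google/gemini-2.0-flash-001",
    "systemPrompt": "Always respond with valid JSON.",
    "userPrompt": "Extract requirements from this job:\n\n{{input}}",
    "temperature": 0.2,
}
payload = to_openrouter_payload(cfg, {"title": "Software Engineer"})
```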

JSON Merge

Combine two JSON inputs into a single unified object. The JSON Merge node has two input handles: base (original data) and patch (new data, typically AI output). This is essential for preserving original job data while adding AI-generated fields.

Configuration

| Property | Type | Required | Default | Description |
|---|---|---|---|---|
| `strategy` | enum | No | `Deep` | `Deep` — recursively merges nested objects. `Shallow` — only merges top-level keys. |
| `conflictResolution` | enum | No | `PatchWins` | `PatchWins` — patch values override base on conflict. `BaseWins` — base values are preserved on conflict. |

Deep Merge Example

```json
{
  "title": "Software Engineer",
  "company": { "name": "Acme" },
  "location": "San Francisco"
}
```

Nested Deep Merge Example

When both base and patch have nested objects, deep merge recursively combines them:
```json
{
  "title": "SWE",
  "skills": { "hard": ["Python"] }
}
```
Use `Deep` + `PatchWins` (the defaults) in most cases. This preserves all original fields while letting AI-generated data override specific nested values.
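The merge semantics described above can be sketched in a few lines of Python. This is a minimal model of the documented behavior (the node's actual implementation may handle arrays and type mismatches differently):

```python
def merge(base: dict, patch: dict, strategy: str = "Deep",
          conflict: str = "PatchWins") -> dict:
    """Combine base and patch per the JSON Merge node's strategy settings."""
    if strategy == "Shallow":
        # Only top-level keys are merged; whole values win or lose as units.
        return {**base, **patch} if conflict == "PatchWins" else {**patch, **base}
    out = dict(base)
    for key, patch_val in patch.items():
        base_val = out.get(key)
        if isinstance(base_val, dict) and isinstance(patch_val, dict):
            out[key] = merge(base_val, patch_val, strategy, conflict)  # recurse
        elif key not in out or conflict == "PatchWins":
            out[key] = patch_val
    return out


base = {"title": "SWE", "skills": {"hard": ["Python"]}}
patch = {"title": "Software Engineer", "skills": {"soft": ["Communication"]}}
merged = merge(base, patch)
# Deep + PatchWins: nested "skills" objects are combined, patch title wins.
```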

Output Node

The exit point for processed data. The final JSON from this node is what gets pushed to your destination. Every pipeline ends with exactly one Output node.
| Property | Type | Description |
|---|---|---|
| `destination` | enum | `database`, `webhook`, `file` |
| `config` | object | Destination-specific configuration |