
AI nodes run a language model prompt during journey execution and return structured output. The response merges into the journey context, making it available to downstream branch conditions, send templates, and other nodes. Use AI nodes to generate personalized notification copy, classify users based on behavior data, score engagement signals, or enrich profiles with structured insights that drive downstream routing.

Configuration

Click the AI node on the canvas to open the summary panel, then click Configure to open the full configuration drawer.

Model

Select the LLM to use for this node. Available models:
| Provider | Model | Credits per invocation |
| --- | --- | --- |
| OpenAI | GPT-5.5 | 6 |
| OpenAI | GPT-5.4 | 3 |
| OpenAI | GPT-5.4 Mini | 1 |
| OpenAI | GPT-5.4 Nano | 0.25 |
| OpenAI | GPT-5 Nano | 0.1 |
| Anthropic | Claude Opus 4.7 | 5 |
| Anthropic | Claude Opus 4.6 | 5 |
| Anthropic | Claude Sonnet 4.6 | 3 |
| Anthropic | Claude Haiku 4.5 | 1 |
These are base costs per invocation. Output tokens beyond 1,000 and input tokens beyond ~3,000 (roughly 8,000 characters) are billed as overage at provider rates plus a small margin.
Start with a lower-cost model (Haiku or GPT-5.4 Mini) and upgrade only if the output quality is insufficient for your use case. For most classification and extraction tasks, smaller models perform well.
Toggle web search to let the model query the internet for real-time information before responding. This is available for Anthropic (Claude) models only. Each invocation with web search enabled costs an additional 2 credits. Use web search when the prompt needs current information that isn’t in the journey context; for example, looking up a company’s latest funding round before generating outreach copy.

Prompt

Write a natural-language prompt describing what you want the model to do. The prompt field supports {{variable}} interpolation; type {{ to see available fields from the trigger schema, profile, or upstream fetch responses. For the onboarding nudge example below, the prompt might be:
You are a product growth assistant. Based on this user's profile and activity, write a personalized onboarding nudge that encourages them to try the next feature most relevant to their use case.

Rules:
- subject_line should be under 60 characters, personalized, and action-oriented
- body_copy should be 1-2 sentences, friendly, and reference what they've already done
- recommended_feature must be exactly one of: templates, automations, integrations, analytics
- tone must be exactly one of: encouraging, celebratory, urgent

User: {{data.user_name}}
Plan: {{data.plan_name}}
Features used so far: {{data.features_used}}
Days since signup: {{data.days_since_signup}}
A character counter in the top-right of the drawer shows prompt + schema size in real time. Keeping your prompt concise reduces input token usage and avoids overage charges.
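For reference, a trigger payload shaped roughly like the one below (the values are illustrative, and this assumes the trigger fields are exposed under data) would resolve the {{data.*}} variables in the prompt above:

```json
{
  "data": {
    "user_name": "Jordan",
    "plan_name": "Pro",
    "features_used": ["templates", "integrations"],
    "days_since_signup": 3
  }
}
```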

Output Schema

The output schema defines the structure of the model’s response. You can define it in two modes:
  • Form mode (default): Add fields with a name and type (string, number, or boolean). Optionally add a description to guide the model on what each field should contain.
  • JSON mode: Write a JSON Schema directly. Use this for more complex schemas or when pasting from an existing definition.
For the onboarding nudge example, the form-mode schema would be:
| Field | Type | Description |
| --- | --- | --- |
| subject_line | string | Personalized email subject under 60 characters |
| body_copy | string | 1-2 sentence nudge referencing the user’s activity |
| recommended_feature | string | One of: templates, automations, integrations, analytics |
| tone | string | One of: encouraging, celebratory, urgent |
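In JSON mode, a roughly equivalent schema can be written as JSON Schema directly. The snippet below is a sketch of that equivalent; the exact wrapper Courier expects around the schema may differ:

```json
{
  "type": "object",
  "properties": {
    "subject_line": {
      "type": "string",
      "description": "Personalized email subject under 60 characters"
    },
    "body_copy": {
      "type": "string",
      "description": "1-2 sentence nudge referencing the user's activity"
    },
    "recommended_feature": {
      "type": "string",
      "enum": ["templates", "automations", "integrations", "analytics"]
    },
    "tone": {
      "type": "string",
      "enum": ["encouraging", "celebratory", "urgent"]
    }
  },
  "required": ["subject_line", "body_copy", "recommended_feature", "tone"]
}
```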
The model’s response is parsed as JSON and merged into the journey context. Downstream nodes reference the fields the same way they reference trigger or fetch data, for example {{subject_line}} in a send template or data.tone in a branch condition.
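For the onboarding nudge, a parsed response might look like this (the field values are illustrative):

```json
{
  "subject_line": "Jordan, your first automation is one click away",
  "body_copy": "You've already built two templates. Wire one into an automation so it sends itself next time.",
  "recommended_feature": "automations",
  "tone": "encouraging"
}
```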

Conditions

Like other nodes, AI nodes support optional conditions. If the conditions are not met when the run reaches the node, it’s skipped and no credits are consumed.

Testing

Click Test in the configuration drawer to open the test panel. The test panel lets you run the prompt against the selected model with sample variable values and see the structured response before publishing the journey.
The test result shows the parsed JSON output on the right, or an error message if the request failed.

Example: Personalized Onboarding Nudge

A journey that uses the AI node to generate tailored onboarding messages based on each user’s activity:
  1. Trigger — Fires on day 3 after signup. The trigger schema includes user_name, plan_name, features_used, and days_since_signup.
  2. Fetch Data — Pulls the user’s recent activity summary from your application API.
  3. AI node — Given the user’s profile and activity, generates a personalized nudge with subject_line, body_copy, recommended_feature, and tone. Model: Claude Sonnet 4.6 (3 credits per run).
  4. Branch — Routes based on data.tone:
    • Path “Celebratory”: user has been active → sends a congratulatory email with {{subject_line}} and {{body_copy}}.
    • Path “Urgent”: user hasn’t engaged → sends a push notification to re-engage.
    • Default: sends a standard onboarding email with the AI-generated copy.
The AI node eliminates the need for dozens of template variants. Instead of maintaining separate templates for each user segment, a single journey produces personalized content for every recipient.

Common Patterns

  • Score and classify users: Feed product usage, behavior, and profile data into the LLM. Route the journey based on risk level, intent, engagement, or any structured category the model returns (a schema sketch for this pattern follows the list).
  • Generate personalized notifications: Give the model your journey context and get back tailored subject lines, body copy, and recommended actions. Personalized content for every recipient without dozens of template variants.
  • Enrich user profiles: Classify users into personas, derive lifecycle stage, or generate account summaries. Persist outputs to the profile so every future journey starts with richer context.
  • Structured output for downstream logic: Define an output schema with field names, types, and enums. The LLM returns structured JSON that branch conditions, send nodes, and downstream integrations can act on directly.
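As a sketch of the score-and-classify pattern, an output schema along these lines (the risk_level and churn_reason fields are hypothetical) gives a branch node a clean value to route on:

```json
{
  "type": "object",
  "properties": {
    "risk_level": {
      "type": "string",
      "enum": ["low", "medium", "high"],
      "description": "Churn risk based on the usage and profile data in the prompt"
    },
    "churn_reason": {
      "type": "string",
      "description": "One-sentence explanation of the main signal behind the risk level"
    }
  },
  "required": ["risk_level", "churn_reason"]
}
```

A branch can then route on data.risk_level the same way the onboarding example routes on data.tone.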

Debugging AI Nodes

Open Run Inspection and click the AI node step to see:
  • The model used and whether web search was enabled
  • The resolved prompt (with variables substituted)
  • The output schema that was sent to the model
  • The structured JSON response
  • Token usage (input and output token counts)
If the AI node fails (model error or timeout), the step context shows the error. The journey continues but no output is merged into the context. Use a branch condition downstream to handle cases where expected AI output fields may be missing.

What’s Next

  • Branch: Route based on AI output fields
  • Fetch Data: Enrich context before the AI node processes it
  • Journey Templates: Use AI output as template variables
  • Run Inspection: Debug AI node prompts and responses