Chain LLM Node

Connect an LLM to a prompt template for text generation.

Written By pvdyck

The Chain LLM (Basic LLM Chain) node sends a prompt to a language model and returns the generated text. Use it for straightforward text generation tasks that do not require tool access or multi-step reasoning.

How It Works

You provide a prompt (optionally containing dynamic expressions); the node sends it to the connected LLM and returns the model's response. You can also add chat messages (System, User, AI) to build a structured conversation context.
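Conceptually, the flow can be sketched as a small function: resolve the prompt template against the incoming item's JSON, send the result to the model, and return the text. The `renderTemplate` and `chainLlm` names below are illustrative, not n8n's actual internals.

```typescript
// Illustrative sketch of the Chain LLM flow; not n8n's real internals.
type Item = { json: Record<string, string> };
type Model = (prompt: string) => Promise<string>;

// Resolve {{ $json.field }} expressions against the incoming item.
function renderTemplate(template: string, item: Item): string {
  return template.replace(/\{\{\s*\$json\.(\w+)\s*\}\}/g,
    (_, field) => item.json[field] ?? "");
}

// One chain call: render the prompt, ask the model, return its text.
async function chainLlm(template: string, item: Item, model: Model): Promise<string> {
  const prompt = renderTemplate(template, item);
  return model(prompt);
}
```

For example, the template `Summarize: {{ $json.input }}` becomes `Summarize: <value of input>` before it is sent to the model.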

Parameters

  • Prompt: The text prompt sent to the LLM. Supports expressions like {{ $json.input }}. Required.
  • Chat Messages: Optional structured messages: System (behavioral instructions, e.g., "Always respond as a pirate"), User (text or image input via URL or binary), AI (example responses to shape output style). User messages with images support detail levels: Auto, Low (512px, 65 tokens), or High (129 tokens).
  • Require Specific Output Format: Connect an Output Parser to enforce structured JSON output.
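As a hedged illustration, a Chat Messages setup might look like the following shape. The field names are simplified for the example and are not the node's exact internal schema.

```typescript
// Simplified sketch of a Chat Messages configuration; field names
// are illustrative, not the node's exact internal schema.
type ChatMessage =
  | { role: "system"; text: string }
  | { role: "user"; text?: string; imageUrl?: string; detail?: "auto" | "low" | "high" }
  | { role: "ai"; text: string };

const messages: ChatMessage[] = [
  { role: "system", text: "Always respond as a pirate" },                   // behavioral instructions
  { role: "user", imageUrl: "https://example.com/map.png", detail: "low" }, // image input via URL
  { role: "ai", text: "Arr, that be a fine map!" },                         // example response to shape style
];
```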

Options (V1.6+)

  • Response Format: Set to json_object to request JSON output from the model.
  • JSON Unwrapping: Automatically unwraps JSON responses when the model returns response_format: json_object (V1.6+).
  • Batch Size: Number of items to process per batch (V1.7+).
  • Batch Delay: Delay in milliseconds between batches to avoid rate limits (V1.7+).
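The two batching options can be pictured as splitting the input items into chunks of Batch Size and pausing Batch Delay milliseconds between chunks. A minimal sketch (not n8n's implementation):

```typescript
// Sketch of batch processing with an inter-batch delay (illustrative).
const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function processInBatches<T, R>(
  items: T[],
  batchSize: number,
  batchDelayMs: number,
  handle: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = [];
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize);
    results.push(...(await Promise.all(batch.map(handle))));
    // Pause between batches to stay under API rate limits.
    if (i + batchSize < items.length) await sleep(batchDelayMs);
  }
  return results;
}
```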

Sub-Node Connections

  • AI Language Model (required): The LLM to use (e.g., OpenRouter, OpenAI, Anthropic).
  • Output Parser (optional): Enforces a structured output format.
  • Memory (optional): Conversation memory for multi-turn context.
  • Fallback Model (optional): Backup LLM used if the primary model fails.
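The fallback behavior amounts to trying the primary model and retrying with the backup on failure. A hedged sketch, assuming models are simple prompt-to-text functions:

```typescript
// Sketch of fallback behavior: try the primary model, fall back on error.
type Model = (prompt: string) => Promise<string>;

async function callWithFallback(prompt: string, primary: Model, fallback?: Model): Promise<string> {
  try {
    return await primary(prompt);
  } catch (err) {
    if (!fallback) throw err; // no backup configured: surface the error
    return fallback(prompt);  // backup LLM handles the request
  }
}
```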

Versions

V1 through V1.7. Key additions: JSON unwrapping (V1.6), batch processing (V1.7).

Limitations

  • Static node connections: Model and output parser connections are statically defined (a Worker environment constraint). Prompt templates still support dynamic expressions.
  • No tool access: For multi-step reasoning with tools, use the Agent node instead.

Tips

  • Use Chat Messages with a System message to set the model's persona or instructions.
  • Combine with Output Parser Structured for reliable JSON output.
  • For batch processing, set a batch delay (e.g., 500ms) to avoid API rate limits.
  • "No prompt specified" error: This occurs when the Prompt field is empty or the incoming data lacks the expected chatInput field. Add an Edit Fields node before the Chain LLM to rename your input field to chatInput.
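The Edit Fields fix in the last tip amounts to remapping the incoming field onto chatInput. Roughly (the source field name "input" is just an example):

```typescript
// Illustrative equivalent of the Edit Fields rename: copy the incoming
// field (here assumed to be "input") into the "chatInput" field the
// Chain LLM expects.
type Item = { json: Record<string, unknown> };

function renameToChatInput(items: Item[], sourceField: string): Item[] {
  return items.map((item) => ({
    json: { ...item.json, chatInput: item.json[sourceField] },
  }));
}
```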

Related