Chain LLM Node
Connect an LLM to a prompt template for text generation.
Written By pvdyck
The Chain LLM (Basic LLM Chain) node sends a prompt to a language model and returns the generated text. Use it for straightforward text generation tasks that do not require tool access or multi-step reasoning.
How It Works
You provide a prompt (with optional dynamic expressions), the node sends it to the connected LLM, and returns the model's response. You can add chat messages (System, User, AI) to build a structured conversation context.
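The template-filling step can be sketched as follows. This is a minimal illustration, not the node's actual implementation; the function name and placeholder syntax are simplified stand-ins for n8n's expression resolution.

```javascript
// Illustrative sketch: fill {{ field }} placeholders in a prompt template
// with values from the incoming item, producing the text sent to the LLM.
function fillTemplate(template, item) {
  return template.replace(/\{\{\s*(\w+)\s*\}\}/g, (_, key) => item[key] ?? "");
}

const prompt = fillTemplate(
  "Summarize the following text: {{ chatInput }}",
  { chatInput: "LLM chains connect prompts to models." }
);
// `prompt` is the resolved text that would be sent to the connected model
```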
Parameters
Options (V1.6+)
Sub-Node Connections
Versions
V1 through V1.7. Key additions: JSON unwrapping (V1.6), batch processing (V1.7).
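The V1.7 batch processing behavior can be sketched roughly as below. This is a hypothetical illustration of the pattern, not the node's internal code; `callModel` is a stand-in for the actual LLM call, and the parameter names are assumptions.

```javascript
// Illustrative sketch: process items in batches, pausing between batches
// so downstream LLM APIs are not hit with too many requests at once.
async function processInBatches(items, batchSize, delayMs, callModel) {
  const results = [];
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize);
    // Items within a batch run concurrently; batches run sequentially.
    results.push(...(await Promise.all(batch.map(callModel))));
    if (i + batchSize < items.length) {
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  return results;
}
```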
Limitations
- Static node connections: model and output parser connections are statically defined (a Worker environment constraint). Prompt templates, however, support dynamic expressions.
- No tool access: for multi-step reasoning with tools, use the Agent node instead.
Tips
- Use Chat Messages with a System message to set the model's persona or instructions.
- Combine with Output Parser Structured for reliable JSON output.
- For batch processing, set a batch delay (e.g., 500ms) to avoid API rate limits.
- "No prompt specified" error: this occurs when the Prompt field is empty or the incoming data lacks the expected `chatInput` field. Add an Edit Fields node before the Chain LLM node to rename your input field to `chatInput`.
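The rename described in the last tip is done with an Edit Fields node inside n8n; the equivalent transformation looks roughly like this (function and field names are illustrative, not part of the node's API):

```javascript
// Illustrative sketch: move an arbitrary input field into the `chatInput`
// field that the Chain LLM node expects, leaving other fields untouched.
function renameToChatInput(item, sourceField) {
  const { [sourceField]: value, ...rest } = item;
  return { ...rest, chatInput: value };
}

renameToChatInput({ message: "Hello" }, "message");
// → { chatInput: "Hello" }
```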