Chain LLM Node

Connect an LLM to a prompt template for text generation.

Written by pvdyck

Last updated 18 days ago

πŸ§ͺ Labs β€” Experimental, may change or break without notice

What It Does

Sends a prompt to a connected LLM and returns the response. Use it for text generation, summarization, or any task where you need a single LLM call per item.
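Conceptually, the node fills the prompt template once per item and sends each filled prompt to the connected model. A minimal sketch of that loop, where `call_model` stands in for whatever LLM backend is connected (all names here are illustrative, not the node's actual API):

```python
def run_chain(items, template, call_model):
    """Fill the prompt template for each item and make one LLM call per item."""
    results = []
    for item in items:
        prompt = template.format(**item)    # e.g. "Summarize: {text}"
        results.append(call_model(prompt))  # a single LLM call per item
    return results

# Usage with a stubbed model:
fake_model = lambda prompt: f"echo: {prompt}"
out = run_chain([{"text": "hello"}], "Summarize: {text}", fake_model)
```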

Connections

Input           Required   Description
Model           Yes        The LLM to call (e.g., OpenRouter, Ollama)
Fallback Model  No         Backup model if the primary fails or hits rate limits
Output Parser   No         Formats the LLM response into structured data
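The fallback connection can be pictured as: try the primary model, and only if that call raises do we retry once with the backup. A sketch under that assumption (function names are hypothetical):

```python
def call_with_fallback(prompt, primary, fallback=None):
    """Try the primary model; on any failure, retry once with the fallback if connected."""
    try:
        return primary(prompt)
    except Exception:
        if fallback is None:
            raise  # no fallback connected: surface the original error
        return fallback(prompt)

# A primary that always fails falls through to the backup:
def broken_primary(prompt):
    raise RuntimeError("rate limited")

call_with_fallback("hi", broken_primary, lambda p: "backup: " + p)
```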

Compatibility

Feature                          Status
Sequential item processing       Supported — each item is sent to the LLM one at a time
Batch processing (V1.7+)         Supported — process multiple items in parallel batches for efficiency
Fallback model                   Supported — automatically retries with the backup model on failure
JSON unwrapping (V1.6+)          Supported — when the LLM returns JSON, it's automatically parsed into fields
Continue on Fail                 Supported — failed items don't stop the workflow
Streaming                        Not supported — the full LLM response is returned at once
Token counting / cost tracking   Not supported — API costs are tracked at the platform level, not per node
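The JSON unwrapping behavior (V1.6+) can be pictured as: if the raw response parses as a JSON object, its keys become output fields; otherwise the text is kept as a single field. A sketch under that assumption (the `text` field name is illustrative, not necessarily what the node emits):

```python
import json

def unwrap_response(raw: str) -> dict:
    """Parse a JSON-object response into fields; keep anything else as plain text."""
    try:
        parsed = json.loads(raw)
        if isinstance(parsed, dict):
            return parsed  # JSON object: keys become output fields
    except json.JSONDecodeError:
        pass
    return {"text": raw}   # non-JSON response: kept as a single text field

unwrap_response('{"title": "Hi", "score": 2}')  # -> {'title': 'Hi', 'score': 2}
unwrap_response("plain answer")                 # -> {'text': 'plain answer'}
```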

Error Handling

When an LLM call fails, the node returns a clear, specific error message:

  • Timeout (HTTP 524/504) β€” "AI model timed out" with retry guidance
  • Rate limit (HTTP 429) β€” "AI model rate limited" with wait guidance
  • Unavailable (HTTP 502/503) β€” "AI model temporarily unavailable"
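The mapping from upstream HTTP status to the messages above can be sketched as follows (a simplification; the actual node may attach additional retry guidance):

```python
def error_message(status: int) -> str:
    """Map upstream HTTP status codes to user-facing error messages."""
    if status in (504, 524):
        return "AI model timed out"
    if status == 429:
        return "AI model rate limited"
    if status in (502, 503):
        return "AI model temporarily unavailable"
    return f"AI model error (HTTP {status})"  # assumed generic fallback
```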

If Continue on Fail is enabled, failed items are returned with an error field instead of stopping the workflow.
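With Continue on Fail enabled, per-item processing behaves roughly like this: each item either yields a response or is passed through with an error attached, and only with the option disabled does a failure abort the run. A sketch under that assumption (field names like `output` and `error` are illustrative):

```python
def process_items(items, call_model, continue_on_fail=True):
    """Call the model per item; on failure, attach an error instead of aborting."""
    results = []
    for item in items:
        try:
            results.append({"output": call_model(item)})
        except Exception as exc:
            if not continue_on_fail:
                raise  # Continue on Fail disabled: stop the workflow
            results.append({"error": str(exc), "input": item})
    return results
```

A model that fails on one item still produces results for the others, so downstream nodes can filter on the error field.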

Status

This integration has been tested with basic operations on indie.money. Some advanced features may not work as expected; please report any issues you encounter.