OpenAI Chat Model
Use OpenAI's GPT models as the LLM in your AI agent workflows
Written by pvdyck
Last updated About 5 hours ago
Connects to OpenAI's API to provide GPT model capabilities for Agent and Chain LLM workflows.
Authentication
Requires an OpenAI API key configured via Secure Vault. Token usage counts toward your OpenAI API billing.
Parameters
Available Models
Model availability depends on your OpenAI API plan.
Options
Connects To
Limitations
- Must be connected to a parent node; it cannot run standalone.
- No streaming support in the Worker environment.
- JSON Mode requires the word "JSON" somewhere in your prompt for reliable output.
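The JSON Mode requirement above can be sketched as a request-payload check. This is a minimal illustration of the OpenAI Chat Completions request shape, not this node's internal implementation; the model name, prompt, and helper function are illustrative assumptions.

```python
def build_json_mode_request(user_prompt: str) -> dict:
    """Hypothetical helper: build a Chat Completions payload with JSON Mode.

    JSON Mode requires the word "JSON" to appear somewhere in the prompt;
    without it, the API may return an error or unreliable output, so we
    append a reminder if it is missing.
    """
    if "json" not in user_prompt.lower():
        user_prompt += " Respond in JSON."
    return {
        "model": "gpt-4o-mini",  # illustrative; use whatever model your plan allows
        "messages": [{"role": "user", "content": user_prompt}],
        "response_format": {"type": "json_object"},  # enables JSON Mode
    }

request = build_json_mode_request("List three primary colors.")
```

The same guard applies whether the call is made directly or through this node: make sure "JSON" appears in the prompt whenever JSON Mode is enabled.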
Tips
- Use GPT-4o Mini for most workflows; it is fast and cost-effective.
- Use GPT-4o for complex reasoning, vision tasks, or when accuracy is critical.
- Enable JSON Mode when combining with Output Parser Structured for reliable structured output.
- Set Frequency Penalty to 0.5-1.0 to reduce repetitive responses.
- Sub-node expression note: as a sub-node, expressions like `{{ $json.name }}` always resolve to the first input item, not each item individually. Use the parent node's batch processing if you need per-item model calls.
- Consider using OpenRouter Chat Model instead for access to GPT models plus other providers through a single credential.
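The Frequency Penalty tip above maps directly onto the `frequency_penalty` field of a Chat Completions request. A minimal sketch, assuming the standard OpenAI request shape; the model name and prompt are illustrative.

```python
def with_frequency_penalty(payload: dict, penalty: float) -> dict:
    """Hypothetical helper: set frequency_penalty on a request payload.

    Values in the 0.5-1.0 range discourage the model from repeating
    tokens it has already produced; the API accepts -2.0 to 2.0.
    """
    if not -2.0 <= penalty <= 2.0:
        raise ValueError("frequency_penalty must be between -2.0 and 2.0")
    return {**payload, "frequency_penalty": penalty}

base = {
    "model": "gpt-4o-mini",  # illustrative model choice
    "messages": [{"role": "user", "content": "Write a product description."}],
}
request = with_frequency_penalty(base, 0.7)  # within the suggested 0.5-1.0 range
```

Higher values trade some fluency for variety, so start near 0.5 and raise the penalty only if responses still repeat themselves.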