OpenAI Chat Model

Use OpenAI's GPT models as the LLM in your AI agent workflows

Written by pvdyck

Last updated about 3 hours ago


Connects to OpenAI's API to provide GPT model capabilities for Agent and Chain LLM workflows.

Authentication

Requires an OpenAI API key configured via Secure Vault. Token usage counts toward your OpenAI API billing.

Parameters

| Parameter | Description |
|---|---|
| Model | The GPT model to use (see table below). |
| Temperature | Controls randomness (0 = deterministic, 1 = creative). Default: 0.7. |
| Max Tokens | Maximum number of tokens in the response. |
| Top P | Nucleus sampling (0–1). Alternative to temperature. |
| Frequency Penalty | Reduces repetition of frequent tokens (-2.0 to 2.0). |
| Presence Penalty | Encourages topic diversity (-2.0 to 2.0). |
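These parameters map onto fields of the OpenAI Chat Completions request body. As a rough sketch (the field names come from the public API; how the node assembles the payload internally is an assumption), only explicitly set options need to be sent:

```python
def build_chat_payload(model, prompt, temperature=0.7, max_tokens=None,
                       top_p=None, frequency_penalty=None, presence_penalty=None):
    """Assemble a Chat Completions request body, omitting unset optional fields."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }
    optional = {
        "max_tokens": max_tokens,
        "top_p": top_p,
        "frequency_penalty": frequency_penalty,
        "presence_penalty": presence_penalty,
    }
    # Only include options the user actually configured
    payload.update({k: v for k, v in optional.items() if v is not None})
    return payload

payload = build_chat_payload("gpt-4o-mini", "Summarize this ticket.", max_tokens=256)
```

Leaving unset options out of the payload lets the API apply its own defaults rather than pinning arbitrary values.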

Available Models

| Model | Best For |
|---|---|
| GPT-4o | Most capable, multimodal (text + vision) |
| GPT-4o Mini | Fast, affordable, good for most tasks |
| GPT-4 Turbo | High capability with large context |
| GPT-4 | Strong reasoning, older model |
| GPT-3.5 Turbo | Cheapest, basic tasks only |

Model availability depends on your OpenAI API plan.

Options

| Option | Description |
|---|---|
| JSON Mode | Forces the model to output valid JSON. Sets `response_format: json_object`. |
| Seed | For reproducible outputs. Same seed + input = same output (beta). |
| System Prompt | Persistent instructions for the model's behavior. |
| Timeout | Maximum request duration in milliseconds. Prevents hanging on slow responses. |
| Max Retries | Number of automatic retry attempts if the API request fails. |
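Together, Timeout and Max Retries behave roughly like the retry loop below. This is a simplified sketch of the general pattern, not the node's actual implementation (the backoff strategy and which errors are retried are assumptions):

```python
import time

def call_with_retries(request_fn, max_retries=2, timeout_ms=30000, backoff_s=1.0):
    """Attempt request_fn up to 1 + max_retries times, backing off between failures."""
    last_error = None
    for attempt in range(1 + max_retries):
        try:
            return request_fn(timeout=timeout_ms / 1000.0)  # timeout passed in seconds
        except Exception as exc:  # in practice: network errors, 429s, 5xx responses
            last_error = exc
            if attempt < max_retries:
                time.sleep(backoff_s * (2 ** attempt))  # exponential backoff
    raise last_error

# Usage: a request function that fails once, then succeeds on retry
calls = {"n": 0}
def flaky(timeout):
    calls["n"] += 1
    if calls["n"] == 1:
        raise RuntimeError("transient error")
    return "ok"

result = call_with_retries(flaky, max_retries=2, backoff_s=0)
```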

Connects To

| Parent Node | Description |
|---|---|
| Agent | Powers the agent's reasoning and tool-calling decisions. |
| Chain LLM | Generates text responses from prompts. |
| Information Extractor | Extracts structured data from text. |
| Text Classifier | Classifies text into categories. |
| Sentiment Analysis | Analyzes text sentiment. |

Limitations

  • Must be connected to a parent node β€” cannot run standalone.
  • No streaming support in the Worker environment.
  • JSON Mode requires the word "JSON" somewhere in your prompt for reliable output.
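To illustrate the JSON Mode limitation above: a reliable request includes both the `response_format` field and the literal word "JSON" in the prompt text. A sketch of such a request body (the node assembles this for you; the example prompt is hypothetical):

```python
# response_format enables JSON Mode; the prompt must also mention "JSON"
request = {
    "model": "gpt-4o-mini",
    "response_format": {"type": "json_object"},
    "messages": [
        {"role": "system",
         "content": "Reply with a JSON object containing a 'sentiment' key."},
        {"role": "user", "content": "I love this product!"},
    ],
}

# Sanity check: the word "JSON" appears in at least one message
json_mentioned = any("JSON" in m["content"] for m in request["messages"])
```

Without the word "JSON" in a message, the API may reject the request or produce degraded output even with `response_format` set.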

Tips

  • Use GPT-4o Mini for most workflows β€” fast and cost-effective.
  • Use GPT-4o for complex reasoning, vision tasks, or when accuracy is critical.
  • Enable JSON Mode when combining with Output Parser Structured for reliable structured output.
  • Set Frequency Penalty to 0.5-1.0 to reduce repetitive responses.
  • Sub-node expression note: As a sub-node, expressions like {{ $json.name }} always resolve to the first input item, not each item individually. Use the parent node's batch processing if you need per-item model calls.
  • Consider using OpenRouter Chat Model instead for access to GPT models plus other providers through a single credential.
