Known Limitations

What's not ready yet — transparency about current platform limitations

Written By pvdyck


Transparency matters. Here is what works, what does not, and what to expect.

Node-Specific Limitations

These nodes are supported but have specific constraints:

Code & Function Nodes

  • Synchronous JavaScript only — no async/await, no Promises, no fetch()
  • No network access — cannot make HTTP calls from code
  • No Node.js modules — standard library not available
  • 2-minute CPU limit per code execution
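Within these constraints, a Code node body stays synchronous and self-contained. A minimal sketch, assuming the node exposes its input as an `items` array of `{ json: ... }` objects (as in n8n's Function node); the field names are illustrative:

```javascript
// Synchronous transform only: no fetch(), no require(), no async/await.
// `items` is assumed to look like [{ json: { price: 10 } }, ...].
function run(items) {
  var results = [];
  for (var i = 0; i < items.length; i++) {
    var row = items[i].json;
    results.push({
      json: {
        price: row.price,
        // Pure CPU work is fine within the 2-minute limit.
        priceWithTax: Math.round(row.price * 1.21 * 100) / 100
      }
    });
  }
  return results; // must return synchronously; Promises are not supported
}
```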

Wait Node

  • Maximum 15 minutes — longer waits are rejected with a validation error
  • In-memory only — no persistence across restarts

Webhook Node

  • One webhook per workflow — multiple webhook triggers not allowed
  • Supported methods: GET, POST, PUT, PATCH, DELETE, HEAD
  • Auth options: Basic Auth, Header Auth

HTTP Request Node

  • Bearer and Basic Auth only via Secure Vault — OAuth2, Digest, and custom auth not supported. Use dedicated integration nodes for OAuth services
  • In-memory binary support — base64-encoded, ~50MB limit per file. No streaming by ID
  • 10-second default timeout per request
  • Pagination expressions limited to $response.body.*, $pageCount (no ternary/method calls)
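For example, a cursor-based pagination parameter stays within the supported expression subset as long as it is a plain property path (the field names below are hypothetical):

```
Allowed:      {{ $response.body.nextCursor }}
Allowed:      {{ $pageCount }}
Not allowed:  {{ $response.body.done ? null : $response.body.nextCursor }}   ← ternary
Not allowed:  {{ $response.body.items.map(i => i.id) }}                      ← method call
```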

Merge Node

  • combineBySql mode NOT supported — use other merge modes (append, combineByFields, combineByPosition, combineAll)

Item Lists Node

  • Sort by Code NOT functional — use simple sort or the Sort node instead
  • Other operations work: concatenate, limit, summarize, split out items

Remove Duplicates Node

  • V1 only — V2 requires database features that are not available

Discord Node

  • V2 only — V1 not supported
  • In-memory binary — file attachments supported via base64 (~50MB limit)

Telegram Node

  • In-memory binary support — direct file/binary uploads via base64 (~50MB limit)
  • Can send images/videos via URL or binary

OpenAI Node

  • Text and chat completions only
  • Not supported: image generation (DALL-E), audio (TTS), assistants API, file operations

Output Parser Structured

  • $ref in JSON schemas not supported — may produce incorrect types
  • Use inline schema definitions instead of references
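Working around the $ref limitation means expanding each referenced definition at its point of use. A sketch of the rewrite (the schema fields here are illustrative, not from any real workflow):

```javascript
// NOT supported: the parser cannot resolve $ref pointers.
var schemaWithRef = {
  type: "object",
  properties: { author: { $ref: "#/definitions/person" } },
  definitions: {
    person: { type: "object", properties: { name: { type: "string" } } }
  }
};

// Supported: the same shape with the definition expanded inline.
var inlineSchema = {
  type: "object",
  properties: {
    author: { type: "object", properties: { name: { type: "string" } } }
  }
};
```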

Memory Buffer Window

  • 1-hour TTL — conversation memory auto-expires after 1 hour of inactivity
  • Memory is scoped per workflow instance and session

Agent Node

  • V2 only — V1 requires database credentials that are not available

Platform Limits

  • 30-second CPU time per workflow execution (main worker; network I/O does not count toward this limit)
  • 5-minute wall-clock timeout — total time including network I/O wait
  • 128MB memory limit per worker invocation
  • 1MB payload limit — keep data under 1MB per node for reliable execution
  • No file system access — workflows run in a sandboxed environment. Binary data handled in-memory via base64 (~50MB limit)
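The 1MB payload limit applies to the serialized item data, so it can help to measure the JSON size before handing large results to the next node. A minimal sketch (the helper name and threshold constant are assumptions derived from the limit above):

```javascript
// Rough serialized-size check against the 1MB per-node payload limit.
var PAYLOAD_LIMIT_BYTES = 1024 * 1024; // 1MB, per the platform limit above

function fitsPayloadLimit(data) {
  // JSON.stringify(...).length approximates the byte size for ASCII-heavy
  // data; multi-byte characters make the true byte count larger.
  var size = JSON.stringify(data).length;
  return size <= PAYLOAD_LIMIT_BYTES;
}
```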

Nodes Not Yet Available

  • Database nodes (Postgres, MySQL, Redis, MongoDB) — on the roadmap
  • Email nodes (Send Email, IMAP) — require secure proxy infrastructure
  • Scheduled triggers (Cron, Interval) — planned for automated recurring workflows
  • Vector stores and embeddings — require external infrastructure
  • Document loaders — require file system access

Expression Engine

  • Most standard n8n expressions work fine
  • No const/let — use var instead
  • No template literals — use string concatenation
  • Arrow functions are transformed to ES5 automatically
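In practice this means writing expression code in ES5 style. A small sketch of the same logic in both forms (the variable names are illustrative):

```javascript
// Rejected by the expression engine: const/let and template literals.
// const greeting = `Hello, ${name}!`;

// Accepted ES5 equivalent: var plus string concatenation.
var name = "Ada";
var greeting = "Hello, " + name + "!";

// Arrow functions are accepted as input (they are transformed to ES5
// automatically), but the plain function form works unchanged:
var ids = [{ id: 1 }, { id: 2 }].map(function (item) { return item.id; });
```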

What We Are Working On

Check the roadmap to see what is planned and vote on what you need most. High-demand features get built first.

Found Something Else?

Use the messaging widget in the bottom corner to report issues. Include what you tried, what happened, and what you expected.
