LangChain: Embeddings & Vector Stores

Supabase, Pinecone, and Qdrant vector stores, plus OpenAI and Ollama embeddings, are supported. Other vector store and embedding nodes require infrastructure that is not available on the platform.

Written by pvdyck

Last updated 7 days ago

Supported

  • Embeddings OpenAI (embeddingsOpenAi) – full n8n feature parity
  • Embeddings Ollama (embeddingsOllama) – full n8n feature parity
  • Supabase Vector Store (vectorStoreSupabase) – all modes: retrieve, insert, load, update, retrieve-as-tool
  • Pinecone Vector Store (vectorStorePinecone) – all modes: retrieve, insert, load, update, retrieve-as-tool
  • Qdrant Vector Store (vectorStoreQdrant) – all modes: retrieve, insert, load, update, retrieve-as-tool
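Conceptually, the retrieve modes above all do the same thing: embed the query text, then rank stored vectors by similarity to it. A minimal Python sketch of that idea, using a toy in-memory store and hand-written embeddings rather than the actual nodes or an embeddings API:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "vector store": (text, embedding) pairs. In a real workflow the
# embeddings come from an embeddings node (OpenAI or Ollama).
store = [
    ("cats purr", [0.9, 0.1, 0.0]),
    ("dogs bark", [0.1, 0.9, 0.0]),
    ("stocks fell", [0.0, 0.1, 0.9]),
]

def retrieve(query_vec, k=2):
    """Return the k stored texts most similar to the query vector."""
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

print(retrieve([0.8, 0.2, 0.0], k=1))  # → ['cats purr']
```

A production store (Supabase, Pinecone, Qdrant) replaces the `sorted` scan with an approximate nearest-neighbour index, but the retrieve contract is the same: query vector in, top-k documents out.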

Not Supported

  • Vector Store In-Memory (vectorStoreInMemory)
  • Vector Store Weaviate (vectorStoreWeaviate)
  • Vector Store Chroma (vectorStoreChroma)
  • Vector Store Milvus (vectorStoreMilvus)
  • Vector Store Zep (vectorStoreZep)
  • Other Embeddings (Cohere, Azure, HuggingFace, Mistral)

Why Not Supported

These nodes require:

  • Connection protocols not available on the platform (Weaviate, Milvus)
  • Infrastructure not available on the platform (In-Memory persistence, Zep, Chroma)

Workarounds

  • Use Supabase, Pinecone, or Qdrant Vector Store with Embeddings OpenAI or Embeddings Ollama for RAG workflows
  • Use OpenRouter Chat Model with the Agent Node for most RAG-like use cases
  • For other vector databases: use their REST APIs via the HTTP Request node
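For the HTTP Request workaround, the URL and request body depend entirely on the target database — Weaviate, Milvus, and Chroma each define their own REST or GraphQL schema, so consult that product's API reference. A hedged Python sketch of the general shape, using a hypothetical endpoint path and hypothetical field names:

```python
import json

def build_search_request(base_url, collection, query_vector, top_k=5):
    """Build the URL and JSON body for a vector-similarity search.

    The path ("/collections/{name}/search") and the field names
    ("vector", "top_k") are placeholders, not any real database's API —
    substitute the schema from your database's documentation.
    """
    url = f"{base_url}/collections/{collection}/search"  # hypothetical path
    payload = {
        "vector": query_vector,  # embedding of the query text
        "top_k": top_k,          # number of nearest neighbours to return
    }
    return url, json.dumps(payload)

url, body = build_search_request("https://db.example.com", "docs", [0.1, 0.2, 0.3])
print(url)   # https://db.example.com/collections/docs/search
print(body)
```

In the HTTP Request node, the same two pieces map to the URL field and a JSON body, with the query embedding supplied upstream by an embeddings node.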
