Integrate Vector Search with AI Technologies
You can use Atlas Vector Search with popular AI providers and LLMs through their standard APIs. MongoDB and partners also provide specific product integrations to help you leverage Atlas Vector Search in your generative AI and AI-powered applications.
This page highlights notable AI integrations that MongoDB and partners have developed. For a complete list of integrations and partner services, see Explore MongoDB Partner Ecosystem.
Key Concepts
- Large Language Models (LLMs)
You can integrate Atlas Vector Search with LLMs and LLM frameworks to build AI-powered applications. When developing with LLMs, you might encounter the following limitations:
  - Stale data: LLMs are trained on a static dataset up to a certain point in time.
  - No access to local data: LLMs don't have access to local or personal data.
  - Hallucinations: LLMs sometimes generate inaccurate information.
- Retrieval-Augmented Generation (RAG)
Retrieval-Augmented Generation (RAG) is an architecture for LLM applications that's designed to address these limitations. In RAG, you perform the following actions:
  1. Store your custom data as vector embeddings in a vector database.
  2. Use vector search to retrieve semantically similar documents from the vector database. These documents augment the existing training data that LLMs have access to.
  3. Prompt the LLM. The LLM uses these documents as context to generate a more informed and accurate response.
To learn more, see What is retrieval-augmented generation (RAG)?.
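For example, the retrieval step in RAG maps directly to the $vectorSearch aggregation stage in Atlas. The following is a minimal sketch of that step using PyMongo; the connection string, database, collection, and index names are placeholder assumptions, and the sketch assumes a vector search index named vector_index on an embedding field and an OpenAI API key for generating the query embedding.

```python
import pymongo
from openai import OpenAI

# Placeholder: connect to your Atlas cluster.
client = pymongo.MongoClient("mongodb+srv://<user>:<password>@<cluster>/")
collection = client["my_database"]["my_collection"]

openai_client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def embed(text: str) -> list[float]:
    # Placeholder embedding call; swap in the same model you used
    # to embed the documents stored in the collection.
    response = openai_client.embeddings.create(
        model="text-embedding-3-small", input=text
    )
    return response.data[0].embedding

query = "What is Atlas Vector Search?"

# Retrieve semantically similar documents with the $vectorSearch stage.
results = collection.aggregate([
    {
        "$vectorSearch": {
            "index": "vector_index",
            "path": "embedding",
            "queryVector": embed(query),
            "numCandidates": 100,
            "limit": 5,
        }
    },
    {"$project": {"_id": 0, "text": 1}},
])

# Pass the retrieved text to an LLM as context in the prompt.
context = "\n".join(doc["text"] for doc in results)
```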
Frameworks
You can integrate Atlas Vector Search with the following open-source frameworks to store your custom data in Atlas and implement RAG.
LangChain
LangChain is a framework that simplifies the creation of LLM applications through the use of "chains," which are LangChain-specific components that can be combined for a variety of use cases, including RAG.
To get started, see Get Started with the LangChain Integration.
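As an illustration, the following sketch uses the langchain-mongodb and langchain-openai packages to store documents in Atlas and run a similarity search. The connection string, namespace, and index name are placeholder assumptions, and the embedding model requires an OpenAI API key.

```python
from langchain_mongodb import MongoDBAtlasVectorSearch
from langchain_openai import OpenAIEmbeddings

# Placeholders: your Atlas connection string, namespace, and index name.
vector_store = MongoDBAtlasVectorSearch.from_connection_string(
    "mongodb+srv://<user>:<password>@<cluster>/",
    namespace="my_database.my_collection",
    embedding=OpenAIEmbeddings(),  # assumes OPENAI_API_KEY is set
    index_name="vector_index",
)

# Store custom data as vector embeddings in Atlas.
vector_store.add_texts([
    "MongoDB Atlas is a multi-cloud database service.",
    "Atlas Vector Search supports semantic search on your data.",
])

# Retrieve semantically similar documents for a query.
docs = vector_store.similarity_search("What is Atlas Vector Search?", k=1)
print(docs[0].page_content)
```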
LlamaIndex
LlamaIndex is a framework that simplifies how you connect custom data sources to LLMs. It provides several tools to help you load and prepare vector embeddings for RAG applications.
To get started, see Get Started with the LlamaIndex Integration.
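For illustration, the sketch below loads local files into Atlas and queries them through the LlamaIndex MongoDB vector store. The connection string, database, collection, and index names are placeholder assumptions, the vector_index_name parameter reflects recent versions of the llama-index-vector-stores-mongodb package and may differ in older releases, and LlamaIndex's default embedding and LLM settings require an OpenAI API key.

```python
import pymongo
from llama_index.core import SimpleDirectoryReader, StorageContext, VectorStoreIndex
from llama_index.vector_stores.mongodb import MongoDBAtlasVectorSearch

# Placeholders: your Atlas connection string, namespace, and index name.
mongo_client = pymongo.MongoClient("mongodb+srv://<user>:<password>@<cluster>/")
vector_store = MongoDBAtlasVectorSearch(
    mongo_client,
    db_name="my_database",
    collection_name="my_collection",
    vector_index_name="vector_index",
)

# Load local documents and store them in Atlas as vector embeddings.
documents = SimpleDirectoryReader("./data").load_data()
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)

# Query the index; retrieval runs against Atlas Vector Search.
response = index.as_query_engine().query("What is Atlas Vector Search?")
print(response)
```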
Semantic Kernel
Microsoft Semantic Kernel is an SDK that allows you to combine various AI services with your applications. You can use Semantic Kernel for a variety of use cases, including RAG.
To get started, see Get Started with the Semantic Kernel Integration.
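As a rough sketch, the following uses the Python semantic-kernel package's MongoDB Atlas memory connector to store and search records. The connector's API has changed across versions, so treat the connection string, names, and signatures here as assumptions; the embedding service assumes an OpenAI API key in the environment.

```python
import asyncio

from semantic_kernel.connectors.ai.open_ai import OpenAITextEmbedding
from semantic_kernel.connectors.memory.mongodb_atlas import MongoDBAtlasMemoryStore
from semantic_kernel.memory.semantic_text_memory import SemanticTextMemory

async def main():
    # Placeholders: your Atlas connection string, database, and index name.
    store = MongoDBAtlasMemoryStore(
        connection_string="mongodb+srv://<user>:<password>@<cluster>/",
        database_name="my_database",
        index_name="vector_index",
    )
    memory = SemanticTextMemory(
        storage=store,
        embeddings_generator=OpenAITextEmbedding(
            ai_model_id="text-embedding-ada-002"
        ),
    )

    # Store custom data as vector embeddings in Atlas.
    await memory.save_information(
        collection="my_collection",
        id="1",
        text="Atlas Vector Search supports semantic search on your data.",
    )

    # Retrieve semantically similar records.
    results = await memory.search("my_collection", "What does Atlas Vector Search do?")
    print(results[0].text)

asyncio.run(main())
```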
Haystack
Haystack is a framework for building custom applications with LLMs, embedding models, vector search, and more. It enables use cases such as question-answering and RAG.
To get started, see Get Started with the Haystack Integration.
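For example, the sketch below uses the mongodb-atlas-haystack integration (Haystack 2.x) to write embedded documents to Atlas and retrieve them through a pipeline. The environment variable, database, collection, and index names are placeholder assumptions, and the embedders use Haystack's default Sentence Transformers model.

```python
import os

from haystack import Document, Pipeline
from haystack.components.embedders import (
    SentenceTransformersDocumentEmbedder,
    SentenceTransformersTextEmbedder,
)
from haystack_integrations.components.retrievers.mongodb_atlas import (
    MongoDBAtlasEmbeddingRetriever,
)
from haystack_integrations.document_stores.mongodb_atlas import (
    MongoDBAtlasDocumentStore,
)

# The document store reads the connection string from this environment variable.
os.environ["MONGO_CONNECTION_STRING"] = "mongodb+srv://<user>:<password>@<cluster>/"

# Placeholders: your database, collection, and vector search index names.
document_store = MongoDBAtlasDocumentStore(
    database_name="my_database",
    collection_name="my_collection",
    vector_search_index="vector_index",
)

# Embed custom data and store it in Atlas.
doc_embedder = SentenceTransformersDocumentEmbedder()
doc_embedder.warm_up()
docs = [Document(content="Atlas Vector Search supports semantic search.")]
document_store.write_documents(doc_embedder.run(docs)["documents"])

# Build a retrieval pipeline: embed the query, then search Atlas.
pipeline = Pipeline()
pipeline.add_component("embedder", SentenceTransformersTextEmbedder())
pipeline.add_component(
    "retriever", MongoDBAtlasEmbeddingRetriever(document_store=document_store)
)
pipeline.connect("embedder.embedding", "retriever.query_embedding")

result = pipeline.run({"embedder": {"text": "What does Atlas Vector Search do?"}})
print(result["retriever"]["documents"])
```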
Services
You can also integrate Atlas Vector Search with the following AI services.
Amazon Bedrock Knowledge Base
Amazon Bedrock is a fully managed service for building generative AI applications. You can integrate Atlas Vector Search as a knowledge base for Amazon Bedrock to store custom data in Atlas and implement RAG.
To get started, see Get Started with the Amazon Bedrock Knowledge Base Integration.
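After you configure an Atlas-backed knowledge base in Amazon Bedrock, you can query it from code. The following sketch uses the boto3 Bedrock Agent Runtime client; the region, knowledge base ID, and model ARN are placeholder assumptions you would replace with your own.

```python
import boto3

# Query an existing knowledge base through the Bedrock Agent Runtime API.
client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

# Placeholders: your knowledge base ID and a model ARN you have access to.
response = client.retrieve_and_generate(
    input={"text": "What does Atlas Vector Search do?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "<knowledge-base-id>",
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-v2",
        },
    },
)

# Bedrock retrieves relevant passages from Atlas and generates a grounded answer.
print(response["output"]["text"])
```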
API Resources
Refer to the API resources for each integration as you develop with AI integrations for Atlas Vector Search.