Our experts help founders grow with dedicated one-on-one sessions that offer personalized recommendations to add scale and solve problems.
Receive free credits for MongoDB products, including Atlas Database, Atlas Vector Search (preview), Atlas Search, Atlas App Services, and more, to supercharge your data infrastructure.
Get your AI solution in front of our users, customers, and partners. Collaborate with our product and partner teams. Earn the opportunity to create go-to-market partnerships with MongoDB.
To leverage the foundation models or large language models (LLMs) that power generative AI applications, organizations need to be able to train and prompt them with their own data. By doing this, the models can give responses that better reflect the real-time “ground truth.” This technique is called retrieval-augmented generation (RAG), and it enables you to provide the context necessary to craft AI-generated outputs that are reliable, relevant, and accurate.
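The RAG pattern described above can be sketched in a few lines: retrieve the documents most relevant to a question, then prepend them to the prompt before calling an LLM. The toy word-overlap retriever and the prompt template below are illustrative assumptions, not a specific MongoDB or LLM-vendor API; a real system would retrieve via vector search.

```python
# Minimal RAG sketch: retrieve relevant context, then augment the prompt
# with it so the model answers from your "ground truth" data.

def retrieve(question: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Toy retriever (assumption): rank documents by words shared with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_rag_prompt(question: str, documents: list[str]) -> str:
    """Assemble an augmented prompt: retrieved context first, question after."""
    context = "\n".join(retrieve(question, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

docs = [
    "Atlas Vector Search stores embeddings alongside operational data.",
    "Our refund policy allows returns within 30 days.",
]
prompt = build_rag_prompt("How does Atlas store embeddings?", docs)
```

The augmented prompt now carries up-to-date context the base model was never trained on, which is the core of the RAG approach.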
To prompt AI models with your own data, you first need to turn it into vector embeddings. These vectors provide multi-dimensional numerical encodings of your data that capture its patterns, relationships, and structures. Vector embeddings give your data semantic meaning; calculating the distance between vectors makes it easy for applications to understand the relationships and similarities between different data objects.
Querying these vectors allows you to extend information search and discovery beyond keyword matching to context-aware semantic search that is capable of inferring meaning and intent from a user’s search term.
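To make the distance idea concrete, here is a self-contained sketch of semantic search over embeddings using cosine similarity. The tiny hand-made 3-dimensional "embeddings" are assumptions for demonstration only; real embeddings come from a model and typically have hundreds or thousands of dimensions.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity: 1.0 means identical direction, 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Placeholder embeddings: semantically similar terms get nearby vectors.
embeddings = {
    "puppy":   [0.9, 0.8, 0.1],
    "dog":     [1.0, 0.7, 0.2],
    "invoice": [0.1, 0.2, 0.9],
}

def semantic_search(query_vec: list[float], top_k: int = 2) -> list[str]:
    """Return the top_k stored items ranked by similarity to the query vector."""
    ranked = sorted(
        embeddings,
        key=lambda k: cosine_similarity(query_vec, embeddings[k]),
        reverse=True,
    )
    return ranked[:top_k]

results = semantic_search(embeddings["puppy"])  # → ["puppy", "dog"]
```

"dog" ranks ahead of "invoice" even though neither is a keyword match for "puppy"; that is the context-aware behavior keyword search cannot provide.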
All of this data needs to be queried to power application functionality: not just to surface semantic meaning through vector search, but also to handle routine operations such as key-value lookups, absorbing a firehose of updates, and running sophisticated aggregations and transformations for analytics processing. These queries power application features well beyond any generative AI use case, but they become even more important when used alongside in-context prompts to foundation models, improving the accuracy and relevance of the outputs an LLM generates.
MongoDB Atlas integrates operational and vector databases in a single, unified platform. It provides foundation models with the up-to-date information needed to build context-aware, generative AI-powered applications. And it also handles full-text search, real-time analytics, and event-driven experiences so developers can build a wide range of application services while keeping the data architecture simple.
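As a sketch of how vector and operational queries combine in one place, the aggregation pipeline below pairs an Atlas `$vectorSearch` stage with an ordinary `$project` stage. The index name (`vector_index`), field path (`embedding`), and query vector are placeholder assumptions; actually running it requires a pymongo connection to an Atlas cluster with a vector search index configured.

```python
# Hypothetical Atlas aggregation pipeline: vector search plus a regular
# operational stage in the same query. Names below are assumptions.

query_vector = [0.12, -0.45, 0.33]  # would come from an embedding model

pipeline = [
    {
        "$vectorSearch": {
            "index": "vector_index",   # assumed vector search index name
            "path": "embedding",       # field holding the stored vectors
            "queryVector": query_vector,
            "numCandidates": 100,      # candidates considered before ranking
            "limit": 5,                # results returned to the next stage
        }
    },
    # Ordinary aggregation stages can follow in the same pipeline:
    {"$project": {"title": 1, "score": {"$meta": "vectorSearchScore"}}},
]

# Against a live cluster this would run as: db.collection.aggregate(pipeline)
```

Because both stages live in one pipeline against one platform, there is no separate vector database to sync with the operational data.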
For more information on Atlas AI capabilities, visit our AI hub.
Early-stage startups: This program is intended for startups at Series A or earlier.
Product or Service: Startups building a product or service (dev shops, consultants, or marketing agencies are not eligible).
MongoDB for Startups Alumni: Startups must have already been accepted into the MongoDB for Startups program to be eligible for the AI Innovators track.
Website: A live, functioning website.
Company LinkedIn: A valid company LinkedIn profile, with a direct association to the person who is applying.
If you were already accepted into the startup program, you will need to consume your initial credit allotment before receiving the new credits that come with the AI Innovators track.
If you are not currently participating in the startup program, or have already graduated from it, you will receive a unique promo code in your welcome email.