Vector Indexes

Large Language Models (LLMs) are awesome at generating human-like text, but they have a limitation: they can only understand and use information they were trained on or that you directly provide in the prompt.

Retrieval Augmented Generation (RAG)

This is where Retrieval Augmented Generation (RAG) comes in. RAG uses Vector Indexes to help LLMs work with your unique data. A Vector Index transforms your text, documents, or other information into lists of numbers called vectors, creating a searchable index. To build one, you preprocess your data, break it into smaller pieces (chunking), and convert each piece into a vector (embedding).

At query time, RAG combines the Vector Index with the LLM: it retrieves the most relevant pieces of your indexed data and injects them into the prompt. This gives the LLM the context and specific data it needs to generate more accurate, relevant responses tailored to your domain. It's important to note that RAG doesn't change how the LLM fundamentally works; it simply retrieves relevant data and includes it in the prompt.
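To make the chunk → embed → retrieve → augment steps concrete, here is a minimal, self-contained sketch. It uses toy word-count vectors and cosine similarity purely for illustration; a real pipeline (including BotDojo's) uses a learned embedding model and a proper vector store, and the sample document and query below are made up:

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy "embedding": word counts. Real systems use a learned
    # embedding model; this just makes the example runnable.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# 1. Chunk: break the source text into smaller pieces.
document = (
    "BotDojo flows connect data sources to LLMs. "
    "Vector indexes store embeddings for fast retrieval. "
    "Chunking splits documents into smaller pieces."
)
chunks = [s.strip() + "." for s in document.split(".") if s.strip()]

# 2. Embed: convert each chunk into a vector and index it.
index = [(chunk, embed(chunk)) for chunk in chunks]

# 3. Retrieve: embed the query, then find the most similar chunk.
query = "How does retrieval work with vector indexes?"
qvec = embed(query)
best_chunk, _ = max(index, key=lambda item: cosine(qvec, item[1]))

# 4. Augment: inject the retrieved context into the LLM prompt.
prompt = f"Context: {best_chunk}\n\nQuestion: {query}"
print(best_chunk)
```

Swapping in a real embedding model and an approximate-nearest-neighbor index changes the pieces, not the shape: the same four steps still apply.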

Don't worry, BotDojo makes this really easy. In the next section, we'll walk you through importing your data into a Vector Index and hooking it up to a Flow.