🌟 Support This Project: Your sponsorship fuels innovation in RAG technologies. Become a sponsor to help maintain and expand this valuable resource!
Welcome to one of the most comprehensive and dynamic collections of Retrieval-Augmented Generation (RAG) tutorials available today. This repository serves as a hub for cutting-edge techniques aimed at enhancing the accuracy, efficiency, and contextual richness of RAG systems.
Don't miss out on cutting-edge developments, new tutorials, and community insights!
Subscribe to DiamantAI's top 1% AI-focused Newsletter
Retrieval-Augmented Generation (RAG) is revolutionizing the way we combine information retrieval with generative AI. This repository showcases a curated collection of advanced techniques designed to supercharge your RAG systems, enabling them to deliver more accurate, contextually relevant, and comprehensive responses.
Our goal is to provide a valuable resource for researchers and practitioners looking to push the boundaries of what's possible with RAG. By fostering a collaborative environment, we aim to accelerate innovation in this exciting field.
🖋️ Check out my Prompt Engineering Techniques guide for a comprehensive collection of prompting strategies, from basic concepts to advanced techniques, enhancing your ability to interact effectively with AI language models.
🤖 Explore my GenAI Agents Repository to discover a variety of AI agent implementations and tutorials, showcasing how different AI technologies can be combined to create powerful, interactive systems.
This repository grows stronger with your contributions! Join our vibrant Discord community — the central hub for shaping and advancing this project together 🤝
RAG Techniques Discord Community
Whether you're an expert or just starting out, your insights can shape the future of RAG. Join us to propose ideas, get feedback, and collaborate on innovative techniques. For contribution guidelines, please refer to our CONTRIBUTING.md file. Let's advance RAG technology together!
🔗 For discussions on GenAI, RAG, or custom agents, or to explore knowledge-sharing opportunities, feel free to connect on LinkedIn.
- 🧠 State-of-the-art RAG enhancements
- 📚 Comprehensive documentation for each technique
- 🛠️ Practical implementation guidelines
- 🌟 Regular updates with the latest advancements
Explore the extensive list of cutting-edge RAG techniques:
- Simple RAG 🌱
Introducing basic RAG techniques ideal for newcomers.
Start with basic retrieval queries and integrate incremental learning mechanisms.
- Simple RAG using a CSV file 🧩
Introducing basic RAG using CSV files.
Uses CSV files to build a basic retrieval pipeline and integrates with OpenAI to create a question-answering system.
- Reliable RAG
Enhances Simple RAG by adding validation and refinement steps to ensure the accuracy and relevance of retrieved information.
Checks retrieved documents for relevancy and highlights the segments of the documents used for answering.
- Choose Chunk Size 📏
Selecting an appropriate fixed size for text chunks to balance context preservation and retrieval efficiency.
Experiment with different chunk sizes to find the optimal balance between preserving context and maintaining retrieval speed for your specific use case.
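A minimal sketch of fixed-size chunking with overlap (the function name and defaults are illustrative, not part of this repository's API):

```python
def chunk_text(text: str, chunk_size: int = 1000, overlap: int = 200) -> list[str]:
    """Split text into fixed-size chunks, overlapping neighbours so that
    context straddling a chunk boundary is not lost."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks
```

Larger chunks preserve more context per retrieval hit; smaller chunks give finer-grained matching. The overlap parameter is the usual knob for softening the trade-off.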
- Proposition Chunking
Breaking the text down into concise, complete, meaningful propositions, allowing for better control and handling of specific queries (especially knowledge extraction).
- 💪 Proposition Generation: The LLM is used in conjunction with a custom prompt to generate factual statements from the document chunks.
- ✅ Quality Checking: The generated propositions are passed through a grading system that evaluates accuracy, clarity, completeness, and conciseness.
- The Propositions Method: Enhancing Information Retrieval for AI Systems - A comprehensive blog post exploring the benefits and implementation of proposition chunking in RAG systems.
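The generate-then-grade loop above can be sketched as follows; `generate` and `grade` are hypothetical stand-ins for the LLM prompts described (proposition generation and the quality-grading system):

```python
def propositionize(chunk, generate, grade, threshold=0.8):
    """Proposition chunking sketch: turn a chunk into standalone factual
    statements, then keep only propositions that pass a quality bar.

    generate: chunk -> list of proposition strings (an LLM call in practice)
    grade:    proposition -> quality score in [0, 1] (an LLM grader in practice)
    """
    return [p for p in generate(chunk) if grade(p) >= threshold]
```

In a real pipeline, `grade` would score accuracy, clarity, completeness, and conciseness as described above; here it is a single scalar for brevity.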
- Query Transformations 🔄
Modifying and expanding queries to improve retrieval effectiveness.
- ✍️ Query Rewriting: Reformulate queries to improve retrieval.
- 🔙 Step-back Prompting: Generate broader queries for better context retrieval.
- 🧩 Sub-query Decomposition: Break complex queries into simpler sub-queries.
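Sub-query decomposition, for example, can be sketched with an injected LLM callable (the prompt wording and one-per-line convention are assumptions, not a fixed API):

```python
def decompose_query(query: str, llm) -> list[str]:
    """Break a complex query into simpler sub-queries.

    llm: any callable mapping a prompt string to a response string;
    sub-queries are expected one per line in the response.
    """
    prompt = (
        "Break the following question into simpler sub-questions, "
        f"one per line:\n{query}"
    )
    response = llm(prompt)
    return [line.strip() for line in response.splitlines() if line.strip()]
```

Each sub-query is then retrieved against independently, and the merged results feed the final generation step.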
- Hypothetical Questions (HyDE Approach) ❓
Generating hypothetical questions to improve alignment between queries and data.
Create hypothetical questions that point to relevant locations in the data, enhancing query-data matching.
- HyDE: Exploring Hypothetical Document Embeddings for AI Retrieval - A short blog post explaining this method clearly.
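A HyDE-style sketch, with `generate` and `embed` as hypothetical stand-ins for an LLM call and an embedding model: the query's embedding is replaced by the embedding of a generated hypothetical passage before matching.

```python
def hyde_search(query: str, generate, embed, index: dict) -> str:
    """Embed an LLM-generated hypothetical answer instead of the raw query,
    then match it against precomputed document embeddings.

    generate: query -> hypothetical passage (an LLM call in practice)
    embed:    text  -> vector (list of floats)
    index:    maps document text to its precomputed embedding
    """
    hypothetical = generate(query)
    qvec = embed(hypothetical)

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(y * y for y in b) ** 0.5
        return dot / (na * nb) if na and nb else 0.0

    # Return the document whose embedding is closest to the hypothetical answer.
    return max(index, key=lambda doc: cosine(qvec, index[doc]))
```

The intuition: a hypothetical answer lives in the same embedding neighbourhood as real answer passages, so it matches better than the question itself.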
- Contextual Chunk Headers (CCH)
A method of creating document-level and section-level context and prepending those headers to chunks prior to embedding them.
Create a chunk header that includes context about the document and/or section of the document, and prepend that to each chunk in order to improve the retrieval accuracy.
dsRAG: open-source retrieval engine that implements this technique (and a few other advanced RAG techniques)
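The header-prepending step is simple enough to sketch directly (a minimal illustration, not dsRAG's actual implementation):

```python
def add_chunk_headers(chunks, doc_title, section_title=None):
    """Prepend document/section context to each chunk before embedding (CCH)."""
    header = f"Document: {doc_title}"
    if section_title:
        header += f"\nSection: {section_title}"
    return [f"{header}\n\n{chunk}" for chunk in chunks]
```

Because the header is embedded together with the chunk body, queries mentioning the document's topic can match chunks whose raw text never states it.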
- Relevant Segment Extraction (RSE)
A method of dynamically constructing multi-chunk segments of text relevant to a given query.
Perform a retrieval post-processing step that analyzes the most relevant chunks and identifies longer multi-chunk segments to provide more complete context to the LLM.
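The segment-construction step can be sketched as merging nearby relevant chunk indices into contiguous spans (the `max_gap` heuristic is an assumption; real RSE implementations score segments more carefully):

```python
def extract_segments(relevant_ids, all_chunks, max_gap=1):
    """Merge relevant chunk indices into contiguous multi-chunk segments (RSE).

    Indices within `max_gap` of each other are joined into one segment,
    pulling in any intervening chunks for complete context.
    """
    if not relevant_ids:
        return []
    ids = sorted(set(relevant_ids))
    segments, start, prev = [], ids[0], ids[0]
    for i in ids[1:]:
        if i - prev > max_gap:
            segments.append(" ".join(all_chunks[start:prev + 1]))
            start = i
        prev = i
    segments.append(" ".join(all_chunks[start:prev + 1]))
    return segments
```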
- Context Enrichment Techniques 📝
Enhancing retrieval accuracy by embedding individual sentences and extending context to neighboring sentences.
Retrieve the most relevant sentence while also accessing the sentences before and after it in the original text.
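A sentence-window sketch of this idea, assuming the index of the best-matching sentence has already been found by vector search:

```python
def retrieve_with_window(sentences, best_idx, window=1):
    """Return the most relevant sentence plus its neighbours for extra context."""
    start = max(0, best_idx - window)
    end = min(len(sentences), best_idx + window + 1)
    return " ".join(sentences[start:end])
```

Embedding single sentences keeps matching precise, while the window restores enough surrounding context for the LLM to use the hit correctly.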
- Semantic Chunking 🧠
Dividing documents based on semantic coherence rather than fixed sizes.
Use NLP techniques to identify topic boundaries or coherent sections within documents for more meaningful retrieval units.
- Semantic Chunking: Improving AI Information Retrieval - A comprehensive blog post exploring the benefits and implementation of semantic chunking in RAG systems.
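One common way to implement this, sketched below with an injected `embed` callable (a real system would use a sentence-embedding model; the threshold value is an assumption):

```python
def semantic_chunk(sentences, embed, threshold=0.6):
    """Semantic chunking sketch: start a new chunk wherever consecutive
    sentence embeddings drop below a similarity threshold (a likely
    topic boundary).

    embed: sentence -> vector (list of floats)
    """
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(y * y for y in b) ** 0.5
        return dot / (na * nb) if na and nb else 0.0

    chunks, current = [], [sentences[0]]
    for prev, cur in zip(sentences, sentences[1:]):
        if cosine(embed(prev), embed(cur)) < threshold:
            chunks.append(" ".join(current))
            current = [cur]
        else:
            current.append(cur)
    chunks.append(" ".join(current))
    return chunks
```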
- Contextual Compression 🗜️
Compressing retrieved information while preserving query-relevant content.
Use an LLM to compress or summarize retrieved chunks, preserving key information relevant to the query.
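The compression step can be sketched with an injected `summarize` callable standing in for the LLM (its signature and the empty-string-means-irrelevant convention are assumptions):

```python
def compress_chunks(query, chunks, summarize):
    """Contextual compression sketch: condense each retrieved chunk,
    keeping only query-relevant content and dropping chunks with none.

    summarize: (query, chunk) -> compressed text, or "" if nothing relevant
    """
    compressed = [summarize(query, c) for c in chunks]
    return [c for c in compressed if c]
```

Compression shrinks the prompt fed to the generator, leaving room for more (or longer) retrieved passages within the context window.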
- Document Augmentation through Question Generation for Enhanced Retrieval
This implementation demonstrates a text augmentation technique that leverages additional question generation to improve document retrieval within a vector database. By generating and incorporating various questions related to each text fragment, the system enhances the standard retrieval process, thus increasing the likelihood of finding relevant documents that can be utilized as context for generative question answering.
Use an LLM to augment the text dataset with questions that could be asked about each document.
- Fusion Retrieval 🔗
Optimizing search results by combining different retrieval methods.
Combine keyword-based search with vector-based search for more comprehensive and accurate retrieval.
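A standard way to combine the two ranked lists is Reciprocal Rank Fusion, sketched here (the choice of RRF and the `k=60` constant are common conventions, not something this repository mandates):

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Fuse ranked lists from different retrievers (e.g. BM25 and vector
    search) with Reciprocal Rank Fusion:
    score(d) = sum over lists of 1 / (k + rank_of_d_in_that_list)."""
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```

RRF needs only ranks, not raw scores, which sidesteps the problem that BM25 scores and cosine similarities live on incompatible scales.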
- Intelligent Reranking 📈
Applying advanced scoring mechanisms to improve the relevance ranking of retrieved results.
- 🧠 LLM-based Scoring: Use a language model to score the relevance of each retrieved chunk.
- 🔀 Cross-Encoder Models: Re-encode both the query and retrieved documents jointly for similarity scoring.
- 🏆 Metadata-enhanced Ranking: Incorporate metadata into the scoring process for more nuanced ranking.
- Relevance Revolution: How Re-ranking Transforms RAG Systems - A comprehensive blog post exploring the power of re-ranking in enhancing RAG system performance.
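Whatever the scoring backend, reranking reduces to sorting by a pairwise score; a minimal sketch with an injected `score_fn` (a cross-encoder's predict call or an LLM grader in practice):

```python
def rerank(query, docs, score_fn, top_k=3):
    """Rerank retrieved docs with a (query, doc) scoring function,
    e.g. a cross-encoder model or an LLM-based relevance grader."""
    scored = sorted(docs, key=lambda d: score_fn(query, d), reverse=True)
    return scored[:top_k]
```

The typical pattern is to over-retrieve cheaply (say, 50 candidates by embedding similarity) and then apply the expensive scorer only to that shortlist.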
- Multi-faceted Filtering 🔍
Applying various filtering techniques to refine and improve the quality of retrieved results.
- 🏷️ Metadata Filtering: Apply filters based on attributes like date, source, author, or document type.
- 📊 Similarity Thresholds: Set thresholds for relevance scores to keep only the most pertinent results.
- 📄 Content Filtering: Remove results that don't match specific content criteria or essential keywords.
- 🌈 Diversity Filtering: Ensure result diversity by filtering out near-duplicate entries.
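Several of these filters compose naturally in one pass; a sketch (the dict shape with `text`, `score`, and `source` keys is an assumption — adapt it to whatever your vector store returns):

```python
def filter_results(results, min_score=0.7, allowed_sources=None):
    """Apply similarity-threshold, metadata, and diversity filters
    to a list of retrieved results."""
    kept, seen = [], set()
    for r in results:
        if r["score"] < min_score:               # similarity threshold
            continue
        if allowed_sources and r["source"] not in allowed_sources:
            continue                             # metadata filter
        if r["text"] in seen:                    # crude diversity filter
            continue
        seen.add(r["text"])
        kept.append(r)
    return kept
```

A production diversity filter would usually use embedding similarity rather than exact-text matching to catch near-duplicates.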
- Hierarchical Indices 🗂️
Creating a multi-tiered system for efficient information navigation and retrieval.
Implement a two-tiered system for document summaries and detailed chunks, both containing metadata pointing to the same location in the data.
- Hierarchical Indices: Enhancing RAG Systems - A comprehensive blog post exploring the power of hierarchical indices in enhancing RAG system performance.
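The two-tier search can be sketched as follows, with `relevance` as a hypothetical stand-in for embedding similarity:

```python
def hierarchical_search(query, summaries, chunks_by_doc, relevance, top_docs=2):
    """Two-tier search sketch: rank document summaries first, then search
    detailed chunks only within the top-scoring documents.

    summaries:     doc_id -> summary text
    chunks_by_doc: doc_id -> list of chunk texts
    relevance:     (query, text) -> score (embedding similarity in practice)
    """
    top = sorted(summaries, key=lambda d: relevance(query, summaries[d]),
                 reverse=True)[:top_docs]
    candidates = [(c, relevance(query, c)) for d in top for c in chunks_by_doc[d]]
    candidates.sort(key=lambda pair: pair[1], reverse=True)
    return [c for c, _ in candidates]
```

Searching summaries first prunes most of the corpus before the expensive chunk-level search runs.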
- Ensemble Retrieval 🎭
Combining multiple retrieval models or techniques for more robust and accurate results.
Apply different embedding models or retrieval algorithms and use voting or weighting mechanisms to determine the final set of retrieved documents.
- Multi-modal Retrieval 📽️
Extending RAG capabilities to handle diverse data types for richer responses.
- Multi-modal RAG with Multimedia Captioning - Caption multimedia data (PDFs, PPTs, etc.) and store the captions alongside text data in the vector store, then retrieve them together.
- Multi-modal RAG with ColPali - Instead of captioning, convert all the data into images, then find the most relevant images and pass them to a vision large language model.
- Retrieval with Feedback Loops 🔁
Implementing mechanisms to learn from user interactions and improve future retrievals.
Collect and utilize user feedback on the relevance and quality of retrieved documents and generated responses to fine-tune retrieval and ranking models.
- Adaptive Retrieval 🎯
Dynamically adjusting retrieval strategies based on query types and user contexts.
Classify queries into different categories and use tailored retrieval strategies for each, considering user context and preferences.
- Iterative Retrieval 🔄
Performing multiple rounds of retrieval to refine and enhance result quality.
Use the LLM to analyze initial results and generate follow-up queries to fill in gaps or clarify information.
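The retrieve-analyze-retrieve loop can be sketched with two injected callables, where `refine` stands in for the LLM's gap analysis:

```python
def iterative_retrieve(query, retrieve, refine, max_rounds=3):
    """Run multiple retrieval rounds, letting an LLM-style `refine` step
    propose a follow-up query (or None to stop) based on results so far.

    retrieve: query -> list of docs
    refine:   (query, collected_docs) -> follow-up query, or None to stop
    """
    collected, current = [], query
    for _ in range(max_rounds):
        collected.extend(retrieve(current))
        current = refine(current, collected)
        if current is None:
            break
    return collected
```

The `max_rounds` cap matters: without it, a refine step that always finds another gap would loop indefinitely and burn tokens.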
- DeepEval Evaluation
Performing evaluations of Retrieval-Augmented Generation systems, covering several metrics and creating test cases.
Use the deepeval library to conduct test cases on the correctness, faithfulness, and contextual relevancy of RAG systems.
- GroUSE Evaluation
Evaluate the final stage of Retrieval-Augmented Generation using the metrics of the GroUSE framework, and meta-evaluate your custom LLM judge on GroUSE unit tests.
Use the grouse package to evaluate contextually grounded LLM generations with GPT-4 on the six metrics of the GroUSE framework, and use unit tests to evaluate a custom Llama 3.1 405B evaluator.
- Explainable Retrieval 🔍
Providing transparency in the retrieval process to enhance user trust and system refinement.
Explain why certain pieces of information were retrieved and how they relate to the query.
- Knowledge Graph Integration (Graph RAG) 🕸️
Incorporating structured data from knowledge graphs to enrich context and improve retrieval.
Retrieve entities and their relationships from a knowledge graph relevant to the query, combining this structured data with unstructured text for more informative responses.
- GraphRAG (Microsoft) 🎯
Microsoft GraphRAG (open source) is an advanced RAG system that integrates knowledge graphs to improve the performance of LLMs.
- Analyze an input corpus by extracting entities and relationships from text units, then generate summaries of each community and its constituents from the bottom up.
- RAPTOR: Recursive Abstractive Processing for Tree-Organized Retrieval 🌳
Implementing a recursive approach to process and organize retrieved information in a tree structure.
Use abstractive summarization to recursively process and summarize retrieved documents, organizing the information in a tree structure for hierarchical context.
- Self RAG 🔁
A dynamic approach that combines retrieval-based and generation-based methods, adaptively deciding whether to use retrieved information and how to best utilize it in generating responses.
- Implement a multi-step process including retrieval decision, document retrieval, relevance evaluation, response generation, support assessment, and utility evaluation to produce accurate, relevant, and useful outputs.
- Corrective RAG 🔧
A sophisticated RAG approach that dynamically evaluates and corrects the retrieval process, combining vector databases, web search, and language models for highly accurate and context-aware responses.
- Integrate Retrieval Evaluator, Knowledge Refinement, Web Search Query Rewriter, and Response Generator components to create a system that adapts its information sourcing strategy based on relevance scores and combines multiple sources when necessary.
- Sophisticated Controllable Agent for Complex RAG Tasks 🤖
An advanced RAG solution designed to tackle complex questions that simple semantic similarity-based retrieval cannot solve. This approach uses a sophisticated deterministic graph as the "brain" 🧠 of a highly controllable autonomous agent, capable of answering non-trivial questions from your own data.
- Implement a multi-step process involving question anonymization, high-level planning, task breakdown, adaptive information retrieval and question answering, continuous re-planning, and rigorous answer verification to ensure grounded and accurate responses.
To begin implementing these advanced RAG techniques in your projects:
- Clone this repository:
git clone https://github.com/NirDiamant/RAG_Techniques.git
- Navigate to the technique you're interested in:
cd all_rag_techniques/technique-name
- Follow the detailed implementation guide in each technique's directory.
We welcome contributions from the community! If you have a new technique or improvement to suggest:
- Fork the repository
- Create your feature branch:
git checkout -b feature/AmazingFeature
- Commit your changes:
git commit -m 'Add some AmazingFeature'
- Push to the branch:
git push origin feature/AmazingFeature
- Open a pull request
This project is licensed under a custom non-commercial license - see the LICENSE file for details.
⭐️ If you find this repository helpful, please consider giving it a star!
Keywords: RAG, Retrieval-Augmented Generation, NLP, AI, Machine Learning, Information Retrieval, Natural Language Processing, LLM, Embeddings, Semantic Search