🦙 Local RAG Agent with Llama 3.2

This application implements a Retrieval-Augmented Generation (RAG) system using Llama 3.2 via Ollama, with Qdrant as the vector database.

Features

  • Fully local RAG implementation
  • Powered by Llama 3.2 through Ollama
  • Vector search using Qdrant
  • Interactive playground interface
  • No external API dependencies
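The core idea behind the features above is the retrieve-then-generate loop: embed the query, find the most similar stored documents, and prepend them to the prompt before the LLM answers. The sketch below illustrates that loop with a toy bag-of-words embedding and an in-memory document list standing in for OpenHermes and Qdrant; it is a conceptual illustration, not the actual code in local_rag_agent.py.

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words embedding; the real app uses OpenHermes via OllamaEmbedder."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    """Rank stored documents by similarity to the query (Qdrant's job in the real app)."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "Qdrant is a vector database for similarity search.",
    "Llama 3.2 is a language model that runs locally via Ollama.",
    "RAG augments a prompt with retrieved context before generation.",
]

# Retrieval step: pick the best-matching document, then build an augmented prompt.
context = retrieve("What does Qdrant do?", docs, k=1)
prompt = f"Answer using this context:\n{context[0]}\n\nQuestion: What does Qdrant do?"
```

In the real application, `embed` is replaced by OpenHermes embeddings and `retrieve` by a Qdrant similarity search over vectors stored at `localhost:6333`.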

How to Get Started?

  1. Clone the GitHub repository:
git clone https://github.com/Shubhamsaboo/awesome-llm-apps.git
  2. Install the required dependencies:
cd rag_tutorials/local_rag_agent
pip install -r requirements.txt
  3. Pull and start the Qdrant vector database locally with Docker:
docker pull qdrant/qdrant
docker run -p 6333:6333 qdrant/qdrant
  4. Install Ollama, then pull Llama 3.2 as the LLM and OpenHermes as the embedding model for OllamaEmbedder:
ollama pull llama3.2
ollama pull openhermes
  5. Run the AI RAG agent:
python local_rag_agent.py
  6. Open your web browser and navigate to the URL printed in the console output to interact with the RAG agent through the playground interface.
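Once the steps above are complete, the agent talks to Ollama over its local REST API (by default at `http://localhost:11434`). The sketch below shows the shape of a non-streaming request to Ollama's `/api/chat` endpoint, with retrieved chunks folded into a system message; the helper function and its names are illustrative, not part of the tutorial's actual code.

```python
import json

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default local endpoint

def build_chat_payload(question, context_chunks, model="llama3.2"):
    """Assemble retrieved chunks and the user question into an Ollama chat request."""
    context = "\n\n".join(context_chunks)
    return {
        "model": model,
        "stream": False,  # ask for a single complete response instead of a token stream
        "messages": [
            {"role": "system", "content": f"Answer using only this context:\n{context}"},
            {"role": "user", "content": question},
        ],
    }

payload = build_chat_payload("What does Qdrant do?", ["Qdrant is a vector database."])
body = json.dumps(payload).encode()  # ready to POST with urllib.request or requests
```

Sending `body` to `OLLAMA_URL` (with Ollama running) returns a JSON response whose generated answer is grounded in the retrieved context rather than the model's parametric memory alone.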