This project was developed as a practical exercise for the Hugging Face Agents course.
The main goal is to create an AI agent capable of supporting the learning process through Retrieval-Augmented Generation (RAG), leveraging LlamaIndex to ingest, index, and query personal study documents.
- Practice concepts from the Hugging Face Agents course.
- Build an intelligent assistant that can answer questions based on personal documents.
- Integrate RAG, LlamaIndex, ChromaDB, and Ollama into a seamless workflow.
- Enhance study effectiveness by turning static notes into an interactive tool.
- LlamaIndex – document ingestion, indexing, and retrieval
- ChromaDB – persistent vector store
- Ollama – local LLMs for fast, private responses
- Hugging Face Embeddings (BAAI/bge-small-en-v1.5)
- Python 3.10+
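The stack above can typically be installed from PyPI. This is a sketch of the setup, not the project's pinned requirements; the package names assume the modular llama-index layout (version 0.10 or later), so verify them against your installed version.

```shell
# Assumed package names for the stack listed above; adjust to your
# llama-index version and requirements file if the project pins one.
pip install llama-index \
    llama-index-vector-stores-chroma \
    llama-index-embeddings-huggingface \
    llama-index-llms-ollama \
    chromadb
```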
- Documents are loaded from a specified folder.
- Each document is split into sentence-level chunks, which are converted into embeddings.
- The embeddings are stored persistently in ChromaDB.
- You can then ask questions in natural language and receive context-aware answers powered by a local LLM.
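The steps above can be sketched with LlamaIndex's high-level API. This is a minimal illustration rather than the project's actual code: the `./docs` and `./chroma_db` paths, the `study_notes` collection name, and the `llama3` Ollama model are all assumptions you would replace with your own values.

```python
def build_query_engine(docs_dir="./docs", db_dir="./chroma_db"):
    """Load documents, embed them, persist vectors to ChromaDB, and return a query engine."""
    # Imports are deferred so the sketch can be read and loaded even when the
    # optional packages are not installed (llama-index >= 0.10 modular layout).
    import chromadb
    from llama_index.core import (
        Settings,
        SimpleDirectoryReader,
        StorageContext,
        VectorStoreIndex,
    )
    from llama_index.core.node_parser import SentenceSplitter
    from llama_index.embeddings.huggingface import HuggingFaceEmbedding
    from llama_index.llms.ollama import Ollama
    from llama_index.vector_stores.chroma import ChromaVectorStore

    # Embeddings via Hugging Face; generation via a local Ollama model.
    # "llama3" is a placeholder -- use whichever model you have pulled locally.
    Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")
    Settings.llm = Ollama(model="llama3", request_timeout=120.0)

    # Persistent vector store backed by ChromaDB on disk.
    collection = chromadb.PersistentClient(path=db_dir).get_or_create_collection("study_notes")
    storage_context = StorageContext.from_defaults(
        vector_store=ChromaVectorStore(chroma_collection=collection)
    )

    # Load documents, split them into sentence-based chunks, embed, and index.
    documents = SimpleDirectoryReader(docs_dir).load_data()
    index = VectorStoreIndex.from_documents(
        documents,
        storage_context=storage_context,
        transformations=[SentenceSplitter()],
    )
    return index.as_query_engine()


if __name__ == "__main__":
    engine = build_query_engine()
    print(engine.query("Summarize my notes on vector databases."))
```

Running the script ingests everything in the documents folder once and then answers the query against the persisted index; on later runs the ChromaDB collection is reused from disk.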