Clone the repo:

```sh
git clone https://github.com/pratapyash/local-rag-qa-engine
cd local-rag-qa-engine
```

Install the dependencies (requires Poetry):

```sh
poetry install
```

Fetch your LLM (llama3.2:1b by default):

```sh
ollama pull llama3.2:1b
```

Run the Ollama server:

```sh
ollama serve
```

Start RagBase:

```sh
poetry run streamlit run app.py
```

Extracts text from PDF documents and creates chunks (using semantic and character splitters) that are stored in a vector database.
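The chunking step can be illustrated with a minimal sketch. This is not the project's actual splitter (which combines semantic and character splitting); `chunk_text` is a hypothetical fixed-size character splitter with overlap, shown only to make the idea concrete:

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character chunks that overlap,
    so sentences cut at a boundary still appear whole in one chunk."""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks
```

Each chunk would then be embedded and stored in the vector database for later similarity search.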
Given a query, searches for similar documents, reranks the results, and applies an LLM chain filter before returning the response.
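The retrieve-then-rerank flow above can be sketched with toy scoring functions. These are illustrative stand-ins, not the project's code: the real pipeline scores candidates with vector similarity and a learned reranker, while this sketch uses simple word overlap:

```python
def retrieve(query: str, docs: list[str], k: int = 3) -> list[str]:
    # Toy first-stage retrieval: rank documents by how many
    # query words they share (stand-in for vector similarity).
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def rerank(query: str, docs: list[str]) -> list[str]:
    # Toy reranker: among documents with equal word overlap,
    # prefer shorter (more focused) ones.
    q = set(query.lower().split())
    return sorted(docs, key=lambda d: (-len(q & set(d.lower().split())), len(d)))
```

A final LLM-based filter would then drop any reranked documents the model judges irrelevant before the answer is generated.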
Combines the LLM with the retriever to answer a given user question.
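The combination step amounts to stuffing the retrieved chunks into the model's prompt. A minimal sketch (the function name and prompt wording are hypothetical, not taken from the repo):

```python
def build_prompt(question: str, contexts: list[str]) -> str:
    # Assemble retrieved chunks and the user's question into a
    # single grounded prompt for the local LLM.
    context_block = "\n\n".join(contexts)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context_block}\n\n"
        f"Question: {question}"
    )
```

The resulting string would be sent to the Ollama-served model, whose reply is returned to the user as the answer.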