RAG System

A Retrieval-Augmented Generation (RAG) system that allows users to upload documents to a knowledge base, configure LLM settings, and query the knowledge base through a chatbot interface.

Features

  • Document Management: Upload and manage PDF, Word, Excel, and image files in your knowledge base
  • LLM Configuration: Configure LLM details like model, API key, and parameters through the UI
  • RAG Chatbot: Query your knowledge base using state-of-the-art language models

Architecture

The system is built from the following components:

  • Backend: FastAPI server with LangChain integration
  • Frontend: HTML5, JavaScript, and CSS
  • Vector Database: ChromaDB for efficient similarity search
  • LLM Integration: Support for Llama3 and Gemini models
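The components above implement a standard retrieve-then-generate loop: the query is embedded, the most similar document chunks are fetched from the vector store, and the LLM answers from that context. The sketch below illustrates the flow only — it uses a toy bag-of-words similarity in place of the real ChromaDB embeddings and LangChain chain, and every function name here is illustrative rather than taken from this codebase.

```python
# Toy illustration of the retrieve-then-generate flow. The real system
# uses ChromaDB embeddings and a LangChain LLM chain; this sketch swaps
# in a bag-of-words cosine similarity so it runs standalone.
import math
import re
from collections import Counter


def embed(text: str) -> Counter:
    """Bag-of-words token counts, standing in for a real embedding model."""
    return Counter(re.findall(r"\w+", text.lower()))


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank stored chunks by similarity to the query and return the top k."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]


def build_prompt(query: str, context: list[str]) -> str:
    """Compose the augmented prompt that is sent to the LLM."""
    joined = "\n".join(context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"


chunks = [
    "ChromaDB stores document embeddings for similarity search.",
    "The frontend is plain HTML5, JavaScript, and CSS.",
    "FastAPI serves the backend REST endpoints.",
]
top = retrieve("Which database stores embeddings?", chunks, k=1)
print(build_prompt("Which database stores embeddings?", top))
```

In the actual system, `retrieve` corresponds to a ChromaDB similarity search and the prompt is assembled by the RAG chatbot in `backend/models/rag_chatbot.py`.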

Installation

Prerequisites

  • Python 3.10+
  • Tesseract OCR (for image processing)

Setup

  1. Clone the repository:

    git clone https://github.com/AkhileshMishra/SampleLLM.git
    cd SampleLLM
    
  2. Run the deployment script:

    chmod +x scripts/deploy.sh
    ./scripts/deploy.sh
    

    This will:

    • Create a virtual environment
    • Install all required dependencies
    • Set up necessary directories
    • Start the backend server
  3. In a separate terminal, start the frontend server:

    chmod +x scripts/serve_frontend.sh
    ./scripts/serve_frontend.sh
    
  4. Access the application at http://localhost:8080

Usage

Configuring LLM

  1. Navigate to the "LLM Configuration" tab
  2. Select your preferred model (Llama3 or Gemini)
  3. Enter your API key and API URL
  4. Adjust parameters like temperature and max tokens
  5. Save your configuration
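The same configuration can be saved programmatically through the `POST /api/config/llm` endpoint documented below. The payload field names here (`model`, `api_key`, `api_url`, `temperature`, `max_tokens`) are assumptions inferred from the UI — check `backend/api/config_routes.py` for the exact schema.

```python
# Hedged sketch: saving LLM settings via POST /api/config/llm.
# Field names are assumptions; see backend/api/config_routes.py.
import json
import urllib.request

API_BASE = "http://localhost:8080"  # adjust if the backend runs elsewhere


def build_llm_config(model: str, api_key: str, api_url: str,
                     temperature: float = 0.7, max_tokens: int = 1024) -> dict:
    """Assemble the configuration payload for POST /api/config/llm."""
    return {
        "model": model,
        "api_key": api_key,
        "api_url": api_url,
        "temperature": temperature,
        "max_tokens": max_tokens,
    }


def save_llm_config(config: dict) -> bytes:
    """Send the configuration to the backend (requires a running server)."""
    req = urllib.request.Request(
        f"{API_BASE}/api/config/llm",
        data=json.dumps(config).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()


config = build_llm_config("llama3", api_key="YOUR_KEY",
                          api_url="https://api.example.com")  # placeholders
print(json.dumps(config, indent=2))
# save_llm_config(config)  # uncomment with the backend running
```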

Managing Documents

  1. Navigate to the "Knowledge Base" tab
  2. Use the file upload form to add documents to your knowledge base
  3. View and manage your uploaded documents in the table below
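Documents can also be added from a script via the `POST /api/documents/upload` endpoint documented below. The sketch hand-rolls the multipart body with the standard library; the form field name `"file"` is an assumption — check `backend/api/document_routes.py` for the exact schema.

```python
# Hedged sketch: uploading a document via POST /api/documents/upload.
# The multipart field name "file" is an assumption; see
# backend/api/document_routes.py for the actual schema.
import urllib.request
import uuid


def build_multipart(filename: str, content: bytes,
                    field: str = "file") -> tuple[bytes, str]:
    """Build a multipart/form-data body and its Content-Type header."""
    boundary = uuid.uuid4().hex
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field}"; '
        f'filename="{filename}"\r\n'
        f"Content-Type: application/octet-stream\r\n\r\n"
    ).encode() + content + f"\r\n--{boundary}--\r\n".encode()
    return body, f"multipart/form-data; boundary={boundary}"


def upload(filename: str, content: bytes,
           base: str = "http://localhost:8080") -> bytes:
    """POST the file to the backend (requires a running server)."""
    body, content_type = build_multipart(filename, content)
    req = urllib.request.Request(
        f"{base}/api/documents/upload",
        data=body,
        headers={"Content-Type": content_type},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# upload("report.pdf", open("report.pdf", "rb").read())  # needs server
```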

Chatting with Your Documents

  1. Navigate to the "Chat" tab
  2. Type your question in the input field
  3. The system will retrieve relevant information from your documents and generate a response
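The chat flow can likewise be driven from a script through the `POST /api/chat` endpoint documented below. The request and response field names used here (`query`, `response`) are assumptions — check `backend/api/chat_routes.py` for the exact schema.

```python
# Hedged sketch: querying the chatbot via POST /api/chat. The "query"
# and "response" field names are assumptions; see
# backend/api/chat_routes.py for the actual schema.
import json
import urllib.request


def build_chat_request(query: str) -> dict:
    """Assemble the request payload for POST /api/chat."""
    return {"query": query}


def ask(query: str, base: str = "http://localhost:8080") -> str:
    """Send a question to the backend and return the generated answer."""
    req = urllib.request.Request(
        f"{base}/api/chat",
        data=json.dumps(build_chat_request(query)).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# ask("What does the uploaded report say about revenue?")  # needs server
```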

API Documentation

Document Management

  • POST /api/documents/upload - Upload a document to the knowledge base
  • GET /api/documents - List all documents in the knowledge base
  • DELETE /api/documents/{document_id} - Remove a document from the knowledge base
  • GET /api/documents/stats - Get vector store statistics

LLM Configuration

  • POST /api/config/llm - Set LLM configuration
  • GET /api/config/llm - Get current LLM configuration

Chat Interface

  • POST /api/chat - Send a query and get a response
  • GET /api/chat/history - Get chat history

Testing

Run the test script to verify all components are working correctly:

    chmod +x scripts/test.sh
    ./scripts/test.sh

Project Structure

SampleLLM/
├── backend/
│   ├── api/
│   │   ├── chat_routes.py
│   │   ├── config_routes.py
│   │   └── document_routes.py
│   ├── database/
│   │   └── vector_store_manager.py
│   ├── models/
│   │   ├── llm_service.py
│   │   └── rag_chatbot.py
│   ├── utils/
│   │   └── document_processor.py
│   └── app.py
├── frontend/
│   ├── css/
│   │   └── styles.css
│   ├── js/
│   │   └── main.js
│   └── index.html
├── scripts/
│   ├── deploy.sh
│   ├── serve_frontend.sh
│   └── test.sh
├── data/
│   ├── config/
│   ├── documents/
│   └── vector_store/
└── README.md

License

This project is open source and available under the MIT License.
