An intelligent AI-powered medical chatbot designed to answer health-related queries using uploaded bio-medical encyclopedias and domain-specific knowledge. Built using Llama-3 (via Groq), Hugging Face Inference API, LangChain, Pinecone vector database, and Flask, this bot provides accurate, ultra-fast, and context-aware responses to user queries.
🔴 Live Demo: ⌨️
- Natural Language Medical Query Understanding: Capable of interpreting complex medical questions.
- Ultra-Fast Inference: Powered by Llama-3 via Groq for near-instant responses.
- RAG Architecture: Uses Retrieval-Augmented Generation to ground answers in verified medical texts.
- Cost-Efficient Embeddings: Utilizes Hugging Face Inference API for lightweight, cloud-based embeddings (No heavy local download).
- Vector Search: Efficient document retrieval using Pinecone.
- Seamless Cloud Deployment: Deployed live on Render.
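The RAG flow above (retrieve relevant chunks, then generate a grounded answer) can be sketched end to end. This is an illustrative, stdlib-only mock: `embed` is a bag-of-words stand-in for the real Sentence-Transformers embeddings, Pinecone's vector search is replaced by an in-memory cosine ranking, and the final Groq Llama-3 call is mocked as prompt assembly; only the retrieve-then-generate shape matches the app.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for real sentence embeddings: a bag-of-words count vector.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(count * b[token] for token, count in a.items())
    norm_a = math.sqrt(sum(c * c for c in a.values()))
    norm_b = math.sqrt(sum(c * c for c in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Retrieval step: rank stored chunks by similarity to the query.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Generation step (mocked): ground the LLM prompt in retrieved context
    # instead of letting the model answer from its own parameters alone.
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Aspirin is used to reduce fever and relieve mild pain.",
    "Pinecone stores dense vectors for fast similarity search.",
    "Hypertension means persistently elevated blood pressure.",
]
print(build_prompt("What is hypertension?", docs))
```

In the real app, `retrieve` is a Pinecone similarity query over Hugging Face embeddings and `build_prompt`'s output is sent to Llama-3 via Groq, but the grounding step is the same.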
- Language: Python 3.10
- Framework: Flask
- Orchestration: LangChain
- LLM: Llama-3 (via Groq API)
- Embeddings: Sentence-Transformers (via Hugging Face Inference Client)
- Vector Database: Pinecone
- Deployment: Render
The knowledge base consists of the "Gale Encyclopedia of Medicine" (uploaded as PDFs), so users can interact with the bot and get accurate medical insights, references, and answers grounded in trusted sources rather than generic AI hallucinations.
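Before indexing, the PDF text is split into overlapping chunks, the standard RAG preprocessing step (in this repo, presumably handled in `src/helper.py`, typically via LangChain's text splitters). A stdlib-only sketch of the idea, with illustrative chunk-size and overlap values:

```python
def split_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character chunks with overlap, so a
    sentence cut at one boundary still appears whole in a neighboring chunk."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

page = "Aspirin reduces fever. " * 100   # stand-in for one extracted PDF page
chunks = split_text(page, chunk_size=200, overlap=20)
```

Each chunk is then embedded and upserted into Pinecone; the overlap keeps retrieval robust when a relevant sentence straddles a chunk boundary.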
1. Clone the repository

   ```bash
   git clone https://github.com/BleeGleeWee/AI-Bot.git
   cd AI-Bot
   ```

2. Create a conda environment

   ```bash
   conda create -n AiBot python=3.10 -y
   conda activate AiBot
   ```

3. Install the requirements

   ```bash
   pip install -r requirements.txt
   ```

4. Set up environment variables

   Create a `.env` file in the root directory and add your credentials. (Note: you need API keys from Groq, Hugging Face, and Pinecone.)

   ```ini
   PINECONE_API_KEY = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
   GROQ_API_KEY = "gsk_xxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
   HUGGINGFACEHUB_API_TOKEN = "hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
   ```

5. Ingest data (create embeddings)

   Run this only once to process your PDFs and store the vectors in Pinecone:
   ```bash
   python store_index.py
   ```

6. Run the application

   ```bash
   python app.py
   ```

7. Access the chatbot

   Open your browser and go to `http://localhost:8080`.

This project is currently deployed on Render as a Web Service.
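The server behind that address can be sketched as follows. Route name, form field, and the placeholder reply are assumptions for illustration; the real `app.py` plugs the LangChain retrieval chain and the Groq Llama-3 call into the handler instead.

```python
# Minimal sketch of a Flask chat endpoint (hypothetical route/field names).
from flask import Flask, request

app = Flask(__name__)

def rag_answer(question: str) -> str:
    # Placeholder for the real pipeline: Pinecone retriever -> prompt -> Llama-3 via Groq.
    return f"(answer grounded in retrieved context for: {question})"

@app.route("/get", methods=["POST"])
def chat():
    # The frontend posts the user's message as a form field.
    question = request.form.get("msg", "")
    return rag_answer(question)

if __name__ == "__main__":
    # Matches the URL in the steps above.
    app.run(host="0.0.0.0", port=8080)
```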
Deployment Steps:
- Push to GitHub: Ensure your latest code (with requirements.txt and Procfile) is on GitHub.
- Create New Web Service: Log in to Render and connect your GitHub repository.
- Configure Settings:
- Runtime: Python 3
- Build Command: pip install -r requirements.txt
- Start Command: gunicorn app:app
- Environment Variables: Add the following secrets in the "Environment" tab on Render:
- PYTHON_VERSION: 3.10.12
- PINECONE_API_KEY: (Your Key)
- GROQ_API_KEY: (Your Key)
- HUGGINGFACEHUB_API_TOKEN: (Your Key)
- Deploy: Click "Manual Deploy" -> "Clear build cache & deploy" to go live.
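For reference, the `Procfile` mentioned above needs only a single line mapping the web process to the same start command (recent Gunicorn versions also pick up Render's `PORT` environment variable for the default bind address, so no explicit `--bind` flag should be needed):

```
web: gunicorn app:app
```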
```
AI-Bot/
├── Data/                 # PDF files for knowledge base
├── src/
│   ├── helper.py         # Embedding & PDF loading logic
│   └── prompt.py         # System prompts for Llama-3
├── templates/
│   └── chat.html         # Frontend UI
├── static/
│   └── style.css         # Styling
├── app.py                # Main Flask application
├── store_index.py        # Script to ingest data into Pinecone
├── requirements.txt      # Project dependencies
├── Procfile              # Deployment command for Render
└── .env                  # API secrets (not committed to Git)
```
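Both `store_index.py` and `app.py` need the secrets from `.env` at startup; the conventional way to load them is `python-dotenv`'s `load_dotenv()`, which this project presumably uses. The stdlib-only sketch below mimics that behavior for illustration:

```python
import os

def load_env(path: str = ".env") -> None:
    """Minimal stand-in for python-dotenv's load_dotenv(): read KEY = "value"
    lines from a .env file and export them, without overriding variables
    that are already set in the process environment."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue  # skip blanks, comments, and malformed lines
            key, _, value = line.partition("=")
            key = key.strip()
            value = value.strip().strip('"').strip("'")
            os.environ.setdefault(key, value)
```

After `load_env()` runs, the keys are available as `os.environ["PINECONE_API_KEY"]` and so on, which is how the LangChain, Groq, and Pinecone clients typically pick them up.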
We welcome contributions to improve the AI Medical Chatbot! Whether it's fixing bugs, improving documentation, or adding new features, your help is appreciated.
Steps to Contribute:
- Fork the repository.
- Clone your forked repo:
git clone https://github.com/<your-username>/AI-Bot.git
- Create a new branch for your feature or fix:
git checkout -b feature-name
- Make your changes and commit them:
git commit -m "Added a cool new feature"
- Push to your fork:
git push origin feature-name
- Open a Pull Request (PR) on the main repository.
If you found this project helpful or interesting, please consider giving it a star! ⭐ It helps others discover this project and motivates me to keep improving it.