AI-Bot 🩺

An intelligent AI-powered medical chatbot designed to answer health-related queries using uploaded biomedical encyclopedias and domain-specific knowledge. Built with Llama-3 (via Groq), the Hugging Face Inference API, LangChain, the Pinecone vector database, and Flask, the bot provides accurate, ultra-fast, context-aware responses to user queries.

🔴 Live Demo: ✨️


🚀 Features

  • Natural Language Medical Query Understanding: Interprets complex medical questions.
  • Ultra-Fast Inference: Powered by Llama-3 via Groq for near-instant responses.
  • RAG Architecture: Uses Retrieval-Augmented Generation to ground answers in verified medical texts.
  • Cost-Efficient Embeddings: Uses the Hugging Face Inference API for lightweight, cloud-based embeddings (no heavy local model download).
  • Vector Search: Efficient document retrieval with Pinecone.
  • Seamless Cloud Deployment: Deployed live on Render.
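The RAG flow can be sketched in miniature: embed the question, retrieve the closest stored chunk, and prepend it to the LLM prompt. The toy vectors and two-entry knowledge base below are purely illustrative; in the real bot the embeddings come from the Hugging Face Inference API and live in a Pinecone index.

```python
import math

# Toy knowledge base: (text chunk, pre-computed embedding) pairs.
# Illustrative only -- real embeddings are high-dimensional vectors
# produced by a sentence-transformer model.
KB = [
    ("Aspirin is used to reduce fever and relieve mild pain.", [0.9, 0.1, 0.0]),
    ("Insulin regulates blood glucose levels.",                [0.1, 0.9, 0.2]),
]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, kb=KB):
    """Return the chunk whose embedding is closest to the query vector."""
    return max(kb, key=lambda item: cosine(query_vec, item[1]))[0]

def build_prompt(question, query_vec):
    """Ground the prompt in retrieved context -- the 'RAG' step."""
    context = retrieve(query_vec)
    return f"Context: {context}\n\nQuestion: {question}\nAnswer:"

# A query embedded near the "fever" chunk pulls in the aspirin text.
prompt = build_prompt("What reduces fever?", [0.8, 0.2, 0.1])
print(prompt)
```

The design point is that the model answers from retrieved context rather than from its parametric memory alone, which is what keeps responses grounded in the uploaded medical texts.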

🛠 Tech Stack

  • Language: Python 3.10
  • Framework: Flask
  • Orchestration: LangChain
  • LLM: Llama-3 (via Groq API)
  • Embeddings: Sentence-Transformers (via Hugging Face Inference Client)
  • Vector Database: Pinecone
  • Deployment: Render

📘 Use Case

The "Gale Encyclopedia of Medicine" (a biomedical reference) was uploaded as PDFs so that users can interact with the bot and receive accurate medical insights, references, and answers grounded in trusted data sources rather than generic AI hallucinations.
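Encyclopedia-sized PDFs cannot be embedded in one piece, so ingestion typically splits the extracted text into overlapping chunks before embedding. Here is a minimal, stdlib-only sketch of that splitting step; the chunk size and overlap values are illustrative, not the project's actual settings (those live in `src/helper.py`):

```python
def split_into_chunks(text, chunk_size=500, overlap=50):
    """Split text into overlapping windows so that context spanning
    a chunk boundary is not lost at retrieval time."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap  # advance less than a full chunk
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks

# 1200 characters with 500-char chunks and 50-char overlap
# yields windows starting at 0, 450, and 900.
sample = "x" * 1200
chunks = split_into_chunks(sample)
print([len(c) for c in chunks])
```

LangChain provides ready-made splitters that do this (plus smarter boundary handling on sentences and paragraphs), so in practice the project would rely on one of those rather than hand-rolled slicing.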


💻 How to run locally?

STEPS:

1. Clone the repository

```bash
git clone https://github.com/BleeGleeWee/AI-Bot.git
cd AI-Bot
```

2. Create a conda environment

```bash
conda create -n AiBot python=3.10 -y
conda activate AiBot
```

3. Install the requirements

```bash
pip install -r requirements.txt
```

4. Set up environment variables. Create a `.env` file in the root directory and add your credentials. (Note: you need API keys from Groq, Hugging Face, and Pinecone.)

```ini
PINECONE_API_KEY = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
GROQ_API_KEY = "gsk_xxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
HUGGINGFACEHUB_API_TOKEN = "hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
```

5. Ingest data (create embeddings). Run this only once, to process your PDFs and store the vectors in Pinecone:

```bash
python store_index.py
```

6. Run the application

```bash
python app.py
```

7. Access the chatbot. Open your browser and go to http://localhost:8080
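A missing key in `.env` tends to surface as a confusing error deep inside a library call, so it can help to check the three variables up front. A small sketch of such a check (the key names match the `.env` above; loading the file itself would be done by `python-dotenv`, which the project's scripts are assumed to call):

```python
import os

# The three credentials the project reads from the environment.
REQUIRED_KEYS = ["PINECONE_API_KEY", "GROQ_API_KEY", "HUGGINGFACEHUB_API_TOKEN"]

def missing_keys(env=os.environ):
    """Return the required keys that are absent or empty."""
    return [k for k in REQUIRED_KEYS if not env.get(k)]

# Simulate a partially filled .env to show the failure mode.
missing = missing_keys({"PINECONE_API_KEY": "xxx"})
print(missing)  # the two keys still to be set
```

Calling `missing_keys()` at the top of `store_index.py` or `app.py` and raising if the list is non-empty turns a cryptic runtime failure into an immediate, readable one.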

🌐 Deployment (Render)

This project is currently deployed on Render as a Web Service.

Deployment Steps:

  • Push to GitHub: Ensure your latest code (with requirements.txt and Procfile) is on GitHub.
  • Create New Web Service: Log in to Render and connect your GitHub repository.
  • Configure Settings:
    • Runtime: Python 3
    • Build Command: pip install -r requirements.txt
    • Start Command: gunicorn app:app
  • Environment Variables: Add the following secrets in the "Environment" tab on Render:
    • PYTHON_VERSION: 3.10.12
    • PINECONE_API_KEY: (Your Key)
    • GROQ_API_KEY: (Your Key)
    • HUGGINGFACEHUB_API_TOKEN: (Your Key)
  • Deploy: Click "Manual Deploy" -> "Clear build cache & deploy" to go live.
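The start command `gunicorn app:app` means: import the module `app` (i.e. `app.py`) and serve the WSGI callable named `app` inside it. The real project exposes a Flask app, which is itself a WSGI callable; the stdlib-only stand-in below just shows the shape Gunicorn expects:

```python
# Illustrative stand-in for app.py: "gunicorn app:app" loads the
# module "app" and serves the callable named "app" found in it.
def app(environ, start_response):
    body = b"AI-Bot is running"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]

# Exercise the callable directly, the way a WSGI server would.
captured = {}
def start_response(status, headers):
    captured["status"] = status

result = b"".join(app({}, start_response))
print(captured["status"], result)
```

This is also why the `Procfile` line and the Render start command are the same string: both just tell Gunicorn where to find the WSGI entry point.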

📂 Directory Structure

```
AI-Bot/
├── Data/                   # PDF files for knowledge base
├── src/
│   ├── helper.py           # Embedding & PDF loading logic
│   └── prompt.py           # System prompts for Llama-3
├── templates/
│   └── chat.html           # Frontend UI
├── static/
│   └── style.css           # Styling
├── app.py                  # Main Flask application
├── store_index.py          # Script to ingest data into Pinecone
├── requirements.txt        # Project dependencies
├── Procfile                # Deployment command for Render
└── .env                    # API secrets (not committed to Git)
```

🤝 Contribution

We welcome contributions to improve the AI Medical Chatbot! Whether it's fixing bugs, improving documentation, or adding new features, your help is appreciated.

Steps to Contribute:

1. Fork the repository.
2. Clone your forked repo:

```bash
git clone https://github.com/BleeGleeWee/AI-Bot.git
```

3. Create a new branch for your feature or fix:

```bash
git checkout -b feature-name
```

4. Make your changes and commit them:

```bash
git commit -m "Added a cool new feature"
```

5. Push to your fork:

```bash
git push origin feature-name
```

6. Open a Pull Request (PR) on the main repository.

⭐ Support

If you found this project helpful or interesting, please consider giving it a Star! 🌟 It helps others discover the project and motivates me to keep improving it.

