
🧠 infuse-ai

infuse-ai is a CLI-based Retrieval-Augmented Generation (RAG) system that intelligently scans PDF documents and uses a Large Language Model (LLM) to provide highly relevant, source-backed answers.

📚 Ask questions about your documents. Get accurate, contextual, and cited responses — straight from your terminal.


✨ Features

  • 🔍 PDF-Based Retrieval: Extracts and indexes content from PDFs
  • 🧠 LLM-Powered Answers: Uses a large language model to generate contextual responses
  • 🧾 Source-Backed Results: Each response is linked to the relevant source content (the retrieve-then-generate flow is sketched after this list)
  • 🛠️ CLI Tool: Simple command-line interface for querying your data
  • 🚀 Modular Architecture: Built with Node.js for extensibility and future integrations
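
A rough sketch of how these features fit together as a retrieve-then-generate flow is shown below. This is illustrative only, not the project's actual code; the helper names (searchChunks, askClaude) and the chunk shape are assumptions.

// Illustrative RAG flow, not the project's actual code: retrieve the most
// relevant chunks, then ask the LLM to answer from that context only,
// returning the sources alongside the answer.
async function answerQuestion(question) {
  // 1. Retrieve the chunks most similar to the question
  //    (e.g. by embedding similarity over the indexed PDF text).
  const chunks = await searchChunks(question, { topK: 5 }); // hypothetical helper

  // 2. Build a prompt that constrains the model to the retrieved context.
  const context = chunks
    .map((c, i) => `[${i + 1}] (${c.file}, p.${c.page}) ${c.text}`)
    .join('\n\n');
  const prompt =
    'Answer the question using only the context below. ' +
    'Cite the bracketed source numbers you rely on.\n\n' +
    `Context:\n${context}\n\nQuestion: ${question}`;

  // 3. Generate the answer and return it together with its sources.
  const answer = await askClaude(prompt); // hypothetical helper, sketched under Tech Stack
  return { answer, sources: chunks.map((c) => ({ file: c.file, page: c.page })) };
}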

📦 Tech Stack

  • Node.js – Core implementation
  • AWS Bedrock (Claude) – LLM backend (invocation sketched after this list)
  • PDF Parser – To extract textual content from documents
  • Vector Search – Embedding and similarity matching (planned)
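
For the Bedrock piece, a minimal sketch of invoking Claude through the official AWS SDK is shown below (the hypothetical askClaude helper from the earlier sketch). The package, model ID, and region are assumptions; the project's actual wiring may differ.

import { BedrockRuntimeClient, InvokeModelCommand } from '@aws-sdk/client-bedrock-runtime';

// Region and model ID are placeholders; use whatever the project configures.
const client = new BedrockRuntimeClient({ region: 'us-east-1' });

async function askClaude(prompt) {
  const command = new InvokeModelCommand({
    modelId: 'anthropic.claude-3-haiku-20240307-v1:0',
    contentType: 'application/json',
    accept: 'application/json',
    body: JSON.stringify({
      anthropic_version: 'bedrock-2023-05-31',
      max_tokens: 1024,
      messages: [{ role: 'user', content: prompt }],
    }),
  });

  const response = await client.send(command);
  // The Messages API returns a content array; take the text of the first block.
  const payload = JSON.parse(new TextDecoder().decode(response.body));
  return payload.content[0].text;
}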

📁 Supported Input

  • PDF documents (more types coming soon: .txt, .csv, .docx, and web links)
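
Extracting and chunking the text of a PDF can be sketched as below; this assumes the pdf-parse npm package, which may not be the parser infuse-ai actually uses. The output is plain text split into overlapping chunks ready for indexing.

import fs from 'node:fs/promises';
import pdf from 'pdf-parse'; // assumed parser; the project may use a different one

// Read a PDF, extract its text, and split it into overlapping chunks so each
// chunk can later be embedded and indexed for retrieval.
async function loadPdfChunks(filePath, chunkSize = 1000, overlap = 200) {
  const { text } = await pdf(await fs.readFile(filePath));

  const chunks = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push({ file: filePath, text: text.slice(start, start + chunkSize) });
  }
  return chunks;
}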

💡 Use Cases

  • Instantly query research papers, legal documents, manuals, or internal reports
  • Extract meaningful insights with traceable sources
  • Ideal for developers, researchers, and knowledge workers

🚀 Getting Started

Clone the Repository and Install Dependencies

git clone https://github.com/hemupadhyay26/infuse-ai.git
cd infuse-ai
yarn install
docker compose up -d
cp .env.example .env

Paste the required credentials into the .env file (the keys are listed in .env.example), then run the database migration and start the dev server:

yarn db:migrate
yarn dev
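
Since Bedrock is the LLM backend, the standard AWS SDK credentials are the likely requirement in .env; a hypothetical startup guard (not part of the repo) could check for them:

// Hypothetical startup guard: fail fast if the AWS credentials the Bedrock
// client relies on are missing from the environment loaded from .env.
const required = ['AWS_REGION', 'AWS_ACCESS_KEY_ID', 'AWS_SECRET_ACCESS_KEY'];
const missing = required.filter((name) => !process.env[name]);
if (missing.length > 0) {
  console.error(`Missing environment variables: ${missing.join(', ')}`);
  process.exit(1);
}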

About

infuse-ai is a RAG system that analyzes the documents you provide and uses an LLM to answer questions grounded in that data.
