A framework for using local LLMs (Qwen2.5-coder 7B) that are fine-tuned using RL to generate, debug, and optimize code solutions through iterative refinement.
A fully customizable, super lightweight, cross-platform GenAI-based personal assistant that runs locally on your own private hardware!
🤖 An Intelligent Chatbot: Powered by a locally hosted Llama 3.2 LLM 🧠 (served via Ollama) and ChromaDB 🗂️, this chatbot offers semantic search 🔍, session-aware responses 🗨️, and an interactive Streamlit interface 🎨 for seamless user interaction. 🚀
🖼️ Python image and 🎥 video generator using multiple LLM providers and models, built in Claude Code 💻 CLI, Codex CLI 📝, Gemini CLI 🌌, and others. FREE & open-source forever 🚀
An advanced, fully local, and GPU-accelerated RAG pipeline. Features a sophisticated LLM-based preprocessing engine, state-of-the-art Parent Document Retriever with RAG Fusion, and a modular, Hydra-configurable architecture. Built with LangChain, Ollama, and ChromaDB for 100% private, high-performance document Q&A.
An AI-powered assistant to streamline knowledge management, member discovery, and content generation across Telegram and Twitter, while ensuring privacy with local LLM deployment.
An autonomous AI agent for intelligently updating, maintaining, and curating a LightRAG knowledge base.
Python CLI/TUI for intelligent media file organization. Features atomic operations, rollback safety, and integrity checks, with a local LLM workflow for context-aware renaming and categorization from API-sourced metadata.
PlantDeck is an offline herbal RAG that indexes your PDF books and monographs, extracts text/images with OCR, and answers questions with page-level citations using a local LLM via Ollama. Runs on your machine; no cloud. Field guide only; not medical advice.
**Ask CLI** is a command-line tool for interacting with a local LLM (Large Language Model) server. It allows you to send queries and receive concise command-line responses.
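Ask CLI's own backend and flags aren't detailed here, but the pattern behind such a tool is small enough to sketch. A minimal version, assuming an Ollama server on localhost:11434 with a `llama3.2` model pulled (both assumptions, not details from the repo):

```python
# Minimal sketch of a CLI that queries a local LLM server.
# Assumes an Ollama server on localhost:11434 with llama3.2 pulled;
# Ask CLI's actual backend, model, and options may differ.
import sys
import requests

def ask(prompt: str, model: str = "llama3.2") -> str:
    # Ollama's /api/generate endpoint; stream=False returns the whole
    # completion as a single JSON object instead of a token stream.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    # e.g. `python ask.py how do I list open ports`
    print(ask(" ".join(sys.argv[1:])))
```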
WoolyChat - open-source AI chat app for locally hosted Ollama models. Written in Flask/JavaScript.
AI-powered code and idea assistant for developers: local-first, doc-aware, and fully test-automated.
Local Retrieval-Augmented Generation (RAG) pipeline using LangChain and ChromaDB to query PDF files with LLMs.
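To give a flavor of what such a pipeline involves, here is a minimal sketch using the langchain-community integrations and a local Ollama server; the file name, models, and chunking parameters are illustrative, and the repository's own wiring may differ:

```python
# Minimal local RAG sketch: index a PDF into ChromaDB, then answer
# questions with a local LLM. Assumes `pip install langchain
# langchain-community langchain-text-splitters chromadb pypdf` and an
# Ollama server with the llama3.2 and nomic-embed-text models pulled.
from langchain_community.document_loaders import PyPDFLoader
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.llms import Ollama
from langchain_community.vectorstores import Chroma
from langchain.chains import RetrievalQA
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Load the PDF and split it into chunks small enough to embed well.
docs = PyPDFLoader("example.pdf").load()  # hypothetical input file
chunks = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=100
).split_documents(docs)

# Embed the chunks into a persistent Chroma collection on disk.
store = Chroma.from_documents(
    chunks,
    OllamaEmbeddings(model="nomic-embed-text"),
    persist_directory="./chroma_db",
)

# Retrieve the top-k matching chunks and let the local LLM answer from them.
qa = RetrievalQA.from_chain_type(
    llm=Ollama(model="llama3.2"),
    retriever=store.as_retriever(search_kwargs={"k": 4}),
)
print(qa.invoke({"query": "What does the document say about X?"})["result"])
```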
AI-powered Bash terminal built with Python, Tkinter, and tkterm, using a local LLM through LM Studio for natural-language command generation; features whitelist/blacklist management and an intuitive interface.
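The repo's internals aren't shown, but the core loop such a terminal needs (natural language in, shell command out, checked against an allowlist before execution) can be sketched as follows, assuming LM Studio's OpenAI-compatible server on its default port 1234; the whitelist and prompt are illustrative, not the project's own:

```python
# Sketch of natural-language -> shell-command generation through
# LM Studio's OpenAI-compatible local server (default localhost:1234).
import shlex
import subprocess
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
WHITELIST = {"ls", "pwd", "df", "du", "cat", "grep"}  # example allowlist

def generate_command(request: str) -> str:
    resp = client.chat.completions.create(
        model="local-model",  # LM Studio serves whichever model is loaded
        messages=[
            {"role": "system",
             "content": "Translate the user's request into a single Bash "
                        "command. Reply with the command only."},
            {"role": "user", "content": request},
        ],
    )
    return resp.choices[0].message.content.strip()

cmd = generate_command("show disk usage of the current directory")
if shlex.split(cmd)[0] in WHITELIST:  # refuse anything not allowlisted
    subprocess.run(cmd, shell=True, check=False)
else:
    print(f"Blocked (not whitelisted): {cmd}")
```

Gating on the first token of the generated command is the simplest safety check; a blacklist works the same way in reverse.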
A project to design, test, and optimize prompts for AI models to improve output quality and relevance.
A modular AI assistant ecosystem with voice/text interfaces, RAG capabilities, command execution, and integrated applications for radio streaming, web browsing, and document editing.