Simple, scalable AI model deployment on GPU clusters
Olares: An Open-Source Personal Cloud to Reclaim Your Data
Maid is a cross-platform Flutter app for interfacing with GGUF / llama.cpp models locally, and with Ollama and OpenAI models remotely.
MLX-VLM is a package for inference and fine-tuning of Vision Language Models (VLMs) on your Mac using MLX.
MemoryCache is an experimental development project to turn a local desktop environment into an on-device AI agent
🦙 Ollama Telegram bot, with advanced configuration
Shinkai is a two-click-install app that lets you create local AI agents in five minutes or less using a simple UI. Supports MCPs, remote and local AI, crypto, and payments.
Like ChatGPT's voice conversations with an AI, but entirely offline, private, and trade-secret-friendly, using local AI models such as Llama 2 and Whisper
Blueprint by Mozilla.ai for generating podcasts from documents using local AI
An open AI platform that runs locally. Supports Ollama, OpenRouter, Gemini, OpenAI, Deepseek, and more.
MVP of an idea using multiple local LLMs to simulate and play D&D
A flexible, free, and unlimited PDF translator powered by a local LLM or ChatGPT
🤖 AI-powered macOS automation framework - Control your Mac with natural language using GPT models. No code needed, just English instructions!
Extract structured data from local or remote LLMs
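The usual pattern for structured extraction with a local model is to ask for JSON and validate the reply. Ollama's `/api/generate` endpoint documents a `"format": "json"` option that constrains output to valid JSON; a minimal standard-library sketch assuming a local Ollama server on the default port (the prompt wording and field handling here are illustrative, not any particular project's implementation):

```python
import json
import urllib.request


def structured_extract_request(model: str, text: str,
                               fields: list[str]) -> urllib.request.Request:
    """Build an Ollama /api/generate request asking for a JSON reply.

    Assumes a local Ollama server at the default address localhost:11434.
    """
    prompt = (
        f"Extract the fields {fields} from the following text and "
        f"reply with a single JSON object.\n\n{text}"
    )
    payload = {"model": model, "prompt": prompt, "format": "json", "stream": False}
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def parse_reply(reply: str, fields: list[str]) -> dict:
    """Parse the model's JSON reply, keeping only the requested fields."""
    data = json.loads(reply)
    return {k: data.get(k) for k in fields}
```

Validating on the client side matters because even with JSON mode the model may omit or rename fields; `parse_reply` normalizes the result to exactly the keys requested.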
VerbalCodeAI is a free, open-source AI tool that simplifies codebase navigation in your terminal, using Python 3.11.6 and Ollama. It indexes projects locally and quickly answers questions like “Where’s this function?”, saving developers time on debugging and onboarding.
Blueprint by Mozilla.ai for finetuning a Speech-To-Text model in your own language
Empower Your Productivity with Local AI Assistants
Local Generative AI