Private & local AI personal knowledge management app for high entropy people.
Updated May 13, 2025 - JavaScript
Text-To-Speech, RAG, and LLMs. All local!
Calculate token/s & GPU memory requirement for any LLM. Supports llama.cpp/ggml/bnb/QLoRA quantization
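As a rough illustration of the kind of estimate such a calculator performs (this is a sketch of the standard back-of-envelope formula, not the project's actual code, and the function name and parameters are hypothetical): weight memory is roughly parameter count times bytes per parameter for the chosen quantization, plus a KV cache that grows with context length.

```javascript
// Illustrative GPU memory estimate for an LLM, in GiB.
// Assumed approximate bytes per weight for common llama.cpp-style formats.
const BYTES_PER_PARAM = {
  fp16: 2.0, // 16-bit weights
  q8_0: 1.0, // ~8 bits per weight
  q4_0: 0.5, // ~4 bits per weight
};

function estimateGpuMemoryGiB({ params, quant, layers, hiddenSize, contextLen }) {
  const weightBytes = params * BYTES_PER_PARAM[quant];
  // KV cache: 2 tensors (K and V) per layer, fp16 (2 bytes) per element.
  const kvCacheBytes = 2 * layers * contextLen * hiddenSize * 2;
  return (weightBytes + kvCacheBytes) / 1024 ** 3;
}

// Example: a 7B-parameter model (LLaMA-7B-like shape) at 4-bit, 2048 context.
const est = estimateGpuMemoryGiB({
  params: 7e9, quant: "q4_0", layers: 32, hiddenSize: 4096, contextLen: 2048,
});
console.log(est.toFixed(2), "GiB"); // ~4.26 GiB, before runtime overhead
```

Real tools add per-runtime overheads (scratch buffers, dequantization workspace), so treat this as a lower bound.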
A simple "Be My Eyes"-style web app with a llama.cpp/LLaVA backend
A simple NPM interface for seamlessly interacting with 36 Large Language Model (LLM) providers, including OpenAI, Anthropic, Google Gemini, Cohere, Hugging Face Inference, NVIDIA AI, Mistral AI, AI21 Studio, LLaMA.CPP, and Ollama, covering hundreds of models.
A llama.cpp GGUF file parser for JavaScript
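To show what such a parser must read first, here is a minimal Node.js sketch of the fixed GGUF header (not the linked project's API; the function name is hypothetical): the 4-byte magic "GGUF", then a little-endian uint32 version, uint64 tensor count, and uint64 metadata key/value count.

```javascript
// Parse the fixed 24-byte GGUF header from a Buffer.
// Illustrative sketch only; metadata and tensor info follow the header.
function parseGgufHeader(buf) {
  if (buf.toString("ascii", 0, 4) !== "GGUF") {
    throw new Error("not a GGUF file");
  }
  return {
    version: buf.readUInt32LE(4),          // format version
    tensorCount: buf.readBigUInt64LE(8),   // number of tensors in the file
    metadataKvCount: buf.readBigUInt64LE(16), // number of metadata key/value pairs
  };
}

// Usage: parseGgufHeader(fs.readFileSync("model.gguf").subarray(0, 24))
```

A full parser continues past offset 24, decoding length-prefixed metadata strings and typed values.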
A frontend for large language models like 🐨 Koala or 🦙 Vicuna running on CPU with llama.cpp, using the API server library provided by llama-cpp-python. NOTE: I had to discontinue this project because its maintenance takes more time than I am able or willing to invest. Feel free to fork :)
Copilot hack for running local copilot without auth and proxying
Self-hosted chat UI for running Alpaca models locally, built with MERN stack and based on llama.cpp
Messenger-like AI chat app that can run locally using llama.cpp and Stable Diffusion.
An open-source AI app | running Mixtral 8x7B / llama.cpp | single-layer threads interface | multi-user | private | offline capable
Real-time camera web app that uses llama.cpp and SmolVLM for visual detection and automatic responses in Spanish or English. Includes exportable history, a modern Bulma interface, centralized configuration, and multilingual support.