llama-cpp
Here are 94 public repositories matching this topic...
Maid is a cross-platform Flutter app for interfacing with GGUF / llama.cpp models locally, and with Ollama and OpenAI models remotely.
Updated Nov 4, 2024 · Dart
Run AI models locally on your machine with Node.js bindings for llama.cpp. Enforces a JSON schema on the model output at the generation level.
Updated Oct 31, 2024 · TypeScript
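Enforcing a JSON schema "at the generation level" means constraining the sampler itself: before each decoding step, tokens that would take the output outside the target grammar are masked out, so the model can only ever emit valid text. The toy sketch below illustrates the idea with a hypothetical tiny vocabulary and a stand-in for a compiled schema; it is not the actual node-llama-cpp implementation, which compiles JSON schemas into grammars over real subword tokens.

```python
import json

# Hypothetical toy vocabulary; real tokenizers use subword pieces.
VOCAB = ['{', '}', '"name"', '"Ada"', '"Bob"', ': ', ', ', '42']

# Stand-in for a compiled JSON-schema grammar: the set of acceptable outputs.
# Real implementations use an automaton that recognizes valid *prefixes*.
TARGETS = {'{"name": "Ada"}', '{"name": "Bob"}'}

def is_valid_prefix(text: str) -> bool:
    """True if `text` can still be completed into a schema-valid output."""
    return any(t.startswith(text) for t in TARGETS)

def constrained_generate(fake_logits: dict) -> str:
    """Greedy decode, masking tokens that would leave the grammar."""
    out = ''
    while out not in TARGETS:
        # The mask: keep only tokens that preserve a valid prefix.
        allowed = [t for t in VOCAB if is_valid_prefix(out + t)]
        # Pick the model's favorite *allowed* token (greedy sampling).
        out += max(allowed, key=fake_logits.get)
    return out

# Fake model preferences: the model "wants" to emit 42, but the mask
# forbids any token that would break the schema.
logits = {'{': 0.1, '}': 0.2, '"name"': 0.3, '"Ada"': 0.4,
          '"Bob"': 0.1, ': ': 0.2, ', ': 0.1, '42': 0.9}
result = constrained_generate(logits)
print(result)                       # '{"name": "Ada"}' — always valid JSON
print(json.loads(result)['name'])   # 'Ada'
```

Because invalid tokens never reach the sampler, the output is valid by construction rather than by post-hoc validation and retrying.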
Rust bindings for llama.cpp.
Updated Jun 27, 2024 · Rust
This repo showcases how to run a model locally and offline, free of OpenAI dependencies.
Updated Jul 12, 2024 · Python
Local ML voice chat using high-end models.
Updated Nov 12, 2024 · C++
Making offline AI models accessible to all types of edge devices.
Updated Feb 12, 2024 · Dart
A workbench for learning and practising AI techniques in real-world scenarios on Android devices, powered by GGML (Georgi Gerganov Machine Learning), NCNN (Tencent NCNN), and FFmpeg.
Updated Jun 17, 2024 · C++
LLaMA Server combines the power of LLaMA C++ with the beauty of Chatbot UI.
Updated Jun 10, 2023 · Python
BabyAGI-🦙: Enhanced for Llama models (running 100% locally), with persistent memory, smart internet search based on BabyCatAGI, and document embedding in LangChain based on privateGPT.
Updated Jun 4, 2023 · Python