ggml
Here are 85 public repositories matching this topic...
Replace OpenAI GPT with another LLM in your app by changing a single line of code. Xinference gives you the freedom to use any LLM you need. With Xinference, you're empowered to run inference with any open-source language models, speech recognition models, and multimodal models, whether in the cloud, on-premises, or even on your laptop.
Updated Oct 12, 2024 - Python
Stable Diffusion and Flux in pure C/C++
Updated Sep 2, 2024 - C++
INT4/INT5/INT8 and FP16 inference on CPU for the RWKV language model
Updated Aug 7, 2024 - C++
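Low-bit inference like the RWKV entry above rests on weight quantization. As a rough illustration only (not the actual scheme used by that repo), a minimal symmetric INT8 round-trip with a single per-tensor scale might look like:

```python
# Minimal symmetric INT8 quantization sketch (illustrative only;
# real ggml-style schemes use per-block scales and packed storage).

def quantize_int8(weights):
    """Map floats to int8 values with one per-tensor scale."""
    scale = max(abs(w) for w in weights) / 127.0 or 1.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

w = [0.5, -1.27, 0.0, 1.0]
q, s = quantize_int8(w)
w2 = dequantize_int8(q, s)
```

Lower bit widths (INT4/INT5) follow the same idea with a smaller clamping range, trading accuracy for memory.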
Calculate token/s & GPU memory requirement for any LLM. Supports llama.cpp/ggml/bnb/QLoRA quantization
Updated Nov 4, 2023 - JavaScript
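The memory requirement that the calculator entry above estimates is dominated by the quantized weights. A back-of-the-envelope formula (an illustrative assumption, not that repo's exact method) is parameters × bits-per-weight / 8, plus a flat overhead for activations and the KV cache:

```python
def estimate_vram_gb(n_params_billion, bits_per_weight, overhead_gb=1.0):
    """Rough VRAM estimate in GB: weight bytes plus a flat overhead.
    The 1 GB overhead is an illustrative assumption, not a measured value."""
    weight_gb = n_params_billion * bits_per_weight / 8.0
    return weight_gb + overhead_gb

# e.g. a 7B-parameter model at 4-bit: 7 * 4 / 8 + 1 = 4.5 GB
```

Real tools also account for context length, batch size, and quantization-format overhead, which this sketch folds into a single constant.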
Port of MiniGPT4 in C++ (4-bit, 5-bit, 6-bit, 8-bit, and 16-bit CPU inference with GGML)
Updated Aug 8, 2023 - C++
Whisper Dart is a cross-platform library for Dart and Flutter that converts audio to text (speech-to-text) using OpenAI's Whisper models
Updated Sep 18, 2024 - C++
CLIP inference in plain C/C++ with no extra dependencies
Updated Aug 18, 2024 - C++
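CLIP scores an image-text pair by the cosine similarity of their embeddings. A minimal sketch of that final scoring step, with the embeddings assumed already computed (the vectors here are made-up placeholders):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors:
    dot product divided by the product of their norms."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings; a real CLIP model produces these
# from an image encoder and a text encoder respectively.
image_emb = [0.2, 0.9, 0.4]
text_emb = [0.25, 0.85, 0.35]
score = cosine_similarity(image_emb, text_emb)
```

Higher scores indicate a better image-text match; a full pipeline would normalize and compare against many candidate captions.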
WIP text-to-speech library in C/C++ for fast inference, based on Suno AI's Bark
Updated Apr 13, 2024 - C++
General AI library for Dart & Flutter
Updated Apr 13, 2024 - C++
Inference Vision Transformer (ViT) in plain C/C++ with ggml
Updated Apr 11, 2024 - C++
Run inference on the replit-3B code-instruct model on CPU
Updated Jul 5, 2023 - Python
A ggml (C++) re-implementation of tortoise-tts
Updated Aug 20, 2024 - C++