ggml
Here are 122 public repositories matching this topic...
Swap GPT for any LLM by changing a single line of code. Xinference lets you run open-source LLMs, speech models, and multimodal models on the cloud, on-prem, or your laptop, all through one unified, production-ready inference API. (A minimal usage sketch follows this entry.)
Updated Nov 27, 2025 - Python
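For a sense of what that one-line swap looks like, here is a minimal sketch that points the standard OpenAI Python client at a locally running Xinference server. The port, endpoint path, and model name are illustrative assumptions, not details taken from this listing.

```python
# Minimal sketch: reuse the OpenAI client against a local Xinference server.
# Assumptions (not from this page): server on localhost:9997 exposing an
# OpenAI-compatible /v1 endpoint, and a model already launched as "my-llm".
from openai import OpenAI

# The "single line of code" change: point the client at Xinference instead
# of api.openai.com.
client = OpenAI(base_url="http://localhost:9997/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="my-llm",  # hypothetical name of a model launched in Xinference
    messages=[{"role": "user", "content": "Summarize what ggml is in one sentence."}],
)
print(response.choices[0].message.content)
```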
Diffusion model (SD, Flux, Wan, Qwen Image, ...) inference in pure C/C++
Updated Nov 22, 2025 - C++
INT4/INT5/INT8 and FP16 inference on CPU for the RWKV language model
Updated Mar 23, 2025 - C++
Calculate tokens/s and the GPU memory requirement for any LLM. Supports llama.cpp/ggml/bnb/QLoRA quantization. (A rough estimation sketch follows this entry.)
Updated Dec 3, 2024 - JavaScript
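As a rough illustration of the kind of estimate such a calculator makes, weight memory scales with parameter count times bits per parameter. This is a generic back-of-the-envelope sketch, not this repository's actual formula.

```python
# Back-of-the-envelope estimate of the memory needed for model weights.
# A generic approximation, not the calculator's exact method; it ignores
# KV cache, activations, and framework overhead.
def weight_memory_gib(n_params_billion: float, bits_per_param: float) -> float:
    """Approximate weight memory in GiB for a quantized model."""
    total_bytes = n_params_billion * 1e9 * bits_per_param / 8
    return total_bytes / 1024**3

# Example: a 7B-parameter model quantized to 4 bits per weight
print(f"{weight_memory_gib(7, 4):.1f} GiB")  # roughly 3.3 GiB for weights alone
```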
Suno AI's Bark model in C/C++ for fast text-to-speech generation
Updated Nov 16, 2024 - C++
This custom_node for ComfyUI adds one-click "Virtual VRAM" for any UNet and CLIP loader, as well as MultiGPU integration in WanVideoWrapper, managing the offload/block swap of layers to DRAM *or* VRAM to maximize the latent space available on your card. Also includes nodes for loading entire components (UNet, CLIP, VAE) directly onto the device of your choice.
Updated Oct 16, 2025 - Python
Port of MiniGPT-4 in C++ (4-bit, 5-bit, 6-bit, 8-bit, and 16-bit CPU inference with GGML)
Updated Aug 8, 2023 - C++
CLIP inference in plain C/C++ with no extra dependencies
Updated Jun 19, 2025 - C++
Vision Transformer (ViT) inference in plain C/C++ with ggml
Updated Apr 11, 2024 - C++
A ggml (C++) re-implementation of tortoise-tts
Updated Aug 20, 2024 - C++