Create 🔥 videos with Stable Diffusion by exploring the latent space and morphing between text prompts
Updated Dec 16, 2025 · Python
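The latent-space morphing described above is commonly implemented by spherically interpolating (slerp) between the initial noise latents (or text embeddings) of two prompts, rather than lerping, so intermediate points stay on the Gaussian shell the model expects. Below is a minimal, self-contained sketch of that interpolation step; the function name, vector sizes, and the NumPy stand-in for real latents are illustrative, not taken from any of the listed repositories.

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-7):
    """Spherical linear interpolation between two latent vectors.

    t=0 returns v0, t=1 returns v1; intermediate t values trace an arc
    between them, which tends to produce smoother diffusion morphs than
    straight linear interpolation.
    """
    v0_n = v0 / np.linalg.norm(v0)
    v1_n = v1 / np.linalg.norm(v1)
    dot = np.clip(np.dot(v0_n, v1_n), -1.0, 1.0)
    if abs(dot) > 1.0 - eps:
        # Vectors are nearly parallel: slerp degenerates, fall back to lerp.
        return (1.0 - t) * v0 + t * v1
    theta = np.arccos(dot)
    return (np.sin((1.0 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)

# Stand-in "latents": in a real pipeline these would be the initial noise
# tensors sampled for each prompt (e.g. shape [4, 64, 64] for SD 1.x).
rng = np.random.default_rng(0)
a = rng.standard_normal(16)
b = rng.standard_normal(16)

# Five interpolation frames from prompt A's latent to prompt B's latent.
frames = [slerp(t, a, b) for t in np.linspace(0.0, 1.0, 5)]
```

In a full video pipeline, each interpolated latent would be denoised by the diffusion model to produce one frame of the morph.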
Inpaint Anything extension performs Stable Diffusion inpainting in a browser UI using masks from Segment Anything.
Inpaint Anything performs Stable Diffusion inpainting in a browser UI using masks from Segment Anything.
Dreambooth implementation based on Stable Diffusion with minimal code.
[CVPR'25-Demo] Official repository of "TryOffDiff: Virtual-Try-Off via High-Fidelity Garment Reconstruction using Diffusion Models".
🤗 Unofficial huggingface/diffusers-based implementation of the paper "Training-Free Structured Diffusion Guidance for Compositional Text-to-Image Synthesis".
A simple web application that lets you replace any part of an image with an image generated based on your description.
Quantized Stable Diffusion that cuts memory usage by 75%, with testing in Streamlit and deployment in a container
Easily create your own AI avatar images!
Toolchain for creating custom datasets and training Stable Diffusion (1.x, 2.x, XL) models and LoRAs
Collection of OSS models that are containerized into a serving container
Experimental demonstration for the Qwen/Qwen-Image-Edit-2511 model with lazy-loaded LoRA adapters supporting multi-image input editing. Users can upload one or more images (gallery format) and apply advanced edits such as pose transfer, anime conversion, or camera angle changes via natural language prompts. Features integrated Rerun SDK.
Experimental Stable Diffusion XL Webui
🤗 Diffusers meets ⚡ PyTorch-Lightning: A simple and flexible training template for diffusion models.
This project is a Streamlit-based web app that lets users generate images from text prompts. The app integrates two powerful image generation models: OpenAI's DALL-E and Hugging Face diffusion models.
Generating synthetic Sino-Nom images with handwritten styles for OCR. Model source code and weights are taken from https://github.com/yeungchenwa/FontDiffuser.
A Gradio-based demonstration for the Tongyi-MAI/Z-Image-Turbo diffusion pipeline, enhanced with a curated collection of LoRAs (Low-Rank Adaptations) for style transfer and creative image generation. Users can select from pre-listed LoRAs or add custom ones from Hugging Face repositories.
Generate photo-realistic, high-resolution images from user-defined prompts using Flux-schnell, in PyTorch and Gradio
🖼️ Edit multiple images easily with this Gradio demo for the Qwen-Image-Edit-2511 model, featuring advanced edits and interactive comparisons.
TRELLIS.2-Text-to-3D is an end-to-end Text-to-3D and Image-to-3D generation app that enables users to create high-quality 3D GLB assets either by generating an image from a text prompt or by uploading an existing image, powered by Z-Image-Turbo and the TRELLIS.2 multi-stage pipeline.