🤗 Diffusers: State-of-the-art diffusion models for image, video, and audio generation in PyTorch (a minimal text-to-video usage sketch follows this list).
Auto1111 extension implementing text2video diffusion models (like ModelScope or VideoCrafter) using only Auto1111 webui dependencies
PALLAIDIUM — a generative AI movie studio, seamlessly integrated into the Blender Video Editor, enabling end-to-end production from script to screen and back.
Finetune ModelScope's Text To Video model using Diffusers 🧨
Official PyTorch implementation of "Space-Time Diffusion Features for Zero-Shot Text-Driven Motion Transfer".
Craft your visions
OpenVideo specializes in the domain of text-to-video generation, with the goal of providing high-quality and diverse video datasets to AI researchers globally.
Inference pipeline for some Text-to-Image metrics.
Implementation of the DiffusionOverDiffusion architecture presented in NUWA-XL, in the form of a ControlNet-like module on top of the ModelScope text2video model, for extremely long video generation.
A modified version of vid2vid for the Speech2Video and Text2Video papers.
Automated video creation from a single prompt.
A frame2frame, video2video video editor based on Stable Diffusion.
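Several entries above build on the 🤗 Diffusers text-to-video support. As a rough illustration of how that pipeline is typically driven, here is a minimal sketch. It assumes the publicly available damo-vilab/text-to-video-ms-1.7b ModelScope checkpoint and a CUDA GPU; both are assumptions for illustration, not details taken from any of the listed repositories.

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

# Load a ModelScope-style text-to-video checkpoint in half precision.
# "damo-vilab/text-to-video-ms-1.7b" is an assumption chosen for illustration.
pipe = DiffusionPipeline.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe = pipe.to("cuda")  # assumes a CUDA-capable GPU

# Generate a short clip from a text prompt and write it to disk.
# In recent Diffusers releases the output's `frames` field holds one list of
# frames per prompt, hence the [0]; older releases returned a flat array.
result = pipe("a panda playing a guitar on a beach", num_frames=16)
export_to_video(result.frames[0], "panda.mp4")
```

The fine-tuning and DiffusionOverDiffusion projects listed above layer additional training or control modules on top of this same pipeline family.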