Text- and image-to-video generation: CogVideoX (2024) and CogVideo (ICLR 2023)
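For context, here is a minimal text-to-video inference sketch using the Hugging Face diffusers `CogVideoXPipeline`; the model ID, prompt, and sampling settings are illustrative assumptions rather than values taken from any repository listed here.

```python
# Minimal CogVideoX text-to-video inference sketch via Hugging Face diffusers.
# The model ID, prompt, and sampling settings are illustrative assumptions.
import torch
from diffusers import CogVideoXPipeline
from diffusers.utils import export_to_video

pipe = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX-5b",        # assumed Hub model ID
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # trade speed for lower VRAM usage

video = pipe(
    prompt="A panda playing an acoustic guitar in a bamboo forest",
    num_frames=49,               # CogVideoX produces short clips with a fixed frame count
    num_inference_steps=50,
    guidance_scale=6.0,
).frames[0]

export_to_video(video, "output.mp4", fps=8)
```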
Timestep Embedding Tells: It's Time to Cache for Video Diffusion Model
Official implementation for "RIFLEx: A Free Lunch for Length Extrapolation in Video Diffusion Transformers" (ICML 2025) and UltraViCo (ICLR 2026)
Finetuning and inference tools for the CogView4 and CogVideoX model series.
A 4-bit quantized CogVideoX-5B model
(Windows/Linux/macOS) Local WebUI for neural network models (text, image, video, 3D, audio) in Python with a Gradio interface. Translated into 3 languages
Gradio UI for training video models using finetrainers
A comprehensive, click-to-install, fully open-source video + audio generation AIO toolkit using advanced prompt engineering plus the power of CogVideoX + AudioLDM2 + Python!
Accelerate Video Diffusion Inference via Sketching-Rendering Cooperation
[ICLR 2026] This is the official PyTorch implementation of "QVGen: Pushing the Limit of Quantized Video Generative Models".
Docker wrapper for CogVideo
Multi-model text-to-video generation using generative AI: ModelScope, CogVideoX, a custom DiT pipeline, and large-scale video data on GCP.
Official repository for the Tio Magic Animation Toolkit