Light Image Video Generation Inference Framework
Open-Higgsfield AI: An open-source, local clone of Higgsfield's AI Studio. Generate high-quality AI video and images using Muapi. Features text-to-video (T2V), image-to-video (I2V), and advanced camera controls.
Official codebase for "Causal Forcing: Autoregressive Diffusion Distillation Done Right for High-Quality Real-Time Interactive Video Generation"
Reinforcement Learning Framework for Visual Generation
ComfyUI workflows to create smooth transitions between video clips using Wan VACE. Works with video from any model or other source: LTX-2, drone footage, stock video, personal recordings, etc.
Accelerate Video Diffusion Inference via Sketching-Rendering Cooperation
KsanaDiT: High-Performance DiT (Diffusion Transformer) Inference Framework for Video & Image Generation
Allows you to run Wan 2.2 image-to-video on Google Colab (or other notebooks); see the pipeline sketch after this list.
Custom tool set mostly for Hunyuan Video, but includes some WAN and FramePack Video nodes.
Avernus API server and client for LLM, SDXL, FLUX, Qwen-Image, WAN, and Ace-Step inference and many other architectures.
⚡️ Generate light video content effortlessly with LightX2V, a streamlined framework for fast and efficient video generation and inference.
🎥 Generate videos from images effortlessly with the Wan 2.2 Google Colab template, featuring one-click setup and optimized workflows for free GPU use.
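The two Colab entries above both come down to running a Wan image-to-video pipeline on a free GPU. As a minimal sketch of what such a notebook does, assuming the Hugging Face diffusers WanImageToVideoPipeline with the Wan-AI/Wan2.1-I2V-14B-480P-Diffusers checkpoint as a stand-in (not the templates' actual code; swap in whichever Wan 2.2 checkpoint your template targets):

```python
# Minimal sketch: Wan image-to-video in a notebook via the Hugging Face
# diffusers WanImageToVideoPipeline. The checkpoint ID is the Wan 2.1 I2V
# weights in diffusers format, used here as a placeholder.
import torch
from diffusers import WanImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

model_id = "Wan-AI/Wan2.1-I2V-14B-480P-Diffusers"  # assumption: replace with your Wan checkpoint
pipe = WanImageToVideoPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()  # offloads idle submodules to CPU to fit free Colab GPU VRAM

image = load_image("https://example.com/first_frame.png")  # hypothetical input frame
frames = pipe(
    image=image,
    prompt="a slow cinematic pan across the scene",
    num_frames=81,        # roughly 5 seconds at 16 fps
    guidance_scale=5.0,
).frames[0]
export_to_video(frames, "output.mp4", fps=16)
```

The one-click Colab templates mainly wrap these steps with dependency installs and memory-saving defaults so the pipeline fits on a free-tier GPU.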