TurboDiffusion: 100–200× Acceleration for Video Diffusion Models
Updated Mar 6, 2026 - Python
A unified inference and post-training framework for accelerated video generation.
A collection of industry-classic and cutting-edge papers in recommendation, advertising, and search.
The official repo for [NeurIPS'22] "ViTPose: Simple Vision Transformer Baselines for Human Pose Estimation" and [TPAMI'23] "ViTPose++: Vision Transformer for Generic Body Pose Estimation"
Pytorch implementation of various Knowledge Distillation (KD) methods.
A PyTorch-based knowledge distillation toolkit for natural language processing
PaddleSlim is an open-source library for deep model compression and architecture search.
All-in-one training for vision models (YOLO, ViTs, RT-DETR, DINOv3): pretraining, fine-tuning, distillation.
Generate High-Quality Synthetics, Train, Measure, and Evaluate in a Single Pipeline
A collection of high-quality Chinese pre-trained models: state-of-the-art large models, the fastest small models, and dedicated similarity models.
Kandinsky 5.0: A family of diffusion models for Video & Image generation
MEAL V2: Boosting Vanilla ResNet-50 to 80%+ Top-1 Accuracy on ImageNet without Tricks. In NeurIPS 2020 workshop.
⚡ Flash Diffusion ⚡: Accelerating Any Conditional Diffusion Model for Few Steps Image Generation (AAAI 2025 Oral)
NVIDIA FastGen: Fast Generation from Diffusion Models
Segmind Distilled diffusion
Reinforcement Learning via Self-Distillation (SDPO)
[ICLR 2026] rCM: SOTA JVP-Based Diffusion Distillation & Few-Step Video Generation & Scaling Up sCM/MeanFlow & Real-Time Autoregressive Video Diffusion
A Python library for adversarial machine learning focusing on benchmarking adversarial robustness.
irresponsible innovation. Try now at https://chat.dev/
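Many of the repositories above implement variants of knowledge distillation, where a small student model is trained to match a large teacher's softened output distribution. As a minimal, library-agnostic sketch of the classic soft-target loss (Hinton et al., 2015) — function names here are illustrative, not taken from any repo above:

```python
import numpy as np

def softmax(x, T=1.0):
    # Temperature-scaled softmax; a higher T yields a softer distribution.
    z = x / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=4.0):
    # Soft-target distillation term: KL divergence between the teacher's and
    # student's temperature-softened predictions, scaled by T^2 so gradient
    # magnitudes stay comparable as T grows.
    p = softmax(teacher_logits, T)  # soft teacher targets
    q = softmax(student_logits, T)  # soft student predictions
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return (T ** 2) * kl.mean()

# Identical logits give (near-)zero loss; diverging logits increase it.
teacher = np.array([[2.0, 0.5, -1.0]])
assert kd_loss(teacher, teacher) < 1e-9
assert kd_loss(np.array([[0.0, 0.0, 3.0]]), teacher) > 0.0
```

In practice this term is combined with the ordinary cross-entropy on ground-truth labels, weighted by a mixing coefficient.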