Random Conditioning with Distillation for Data-Efficient Diffusion Model Compression (CVPR 2025)

[arXiv] [Project Page] [Dataset]

by Dohyun Kim, Sehwan Park, Geonhee Han, Seung Wook Kim, Paul Hongsuck Seo

This is the official repository for our CVPR 2025 paper: Random Conditioning with Distillation for Data-Efficient Diffusion Model Compression. We propose a novel random conditioning strategy to enable image-free, efficient knowledge distillation of conditional diffusion models.

Our code builds on top of BK-SDM.

Overview


We propose Random Conditioning, a technique that pairs noised images with randomly selected text prompts to enable student diffusion models to generalize beyond the limited concept space of training data. This allows effective compression of large diffusion models without requiring large-scale paired datasets.
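To make the idea concrete, here is a minimal illustrative sketch of a distillation step with random conditioning. This is not the repository's actual training code: the names (apply_random_conditioning, distill_step, prompt_bank) are hypothetical, the swap probability p is shown as a fixed constant although the method may schedule it (e.g., with the diffusion timestep), and the UNet call signature follows the diffusers convention.

import torch
import torch.nn.functional as F

def apply_random_conditioning(text_emb, prompt_bank, p=0.5):
    # With probability p per sample, replace the paired text embedding with one
    # drawn uniformly at random from a bank of prompt embeddings.
    # text_emb:    (B, L, D) embeddings paired with the noised latents
    # prompt_bank: (N, L, D) embeddings of arbitrary prompts
    swap = torch.rand(text_emb.size(0), device=text_emb.device) < p
    rand_idx = torch.randint(0, prompt_bank.size(0), (text_emb.size(0),), device=text_emb.device)
    out = text_emb.clone()
    out[swap] = prompt_bank[rand_idx[swap]]
    return out

def distill_step(student_unet, teacher_unet, noisy_latents, timesteps, text_emb, prompt_bank, p=0.5):
    # One knowledge-distillation step: the student matches the teacher's noise
    # prediction under the (possibly randomized) condition.
    cond = apply_random_conditioning(text_emb, prompt_bank, p)
    with torch.no_grad():
        target = teacher_unet(noisy_latents, timesteps, encoder_hidden_states=cond).sample
    pred = student_unet(noisy_latents, timesteps, encoder_hidden_states=cond).sample
    return F.mse_loss(pred, target)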

For further details, please check out our paper and our project page.

Installation

conda create -n rand-cond python=3.8
conda activate rand-cond
git clone https://github.com/dohyun-as/Random-Conditioning.git
cd Random-Conditioning
pip install -r requirements.txt

Make sure to install a PyTorch build compatible with your CUDA version from https://pytorch.org.
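For example, assuming a CUDA 11.8 setup, a matching build could be installed as follows (the cu118 suffix is only an example; use the index URL that pytorch.org shows for your system):

pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118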

Data Preparation

Training

Evaluation

Acknowledgement

Our implementation is based on BK-SDM. We thank the authors for their open-source contributions.

Bugs or Questions?

If you have any questions, feel free to email Dohyun (a12s12@korea.ac.kr). If you come across any issues or bugs while using the code, you can open an issue. Please provide detailed information about the problem so we can assist you more efficiently!

Citation

If you use our code or findings, please cite:

@InProceedings{Kim_2025_CVPR,
    author    = {Kim, Dohyun and Park, Sehwan and Han, Geonhee and Kim, Seung Wook and Seo, Paul Hongsuck},
    title     = {Random Conditioning for Diffusion Model Compression with Distillation},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
    month     = {June},
    year      = {2025},
    pages     = {18607-18618}
}
