[arXiv] [Project Page] [Dataset]
by Dohyun Kim, Sehwan Park, Geonhee Han, Seung Wook Kim, Paul Hongsuck Seo
This is the official repository for our CVPR 2025 paper: Random Conditioning with Distillation for Data-Efficient Diffusion Model Compression. We propose a novel random conditioning strategy to enable image-free, efficient knowledge distillation of conditional diffusion models.
Our code builds on top of BK-SDM.
We propose Random Conditioning, a technique that pairs noised images with randomly selected text prompts to enable student diffusion models to generalize beyond the limited concept space of training data. This allows effective compression of large diffusion models without requiring large-scale paired datasets.
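As a rough illustration, the core idea can be sketched as a single distillation step in which the text condition is sometimes swapped for a randomly drawn one. This is a minimal sketch under assumed placeholder names (`student`, `teacher`, `prompt_pool`, `p_rand`), not the repository's actual training code:

```python
# Minimal sketch of one random-conditioning distillation step.
# `student`, `teacher`, `prompt_pool`, and `p_rand` are illustrative
# placeholders, not this repository's actual API.
import random
import torch
import torch.nn.functional as F

def distill_step(student, teacher, x_t, t, cond_emb, prompt_pool, p_rand=0.5):
    """With probability `p_rand`, replace the original text condition with one
    sampled at random from a prompt pool, so the student is supervised on
    concepts beyond the limited (image-free) training set."""
    if random.random() < p_rand:
        cond_emb = random.choice(prompt_pool)  # randomly selected text embedding

    with torch.no_grad():
        target = teacher(x_t, t, cond_emb)     # teacher noise prediction
    pred = student(x_t, t, cond_emb)           # student noise prediction
    return F.mse_loss(pred, target)            # match teacher under the same condition
```

Because the teacher is queried under the same (possibly random) condition, the student receives a valid distillation target even for prompts that never appear in the training data.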
For further details, please check out our paper and our project page.
```bash
conda create -n rand-cond python=3.8
conda activate rand-cond
git clone https://github.com/dohyun-as/Random-Conditioning.git
cd Random-Conditioning
pip install -r requirements.txt
```
Make sure to install a PyTorch build that matches your CUDA version; see https://pytorch.org.
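For example, with CUDA 11.8 the install command looks like the following (the `cu118` index URL is just one example; use the URL https://pytorch.org lists for your setup):

```bash
# Example only: cu118 wheel index; substitute the URL matching your CUDA toolkit.
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118
```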
Our implementation is based on BK-SDM. We thank the authors for their open-source contributions.
If you have any questions, feel free to email Dohyun (a12s12@korea.ac.kr). If you run into any issues or bugs while using the code, please open an issue with detailed information about the problem so we can assist you more efficiently!
If you use our code or findings, please cite:
```bibtex
@InProceedings{Kim_2025_CVPR,
    author    = {Kim, Dohyun and Park, Sehwan and Han, Geonhee and Kim, Seung Wook and Seo, Paul Hongsuck},
    title     = {Random Conditioning with Distillation for Data-Efficient Diffusion Model Compression},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
    month     = {June},
    year      = {2025},
    pages     = {18607-18618}
}
```