```bash
conda create -n cyclediff python=3.9 && conda activate cyclediff
git clone https://github.com/ZouShilong1024/CycleDiff.git && cd CycleDiff
pip install torch==1.13.1+cu117 torchvision==0.14.1+cu117 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu117
pip install -r requirement.txt
```
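Optionally, a quick sanity check that the CUDA 11.7 build of PyTorch is picked up (the expected output shown in the comment is illustrative):

```bash
# Print the installed PyTorch version, its CUDA toolkit version, and GPU visibility.
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
# Expected output resembles: 1.13.1+cu117 11.7 True
```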
The structure of the dataset should be as follows (a sketch for populating this layout appears after the tree):

```
datasetA2B
|-- train
| |-- class_A
| | |-- 0.png
| | |-- 1.png
| | |-- ...
| |-- class_B
| | |-- 0.png
| | |-- 1.png
| | |-- ...
|-- test
| |-- class_A
| | |-- 0.png
| | |-- 1.png
| | |-- ...
| |-- class_B
| | |-- 0.png
| | |-- 1.png
| | |-- ...
```
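One way to populate this layout from your own unpaired images (a sketch; `raw/domain_A` and `raw/domain_B` are hypothetical source folders, and `datasetA2B`, `class_A`, `class_B` stand in for your own dataset and domain names):

```bash
# Create the expected directory tree and copy unpaired images from each domain into it.
for split in train test; do
  mkdir -p datasetA2B/${split}/class_A datasetA2B/${split}/class_B
done
cp raw/domain_A/train/*.png datasetA2B/train/class_A/
cp raw/domain_B/train/*.png datasetA2B/train/class_B/
cp raw/domain_A/test/*.png  datasetA2B/test/class_A/
cp raw/domain_B/test/*.png  datasetA2B/test/class_B/
```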
Before starting training:
1. Modify the dataset paths in `./configs/{datasetA2B}/*.yaml`.
2. Download the pretrained weight from the link and set the `ckpt_path` on line 19 of `./configs/{datasetA2B}/*_ae_kl_256x256_d4.yaml`.
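A minimal sketch of wiring up the pretrained weight (the `pretrained/` directory and checkpoint filename are hypothetical, and `datasetA2B`/`class_A` stand in for your own dataset and domain names; the actual download URL and YAML layout come from the CycleDiff repo):

```bash
mkdir -p pretrained
# Download the pretrained autoencoder weight from the link into ./pretrained/first_stage.ckpt,
# then locate the ckpt_path entry in the config and point it at that file, e.g.
#   ckpt_path: ./pretrained/first_stage.ckpt
grep -n "ckpt_path" ./configs/datasetA2B/class_A_ae_kl_256x256_d4.yaml
```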
Train an autoencoder (VAE) for each domain:

```bash
accelerate launch train_vae.py --cfg ./configs/{datasetA2B}/{class_A}_ae_kl_256x256_d4.yaml
accelerate launch train_vae.py --cfg ./configs/{datasetA2B}/{class_B}_ae_kl_256x256_d4.yaml
```

Train an unconditional latent diffusion model for each domain:

```bash
accelerate launch train_uncond_ldm.py --cfg ./configs/{datasetA2B}/{class_A}_ddm_const4_ldm_unet6_114.yaml
accelerate launch train_uncond_ldm.py --cfg ./configs/{datasetA2B}/{class_B}_ddm_const4_ldm_unet6_114.yaml
```

Train the cycle translation model:

```bash
accelerate launch train_uncond_ldm_cycle.py --cfg ./configs/{datasetA2B}/translation_C_disc_timestep_ode_2.yaml
```

Run translation with the trained model:

```bash
accelerate launch translation_uncond_ldm_cycle.py --cfg ./configs/{datasetA2B}/translation_C_disc_timestep_ode_2.yaml
```
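By default, `accelerate launch` uses the configuration created by `accelerate config`. A hedged example of launching one stage on two GPUs (`--multi_gpu` and `--num_processes` are standard Hugging Face Accelerate flags; whether every CycleDiff training stage supports distributed training depends on the repo's scripts):

```bash
# Example: VAE training on 2 GPUs via Hugging Face Accelerate.
accelerate launch --multi_gpu --num_processes 2 \
  train_vae.py --cfg ./configs/datasetA2B/class_A_ae_kl_256x256_d4.yaml
```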
Our code is based on ADM and CycleGAN.

If you have any questions, please contact zoushilong@nudt.edu.cn.
If you find CycleDiff useful in your research, please consider citing:

```bibtex
@article{zou2025cyclediff,
  title={CycleDiff: Cycle Diffusion Models for Unpaired Image-to-image Translation},
  author={Zou, Shilong and Huang, Yuhang and Yi, Renjiao and Zhu, Chenyang and Xu, Kai},
  journal={arXiv preprint arXiv:2508.06625},
  year={2025}
}
```
