Code for paper "CycleDiff: Cycle Diffusion Models for Unpaired Image-to-image Translation"


🌟 CycleDiff: Cycle Diffusion Models for Unpaired Image-to-image Translation [arXiv paper]




🛠 Installation

conda create -n cyclediff python=3.9 && conda activate cyclediff
git clone https://github.com/ZouShilong1024/CycleDiff.git && cd CycleDiff
pip install torch==1.13.1+cu117 torchvision==0.14.1+cu117 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu117
pip install -r requirement.txt

🚀 Train CycleDiff from scratch

0. prepare the dataset and pretrained weight

The structure of the dataset should be as follows:

datasetA2B
|-- train
|   |-- class_A
|   |   |-- 0.png
|   |   |-- 1.png
|   |   |-- ...
|   |-- class_B
|   |   |-- 0.png
|   |   |-- 1.png
|   |   |-- ...
|-- test
|   |-- class_A
|   |   |-- 0.png
|   |   |-- 1.png
|   |   |-- ...
|   |-- class_B
|   |   |-- 0.png
|   |   |-- 1.png
|   |   |-- ...
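The directory skeleton above can be created in one command (a sketch; `datasetA2B`, `class_A`, and `class_B` are placeholders for your own dataset and domain names):

```shell
# Create the expected dataset layout; replace datasetA2B, class_A and class_B
# with your own dataset and class names, then copy (or symlink) the unpaired
# images of each domain into the matching class_* folders.
mkdir -p datasetA2B/{train,test}/{class_A,class_B}
```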

Before starting training:

1. Modify the dataset paths in ./configs/{datasetA2B}/*.yaml.
2. Download the pretrained weight at link and set the ckpt_path on line 19 of ./configs/{datasetA2B}/*_ae_kl_256x256_d4.yaml.
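For illustration, the ckpt_path edit might look like this (a hypothetical excerpt; the actual key layout of the config file may differ):

```yaml
# ./configs/{datasetA2B}/{class_A}_ae_kl_256x256_d4.yaml (hypothetical excerpt)
model:
  ckpt_path: /path/to/downloaded/pretrained.ckpt  # line 19: the downloaded weight
```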

1. train VAE

accelerate launch train_vae.py --cfg ./configs/{datasetA2B}/{class_A}_ae_kl_256x256_d4.yaml
accelerate launch train_vae.py --cfg ./configs/{datasetA2B}/{class_B}_ae_kl_256x256_d4.yaml

2. train the LDMs

accelerate launch train_uncond_ldm.py --cfg ./configs/{datasetA2B}/{class_A}_ddm_const4_ldm_unet6_114.yaml
accelerate launch train_uncond_ldm.py --cfg ./configs/{datasetA2B}/{class_B}_ddm_const4_ldm_unet6_114.yaml

3. train the cycle translator

accelerate launch train_uncond_ldm_cycle.py --cfg ./configs/{datasetA2B}/translation_C_disc_timestep_ode_2.yaml

🔍 Test CycleDiff

accelerate launch translation_uncond_ldm_cycle.py --cfg ./configs/{datasetA2B}/translation_C_disc_timestep_ode_2.yaml

🙏 Acknowledgement

Our code is based on ADM and CycleGAN.

📬 Contact

If you have any questions, please contact zoushilong@nudt.edu.cn.

📖 Citation

@article{zou2025cyclediff,
  title={CycleDiff: Cycle Diffusion Models for Unpaired Image-to-image Translation},
  author={Zou, Shilong and Huang, Yuhang and Yi, Renjiao and Zhu, Chenyang and Xu, Kai},
  journal={arXiv preprint arXiv:2508.06625},
  year={2025}
}