Learning Energy-Based Models by Diffusion Recovery Likelihood
Ruiqi Gao, Yang Song, Ben Poole, Ying Nian Wu, Diederik P. Kingma
Paper: https://arxiv.org/pdf/2012.08125
Experiments can be run on a single GPU or a Google Cloud TPU v3-8. Requires Python >= 3.5. To install dependencies:
pip install -r requirements.txt
To compute FID/Inception scores, download the pre-computed statistics of the datasets from https://drive.google.com/file/d/1QOLyYHESflcdZu8CsBLZohZzC95HyukK/view?usp=sharing, unzip the file, and put the folder in this repo.
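FID compares the Gaussian statistics (mean and covariance) of Inception activations for real versus generated images via the Fréchet distance; the downloaded files contain these pre-computed statistics for each dataset. A minimal sketch of that distance (not the repo's exact implementation; NumPy/SciPy assumed):

```python
import numpy as np
from scipy import linalg

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Frechet distance between Gaussians N(mu1, sigma1) and N(mu2, sigma2):
    ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 (sigma1 sigma2)^{1/2})."""
    diff = mu1 - mu2
    # Matrix square root of the product of the two covariance matrices.
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    if np.iscomplexobj(covmean):  # discard tiny imaginary parts from numerics
        covmean = covmean.real
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))

# Toy statistics, as would be extracted from Inception activations.
rng = np.random.default_rng(0)
acts = rng.normal(size=(1000, 4))
mu, sigma = acts.mean(axis=0), np.cov(acts, rowvar=False)

# Identical statistics give a distance of ~0; shifting the mean increases it.
print(frechet_distance(mu, sigma, mu, sigma))
print(frechet_distance(mu, sigma, mu + 1.0, sigma))
```

In practice `mu` and `sigma` come from Inception-network activations over many samples, which is why the statistics are pre-computed once per dataset.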
To train on CIFAR-10:
python main.py --num_res_blocks=8 --n_batch_train=256
python main.py --problem=celeba --num_res_blocks=6 --beta_1=0.5 --batch_size=128
python main.py --problem=[lsun_church64/lsun_bedroom64] --batch_size=128
python main.py --problem=lsun_church128 --beta_1=0.5
python main.py --problem=lsun_bedroom128 --beta_1=0.5 --num_res_blocks=5
To evaluate the FID of a trained model:
python main.py --eval --num_res_blocks=8 --noise_scale=0.99 --fid_n_batch=2000
For faster training, reduce the value of num_res_blocks.
The scripts above run on a single GPU by default. To train on a Cloud TPU instead, add --tpu=True to the above scripts. You also need to set --tpu_name and --tpu_zone as shown in Google Cloud.
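Putting these flags together, a TPU run of the CIFAR-10 command might look like the following (a sketch, not a command from the repo; the TPU name and zone are placeholders for your own Cloud TPU):

```shell
# Hypothetical TPU invocation; substitute your own Cloud TPU's name and zone.
python main.py --num_res_blocks=8 --n_batch_train=256 \
    --tpu=True --tpu_name=<your-tpu-name> --tpu_zone=<your-tpu-zone>
```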
A pre-trained checkpoint can be downloaded from: https://drive.google.com/file/d/1eneA6T5jQIyVFLFSOrSfJvDeUJJMh9xk/view?usp=sharing
This code is for the T6 setting; the T1k setting will be uploaded soon!
If you find our work helpful to your research, please cite:
@article{gao2020learning,
title={Learning Energy-Based Models by Diffusion Recovery Likelihood},
author={Gao, Ruiqi and Song, Yang and Poole, Ben and Wu, Ying Nian and Kingma, Diederik P},
journal={arXiv preprint arXiv:2012.08125},
year={2020}
}