Official TensorFlow implementation for reproducing the results of Demystifying MMD GANs.
The repository contains code for reproducing experiments on unconditional image generation with MMD GANs and other benchmark GAN models.
If you're only interested in the new KID metric, check out compute_scores.py.
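For reference, KID is an unbiased estimate of the squared MMD between Inception features of real and generated images, computed with a cubic polynomial kernel. A minimal NumPy sketch of that estimator (function names are illustrative, not the repository's API; compute_scores.py is the actual implementation):

```python
import numpy as np

def polynomial_kernel(X, Y, degree=3, coef0=1.0):
    """Cubic polynomial kernel k(x, y) = (x.y / d + 1)^3 used by KID."""
    d = X.shape[1]
    return (X @ Y.T / d + coef0) ** degree

def kid(real_feats, fake_feats):
    """Unbiased estimate of MMD^2 between two feature samples."""
    m, n = len(real_feats), len(fake_feats)
    k_rr = polynomial_kernel(real_feats, real_feats)
    k_ff = polynomial_kernel(fake_feats, fake_feats)
    k_rf = polynomial_kernel(real_feats, fake_feats)
    # drop diagonal terms (k(x_i, x_i)) for the unbiased estimator
    term_rr = (k_rr.sum() - np.trace(k_rr)) / (m * (m - 1))
    term_ff = (k_ff.sum() - np.trace(k_ff)) / (n * (n - 1))
    term_rf = k_rf.mean()
    return term_rr + term_ff - 2 * term_rf
```

In practice the features would be InceptionV3 pool3 activations; in the paper the estimate is averaged over several subsets of the samples.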
Mikołaj Bińkowski, Dougal J. Sutherland, Michael N. Arbel and Arthur Gretton. Demystifying MMD GANs. ICLR 2018 (openreview; poster).
- Uses a gradient penalty analogous to WGAN-GP (Gulrajani et al., Improved Training of Wasserstein GANs).
- Evaluates models using three different methods: Inception Score, Fréchet Inception Distance (FID), and the proposed Kernel Inception Distance (KID).
- Adaptively decreases the learning rate using a 3-sample test: if KID does not improve (compared to the evaluation 20k steps earlier) three times in a row, the learning rate is halved.
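The decay rule in the last point can be sketched as follows. This is a simplified stand-in: the class and method names are illustrative, and the repository decides "no improvement" with a KID-based three-sample test rather than a raw score comparison.

```python
class LRHalver:
    """Sketch of the adaptive decay rule: if KID fails to improve on
    the previous evaluation three times in a row, halve the learning
    rate. Illustrative only -- not the repository's actual class."""

    def __init__(self, lr, patience=3, factor=0.5):
        self.lr = lr
        self.patience = patience
        self.factor = factor
        self.prev_kid = None
        self.fails = 0

    def evaluate(self, kid_score):
        if self.prev_kid is not None and kid_score >= self.prev_kid:
            self.fails += 1          # no improvement over last evaluation
            if self.fails >= self.patience:
                self.lr *= self.factor
                self.fails = 0
        else:
            self.fails = 0           # improved: reset the failure counter
        self.prev_kid = kid_score
        return self.lr
```

Because KID is an unbiased estimator with reliable small-sample behaviour, such score comparisons across checkpoints are meaningful even with modest evaluation sets.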
- python >= 3.6
- tensorflow-gpu >= 1.3
- PIL, lmdb, numpy, matplotlib
- a machine with GPU(s); at least 2 GPUs are needed for the Celeb-A experiments.
The code works with several common datasets with different resolutions. The experiments include
- 28x28 MNIST,
- 32x32 CIFAR-10,
- 64x64 LSUN Bedrooms,
- 160x160 Celeb-A.
The MNIST, LSUN and Celeb-A datasets can be downloaded using the provided script.
We compare MMD GANs with WGAN-GP and Cramer GAN.
Each of the following scripts launches training of an MMD GAN on the respective dataset: mnist.sh, cifar10.sh, lsun.sh, celeba.sh. To train the benchmark models, change the variable $MODEL to WGAN or CRAMER. To train all three models, set $MODEL=ALL.
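Assuming the scripts pick up $MODEL from the environment (if the variable is instead assigned inside a script, edit it there), invocations would look like:

```shell
# Train the MMD GAN on CIFAR-10 (default model):
./cifar10.sh

# Train a benchmark model instead:
MODEL=WGAN ./cifar10.sh
MODEL=CRAMER ./lsun.sh

# Train all three models on MNIST:
MODEL=ALL ./mnist.sh
```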
Feel free to contact Mikołaj Bińkowski (mikbinkowski at gmail.com) with any questions and issues.