Common generative adversarial networks (GANs) implemented in TensorFlow 2.4.1. The GANs are suitable for image-to-image translation tasks.
The repository was published as part of the master's thesis (Generative Adversarial Networks Applied for Privacy Preservation in Biometric-Based Authentication and Identification). Preliminary results were presented at http://excel.fit.vutbr.cz/submissions/2021/031/31.pdf.
The following architectures are implemented:
- TraVeLGAN (https://github.com/KrishnaswamyLab/travelgan)
- DiscoGAN (https://github.com/SKTBrain/DiscoGAN)
- GcGAN (https://github.com/hufu6371/GcGAN)
- Clone this repository:
git clone https://github.com/lubosmj/I2I-GANs && cd I2I-GANs
- Create a new virtual environment:
python3 -m venv venv && source venv/bin/activate
- Install the packages:
python3 setup.py install
- Use the installed modules in your application:
from i2i_gans import TraVeLGAN

travelgan = TraVeLGAN(...)
travelgan.compile()
travelgan.load_weights(...)
fake_images = travelgan.generator(...)
- Train a new TraVeLGAN model:
python3 -m examples.travelgan_trainer train --domain_A "path/to/dataset/A/*.png" --domain_B "path/to/dataset/B/*.png" --dataset_size 5000 --batch_size 16 --checkpoints_freq 10 --parallel --samples_freq 10 --samples_dir samples --checkpoints_dir checkpoints --augment random_flip_left_right --epochs 250
- Train a new DiscoGAN model:
python3 -m examples.discogan_trainer train --domain_A "path/to/dataset/A/*.png" --domain_B "path/to/dataset/B/*.png" --dataset_size 5000 --batch_size 200 --checkpoints_freq 10 --parallel --samples_freq 10 --samples_dir samples --checkpoints_dir checkpoints --augment random_flip_left_right --epochs 200
- Train a new GcGAN model:
python3 -m examples.gcgan_trainer train --domain_A "path/to/dataset/A/*.png" --domain_B "path/to/dataset/B/*.png" --dataset_size 5000 --batch_size 12 --checkpoints_freq 10 --parallel --samples_freq 10 --samples_dir samples --checkpoints_dir checkpoints --augment random_flip_left_right --epochs 200
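The `--domain_A`/`--domain_B` arguments above take shell-style glob patterns, and `--dataset_size` caps how many images are used. A minimal sketch of how such a pattern might resolve to a capped file list (the `list_images` helper and the capping behavior are illustrative assumptions, not the trainers' actual code):

```python
import glob
import os
import tempfile

def list_images(pattern, dataset_size):
    """Resolve a glob pattern and cap the result at dataset_size files
    (an assumed behavior mirroring the --dataset_size flag)."""
    paths = sorted(glob.glob(pattern))
    return paths[:dataset_size]

# Create a throwaway directory with a few fake .png files to demonstrate.
with tempfile.TemporaryDirectory() as root:
    for i in range(5):
        open(os.path.join(root, f"img_{i}.png"), "w").close()
    open(os.path.join(root, "notes.txt"), "w").close()  # not matched by *.png

    selected = list_images(os.path.join(root, "*.png"), dataset_size=3)
    print(len(selected))  # -> 3
```

Quoting the pattern on the command line (as in the examples above) prevents the shell from expanding it before the trainer sees it.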
The GAN was trained for 250 epochs with the Adam optimizer (learning rate: 0.0002, batch size: 16, dataset size: 8,000).
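For a sense of scale, those hyper-parameters imply dataset_size / batch_size = 8,000 / 16 = 500 updates per epoch (assuming full batches with no remainder, which is an assumption about the trainer's batching):

```python
# Steps per epoch implied by the hyper-parameters above
# (assumes full batches with no remainder batch).
dataset_size = 8_000
batch_size = 16
epochs = 250

steps_per_epoch = dataset_size // batch_size
total_steps = steps_per_epoch * epochs
print(steps_per_epoch, total_steps)  # -> 500 125000
```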
- Datasets:
- Augmented images from CelebA
- flowers102
The GAN was trained for 200 epochs with the hyperparameters recommended in the original paper (dataset size: 20,000). Additionally, one convolutional layer with 100 filters was inserted into the generators.
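As a rough sanity check on the size of that extra layer: a 2-D convolution holds filters × (k × k × in_channels) weights plus one bias per filter. The kernel size and input channel count below are illustrative assumptions (the text only states the filter count):

```python
def conv2d_params(filters, kernel_size, in_channels):
    """Weight + bias count of a standard 2-D convolution layer."""
    return filters * kernel_size * kernel_size * in_channels + filters

# Assumed shape: a 3x3 convolution over 100 input channels,
# producing the 100 filters mentioned above.
print(conv2d_params(100, 3, 100))  # -> 90100
```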
- Datasets: