This repository contains a from-scratch PyTorch implementation of CycleGAN. The model was trained on the Van Gogh to image dataset for 44 epochs, learning to transfer Van Gogh's artistic style to real-world images and vice versa.
CycleGAN is a generative adversarial network that learns to translate images between two domains without paired examples. This project demonstrates translation between Van Gogh's painting style and real-world photographs.
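The key idea that makes unpaired training possible is the cycle-consistency loss: translating an image to the other domain and back should reconstruct the original. A minimal sketch of that loss in PyTorch (the function name and the lambda weight of 10 are illustrative assumptions, though 10 is the value used in the original paper, not necessarily this repo's exact code):

```python
import torch
import torch.nn as nn

def cycle_consistency_loss(real: torch.Tensor, reconstructed: torch.Tensor,
                           lambda_cyc: float = 10.0) -> torch.Tensor:
    """L1 cycle-consistency loss: G_BA(G_AB(a)) should reconstruct a."""
    return lambda_cyc * nn.functional.l1_loss(reconstructed, real)

# Toy check: a perfect reconstruction yields zero loss.
a = torch.ones(1, 3, 8, 8)
loss = cycle_consistency_loss(a, a)  # tensor(0.)
```

In the full objective this term is added to the adversarial losses of both generator/discriminator pairs.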
Figure: Real-world images translated to Van Gogh's artistic style.
Figure: Images in Van Gogh's style translated back to realistic images.
Clone the repository:
git clone https://github.com/Mo-Ouail-Ocf/CycleGAN-StyleTransfer
cd CycleGAN-StyleTransfer
Set up the conda environment:
conda env create -f env.yml
conda activate cycle_gan_env
To train the CycleGAN model, simply run the train.py script:
python train.py
The training dynamics, including loss curves and other metrics, are logged to the cycle_gan_log directory. To visualize them, launch TensorBoard:
tensorboard --logdir cycle_gan_log
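For reference, logs of this kind are typically produced with torch.utils.tensorboard. A minimal sketch of how per-epoch losses might be written to the log directory (the tag names and loss values here are illustrative assumptions, not necessarily what train.py records):

```python
import os
from torch.utils.tensorboard import SummaryWriter

# Write a couple of dummy scalar curves, as train.py might do each epoch.
writer = SummaryWriter("cycle_gan_log")
for epoch, (g_loss, d_loss) in enumerate([(2.1, 0.7), (1.8, 0.6)]):
    writer.add_scalar("loss/generator", g_loss, epoch)
    writer.add_scalar("loss/discriminator", d_loss, epoch)
writer.close()

event_files = os.listdir("cycle_gan_log")  # TensorBoard event files
```

Pointing `tensorboard --logdir cycle_gan_log` at this directory then renders the curves in the browser.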
The model was trained on the Van Gogh to image dataset. You can download the dataset from this link.
The CycleGAN model consists of two generator networks and two discriminator networks, following the architecture described in the original paper. The generators handle the image translation between the two domains, while each discriminator learns to distinguish real images in its domain from generated ones.
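As a concrete illustration of the discriminator side, here is a sketch of a 70x70 PatchGAN-style discriminator as described in the CycleGAN paper; the class name, channel widths, and layer counts are assumptions for illustration, not this repo's exact code:

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """PatchGAN: outputs a grid of real/fake logits, one per image patch."""
    def __init__(self, in_channels: int = 3):
        super().__init__()
        def block(c_in, c_out, stride):
            return [nn.Conv2d(c_in, c_out, 4, stride, 1),
                    nn.InstanceNorm2d(c_out),
                    nn.LeakyReLU(0.2, inplace=True)]
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, 4, 2, 1),  # no norm on the first layer
            nn.LeakyReLU(0.2, inplace=True),
            *block(64, 128, 2),
            *block(128, 256, 2),
            *block(256, 512, 1),
            nn.Conv2d(512, 1, 4, 1, 1),  # one logit per receptive-field patch
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

d = PatchDiscriminator()
out = d(torch.randn(1, 3, 256, 256))  # a 30x30 map of patch logits
```

Judging patches rather than whole images keeps the discriminator small and encourages sharp local texture, which suits style transfer.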
This project was inspired by the CycleGAN paper and implemented entirely from scratch in PyTorch.