A TensorFlow implementation of real-time style transfer based on the paper Perceptual Losses for Real-Time Style Transfer and Super-Resolution by Johnson et al.
See my related blog post (link) for an overview of the algorithm for real-time style transfer.
The total loss used is the weighted sum of the style loss, the content loss and a total variation loss. This third component is not specifically mentioned in the original paper, but it leads to more cohesive images being generated.
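As a rough sketch of how the three terms are combined (the names below are illustrative rather than the exact ones used in this repository; the default weights are the ones listed under usage):

```python
import tensorflow as tf

def combined_loss(content_loss, style_loss, generated_image,
                  content_weight=5e1, style_weight=1e2, tv_weight=1e2):
    """Weighted sum of the content, style and total variation terms.

    content_loss and style_loss are assumed to be scalar tensors computed
    from VGG feature maps; generated_image is the image being produced.
    """
    # Total variation penalises differences between neighbouring pixels,
    # which suppresses high-frequency noise and gives more cohesive images.
    tv_loss = tf.reduce_sum(tf.image.total_variation(generated_image))
    return (content_weight * content_loss
            + style_weight * style_loss
            + tv_weight * tv_loss)
```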
- Python 2.7
- TensorFlow
- SciPy & NumPy
- Download the pre-trained VGG network and place it in the top level of the repository (~500MB)
- For training:
- It is recommended to use a GPU to get good results within a reasonable timeframe.
- You will need an image dataset to train your networks. I used the Microsoft COCO dataset and resized the images to 256x256 pixels (a minimal resizing sketch follows this list).
- Generation of styled images can be run on a CPU or GPU. Some pre-trained style networks have been included here.
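If you need to prepare the training images yourself, something along these lines works. This is only a sketch and is not part of the repository; it assumes Pillow is installed, which is not listed in the requirements above.

```python
import os
from PIL import Image

def resize_images(src_dir, dst_dir, size=(256, 256)):
    """Resize every image in src_dir to the given size and save it into dst_dir."""
    if not os.path.isdir(dst_dir):
        os.makedirs(dst_dir)
    for name in os.listdir(src_dir):
        path = os.path.join(src_dir, name)
        try:
            img = Image.open(path).convert('RGB')
        except IOError:
            continue  # skip files that are not images
        img.resize(size, Image.BILINEAR).save(os.path.join(dst_dir, name))
```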
python run.py --content <content image> --style <style image> --output <output image path>
The algorithm will run with the following settings:
ITERATIONS = 1000 # override with --iterations argument
LEARNING_RATE = 1e1 # override with --learning-rate argument
CONTENT_WEIGHT = 5e1 # override with --content-weight argument
STYLE_WEIGHT = 1e2 # override with --style-weight argument
TV_WEIGHT = 1e2 # override with --tv-weight argument
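For example, to run fewer iterations with a heavier style weighting (the file names here are placeholders):

```
python run.py --content content.jpg --style style.jpg --output output.jpg \
    --iterations 500 --style-weight 2e2
```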
By default the style transfer will start with a random noise image and optimise it to generate an output image. To start with a particular image (for example the content image) run with the --initial <initial image> argument.
To run the style transfer on a GPU, pass the --use-gpu flag.
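For example, to initialise the optimisation from the content image and run it on the GPU (again, file names are placeholders):

```
python run.py --content content.jpg --style style.jpg --output output.jpg \
    --initial content.jpg --use-gpu
```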
I have included 3 pre-trained networks for the 3 styles shown in the results section below. They are in the pre-trained-networks folder.
I trained three style transfer networks using the following three style images:
Each network was trained on 80,000 training images taken from the Microsoft COCO dataset and resized to 256x256 pixels. Training was carried out for 100,000 iterations with a batch size of 4 and took approximately 12 hours on a GTX 1080 GPU.

Here are some of the style transfers I was able to generate:
This code was inspired by an existing TensorFlow implementation by Logan Engstrom, and I have re-used most of his transform network code here. The VGG network code is based on an existing implementation by Anish Athalye.
Released under GPLv3, see LICENSE.txt