Programmatically painting an image in the style of another image using the Neural Style Transfer algorithm
This is an implementation of the Neural Style Transfer (NST) algorithm using TensorFlow. NST was introduced by Leon A. Gatys, Alexander S. Ecker and Matthias Bethge in the well-known paper "A Neural Algorithm of Artistic Style".
The core idea is to use the filter responses of a convolutional neural network (i.e. the activation values of its layers) to represent content and style. Responses from lower layers capture low-level details (strokes, points, corners), while responses from higher layers capture high-level structure (patterns, objects, etc.). These representations are then used to modify the content image, i.e. to apply the style to it, and thus "generate" the final "painted" image.
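To make this concrete, below is a minimal sketch of the cost functions described in the paper: the content cost compares raw activations at one layer, while the style cost compares Gram matrices (channel-wise correlations of the filter responses) at a layer. This is only an illustration of the idea, not the repository's exact code; the weights `alpha` and `beta` are illustrative assumptions.

```python
import tensorflow as tf

def gram_matrix(activations):
    """Gram matrix of filter responses: correlations between channels,
    which the paper uses to represent 'style' at a given layer."""
    # activations: tensor of shape (height, width, channels)
    channels = int(activations.shape[-1])
    flat = tf.reshape(activations, [-1, channels])   # (H*W, C)
    return tf.matmul(flat, flat, transpose_a=True)   # (C, C)

def content_cost(a_content, a_generated):
    """Squared difference of activations at one (usually mid/high) layer."""
    h, w, c = [int(d) for d in a_content.shape]
    return tf.reduce_sum(tf.square(a_content - a_generated)) / (4.0 * h * w * c)

def style_layer_cost(a_style, a_generated):
    """Squared difference of Gram matrices at one layer."""
    h, w, c = [int(d) for d in a_style.shape]
    gs, gg = gram_matrix(a_style), gram_matrix(a_generated)
    return tf.reduce_sum(tf.square(gs - gg)) / (4.0 * (c ** 2) * ((h * w) ** 2))

def total_cost(j_content, j_style, alpha=10.0, beta=40.0):
    """Weighted sum of content and style costs; alpha/beta are illustrative."""
    return alpha * j_content + beta * j_style
```

In the full algorithm the style cost is a weighted sum of this per-layer cost over several layers, and the generated image is optimized by gradient descent on the total cost.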
Here's an example:
Content/original image: ("The Milkmaid", by Raja Ravi Varma, 1904)
Style image: ("Self-Portrait with a Straw Hat" by Vincent van Gogh, 1887)
Result/generated image:
Usually the Neural Style Transfer algorithm is applied to photographs, with a famous painting as the style, to transform the photograph into a painting-like image (roughly how the Prisma app works). But being a lover of Indian and Western art, I wanted to experiment with how transforming a painting of one style into another would look, as in the example above. I have experimented with Jamini Roy's style on Raja Ravi Varma's paintings (two completely distinct styles) and the results were quite interesting.
- Clone this repository into your local system: `git clone https://github.com/SupratimH/neural-style-transfer.git` or `git clone git@github.com:SupratimH/neural-style-transfer.git`.
- Copy your content and style images into the `images` directory. Make sure both have exactly the same dimensions (400 x 600 is used in this code).
- Update the image file names in the `content_image` and `style_image` variables in `art-generation-with-nst.ipynb`.
- Update the dimensions in the `IMAGE_WIDTH` and `IMAGE_HEIGHT` variables in `nst_utils.py`.
- Download the pre-trained VGG-19 model from http://www.vlfeat.org/matconvnet/models/imagenet-vgg-verydeep-19.mat and save it into the `pretrained-model` directory.
- Run the notebook (see the configuration sketch after this list).
- Experiment with the content and style loss hyperparameters, the activation layers from which the style is extracted, and the number of epochs.
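For reference, here is a sketch of what the configuration step might look like. The file names below are placeholders; the actual variable definitions live in `art-generation-with-nst.ipynb` and `nst_utils.py`.

```python
# Illustrative configuration only -- variable names mirror those mentioned above,
# and the image file names are examples, not files shipped with the repository.
import scipy.io

IMAGE_WIDTH = 600    # must match the dimensions of both input images
IMAGE_HEIGHT = 400

content_image = "images/content.jpg"   # example file name
style_image = "images/style.jpg"       # example file name

# The pre-trained VGG-19 weights are distributed as a MATLAB .mat file,
# which can be read with scipy.io.loadmat.
vgg = scipy.io.loadmat("pretrained-model/imagenet-vgg-verydeep-19.mat")
print(vgg.keys())   # inspect the layer/weight structure before building the model
```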
- Python 3
- TensorFlow
- SciPy
- NumPy
- Matplotlib
- Leon A. Gatys, Alexander S. Ecker, Matthias Bethge, (2015). A Neural Algorithm of Artistic Style: https://arxiv.org/abs/1508.06576
- TensorFlow Implementation of Neural Style Painting by Log0: http://www.chioka.in/tensorflow-implementation-neural-algorithm-of-artistic-style
- Karen Simonyan and Andrew Zisserman (2015). Very deep convolutional networks for large-scale image recognition: https://arxiv.org/pdf/1409.1556.pdf
- MatConvNet: http://www.vlfeat.org/matconvnet/pretrained/
- The "Convolutional Neural Networks" course by deeplearning.ai (https://www.deeplearning.ai/) on Coursera.


