This project is a practical implementation of Neural Style Transfer (NST) using a pre-trained model from TensorFlow Hub. The idea is simple: take a photo (content) and a work of art (style), and blend them together to create something unique.
The goal of this project is to explore Generative AI by recreating a simplified version of Neural Style Transfer. We use a Fast NST model (feed-forward), apply it to custom images, and analyze the results in depth.
The final deliverable is a single .html file, including code, visual tests, and commentary.
This project is based on the public GitHub repo by Deepesh D M, who built a complete NST demo with an API and a Streamlit interface.
We simplified his approach to focus on a clean, readable, and reproducible pipeline.
- Used the pre-trained model `magenta/arbitrary-image-stylization-v1-256`
- Tested our own portraits (neutral background, front-facing)
- Used a variety of artistic styles: Hockney, Ghibli, Van Gogh, Ukiyo-e, Anime...
- Analyzed what works and what doesn't (texture, contrast, clarity)
- Optimized the pipeline for Colab & low RAM (resizing, batch processing)
- Documented all tests and results in a single `.html` report
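The resizing mentioned above is what keeps the pipeline within Colab's RAM budget. As a minimal sketch of that step (the function name and the 512-pixel cap are our illustrative choices, not taken from the original code):

```python
import numpy as np
from PIL import Image


def load_and_resize(path: str, max_dim: int = 512) -> np.ndarray:
    """Load an image and shrink it so its longest side is at most max_dim.

    Keeping inputs small is what makes the pipeline fit in Colab's RAM.
    Returns a float32 RGB array scaled to [0, 1], the range the NST model
    expects.
    """
    img = Image.open(path).convert("RGB")
    img.thumbnail((max_dim, max_dim))  # in-place, preserves aspect ratio
    return np.asarray(img, dtype=np.float32) / 255.0
```

`Image.thumbnail` only ever shrinks, so images already below the cap pass through unchanged.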
This project is organized in a modular way:
- `api/` contains the core logic for style transfer, encapsulated in `transfer_style.py` and made importable with `__init__.py`.
- `assets/` stores all input images used for content and style (e.g. portraits, artworks).
- `model/` is where the pre-trained TensorFlow Hub model is downloaded and extracted.
- `run_local.py` allows you to test the NST pipeline locally with any chosen images.
- `requirements.txt` lists the Python dependencies for local installation.
- `README.md` contains the full project description, instructions, and usage guide.
This structure makes the project clean, testable, and easy to extend (e.g. for future web deployment).
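As a rough sketch, the core of `transfer_style.py` could look like the following. This is our reconstruction, not the repository's actual code: the function and helper names are hypothetical, and TensorFlow is imported lazily inside the model call so the lightweight preprocessing helper stays usable on its own.

```python
import numpy as np


def to_batch(img: np.ndarray) -> np.ndarray:
    """Add a leading batch axis: (H, W, 3) float32 in [0, 1] -> (1, H, W, 3)."""
    assert img.ndim == 3 and img.dtype == np.float32
    return img[np.newaxis, ...]


def transfer_style(content: np.ndarray, style: np.ndarray) -> np.ndarray:
    """Stylize `content` with `style` using the TF Hub fast-NST model.

    Both inputs are float32 RGB arrays in [0, 1]. TensorFlow is imported
    here (not at module level) so the rest of the module works without it.
    """
    import tensorflow as tf
    import tensorflow_hub as hub

    model = hub.load(
        "https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2"
    )
    # The model takes batched content and style tensors and returns a list
    # whose first element is the batched stylized image.
    stylized = model(tf.constant(to_batch(content)),
                     tf.constant(to_batch(style)))[0]
    return stylized.numpy()[0]  # drop the batch axis
```

Loading from the local `model/` directory (after the download step below) would work the same way by passing that path to `hub.load`.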
```bash
git clone https://github.com/prowang01/nst-project.git
cd nst-project
```

It’s recommended to use a virtual environment for clean dependency management:
```bash
python -m venv venv
venv\Scripts\activate   # Windows; use `source venv/bin/activate` on macOS/Linux
pip install -r requirements.txt
```

We use a TensorFlow Hub model for fast style transfer. Download and extract it into the `model/` directory:
```bash
wget "https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2?tf-hub-format=compressed" -O model.tar.gz
mkdir model
tar -xvzf model.tar.gz -C model
```

Place your content and style images in the `assets/` folder. You can use `.jpg` or `.png` files.
Edit the image paths in `run_local.py` if needed, then run:

```bash
python run_local.py
```

This will generate a stylized image by applying the chosen artistic style to your content image and display it using matplotlib.
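The display step can be sketched as follows (an illustrative helper, not the script's actual code; `show_result` is our name). The headless `Agg` backend is set so the sketch also runs in environments without a display:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; remove this line to open a window
import matplotlib.pyplot as plt
import numpy as np


def show_result(content: np.ndarray, style: np.ndarray,
                result: np.ndarray) -> plt.Figure:
    """Plot content, style, and stylized result side by side."""
    fig, axes = plt.subplots(1, 3, figsize=(12, 4))
    for ax, img, title in zip(axes, (content, style, result),
                              ("Content", "Style", "Stylized")):
        ax.imshow(np.clip(img, 0.0, 1.0))  # imshow expects floats in [0, 1]
        ax.set_title(title)
        ax.axis("off")
    fig.tight_layout()
    return fig
```

Saving the figure with `fig.savefig("result.png")` instead of showing it is handy when running on Colab.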
We’ve included a Streamlit interface for an interactive experience.
You can upload your own images and apply style transfer directly in your browser!
Make sure you’ve installed the required packages (including Streamlit):
```bash
pip install -r requirements.txt
streamlit run app.py
```

Your default browser will open automatically at http://localhost:8501, showing the interface.
What you can do:

- Upload a content image (`.jpg`)
- Upload a style image (`.jpg`)
- Click to generate your stylized result
- View and download the result
Images are resized automatically for performance and saved temporarily in the `assets/` folder.
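Since `app.py` itself is not shown in this README, here is a hedged sketch of what the upload/stylize flow above could contain. All names are illustrative; Streamlit is only imported inside `render_app` so the decoding helper can be used (and tested) on its own:

```python
import io

import numpy as np
from PIL import Image

MAX_DIM = 512  # assumed cap; uploads are downscaled for performance


def decode_upload(data: bytes) -> np.ndarray:
    """Decode uploaded .jpg bytes into a float32 RGB array in [0, 1]."""
    img = Image.open(io.BytesIO(data)).convert("RGB")
    img.thumbnail((MAX_DIM, MAX_DIM))  # in-place, keeps aspect ratio
    return np.asarray(img, dtype=np.float32) / 255.0


def render_app() -> None:
    """Build the upload/stylize flow. Run via `streamlit run app.py`."""
    import streamlit as st  # imported here so decode_upload works without it

    st.title("Neural Style Transfer")
    content_file = st.file_uploader("Content image", type=["jpg"])
    style_file = st.file_uploader("Style image", type=["jpg"])
    if content_file and style_file and st.button("Generate"):
        content = decode_upload(content_file.read())
        style = decode_upload(style_file.read())
        # ...call the style-transfer model here, then display and offer
        # the result for download with st.image / st.download_button...
        st.image(content, caption="Stylized result goes here")

# In a real app.py, the file would simply end with: render_app()
```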