
Image Colorization with Conditional GAN

This project implements an image colorization system based on a Conditional Generative Adversarial Network (cGAN).
The goal is to automatically colorize grayscale images by training a model on the Tiny ImageNet dataset.


Model Architecture

  • Generator (ResNet-UNet)

    • Encoder: Pretrained ResNet for feature extraction (frozen for the first 5 epochs, then fine-tuned).
    • Decoder: UNet-style upsampling layers to reconstruct the color channels.
    • Input: Grayscale (L channel from CIELab).
    • Output: Predicted ab color channels.
  • Discriminator (PatchGAN)

    • A CNN that classifies whether image patches are real (from the dataset) or fake (generated by the model).
    • Operates on the full RGB image reconstructed from (L + ab); a code sketch of both networks follows this list.
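
Below is a minimal PyTorch sketch of the two networks described above. The class names, channel widths, decoder layout, and the choice of a resnet18 backbone are illustrative assumptions; the repository's actual layer configuration may differ.

    # Minimal sketch of the ResNet-UNet generator and PatchGAN discriminator.
    # Channel widths, depth, and the resnet18 backbone are assumptions.
    import torch
    import torch.nn as nn
    from torchvision.models import resnet18, ResNet18_Weights


    class ResNetUNetGenerator(nn.Module):
        """L channel in (1 x H x W), predicted ab channels out (2 x H x W)."""

        def __init__(self):
            super().__init__()
            backbone = resnet18(weights=ResNet18_Weights.DEFAULT)
            # Re-initialize the first conv for a single-channel (L) input.
            backbone.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2,
                                       padding=3, bias=False)
            self.enc1 = nn.Sequential(backbone.conv1, backbone.bn1,
                                      backbone.relu)                  # 64,  H/2
            self.enc2 = nn.Sequential(backbone.maxpool, backbone.layer1)  # 64, H/4
            self.enc3 = backbone.layer2                               # 128, H/8
            self.enc4 = backbone.layer3                               # 256, H/16

            def up(cin, cout):
                return nn.Sequential(
                    nn.ConvTranspose2d(cin, cout, 4, stride=2, padding=1),
                    nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

            # UNet-style decoder: each stage upsamples, then consumes a skip.
            self.dec3 = up(256, 128)             # -> H/8, concat enc3
            self.dec2 = up(128 + 128, 64)        # -> H/4, concat enc2
            self.dec1 = up(64 + 64, 64)          # -> H/2, concat enc1
            self.dec0 = up(64 + 64, 32)          # -> H
            self.head = nn.Sequential(nn.Conv2d(32, 2, 3, padding=1),
                                      nn.Tanh())  # ab channels in [-1, 1]

        def forward(self, l):
            e1 = self.enc1(l)
            e2 = self.enc2(e1)
            e3 = self.enc3(e2)
            e4 = self.enc4(e3)
            d3 = self.dec3(e4)
            d2 = self.dec2(torch.cat([d3, e3], dim=1))
            d1 = self.dec1(torch.cat([d2, e2], dim=1))
            d0 = self.dec0(torch.cat([d1, e1], dim=1))
            return self.head(d0)


    class PatchGANDiscriminator(nn.Module):
        """Maps a 3-channel image to a grid of real/fake patch logits."""

        def __init__(self, in_ch=3, width=64):
            super().__init__()
            def block(cin, cout, stride):
                return nn.Sequential(
                    nn.Conv2d(cin, cout, 4, stride=stride, padding=1),
                    nn.BatchNorm2d(cout), nn.LeakyReLU(0.2, inplace=True))
            self.net = nn.Sequential(
                nn.Conv2d(in_ch, width, 4, stride=2, padding=1),
                nn.LeakyReLU(0.2, inplace=True),
                block(width, width * 2, 2),
                block(width * 2, width * 4, 2),
                block(width * 4, width * 8, 1),
                nn.Conv2d(width * 8, 1, 4, stride=1, padding=1))  # patch logits

        def forward(self, img):
            return self.net(img)

The decoder mirrors the encoder's downsampling stages and concatenates the matching skip connection before each upsampling step, which is what lets the UNet half recover the spatial detail lost inside the ResNet encoder.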

Training Setup

  • Dataset: Tiny ImageNet (resized to 96×96).
  • Loss functions:
    • Adversarial Loss: Binary Cross-Entropy (BCE), which encourages realistic colors.
    • L1 Reconstruction Loss: Measures the difference between the predicted and ground-truth ab channels.
    • Final Loss = GAN Loss + 10 × L1 Loss (the higher weight on L1 forces the generator to actually colorize; see the sketch after this list).
  • Optimizers: Adam (different learning rates for G and D).
  • Training: first 5 epochs with the encoder frozen, then full fine-tuning.
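
As a concrete illustration of the combined objective, here is a sketch of one generator update step. G, D, opt_G, and lab_to_rgb are hypothetical stand-ins for the repository's actual objects; only the BCE + 10 × L1 combination is taken from the setup above.

    # Sketch of one generator update combining the two losses described above.
    import torch
    import torch.nn as nn

    bce = nn.BCEWithLogitsLoss()   # adversarial loss on raw PatchGAN logits
    l1 = nn.L1Loss()               # reconstruction loss on the ab channels
    LAMBDA_L1 = 10.0               # L1 weight from the README

    def generator_step(G, D, opt_G, L, ab_real, lab_to_rgb):
        # lab_to_rgb must be a differentiable LAB -> RGB conversion
        # (e.g. kornia.color.lab_to_rgb) so gradients reach the generator.
        opt_G.zero_grad()
        ab_fake = G(L)                          # predicted ab channels
        rgb_fake = lab_to_rgb(L, ab_fake)       # D sees the full RGB image
        logits = D(rgb_fake)
        # Non-saturating GAN loss: label generated patches as "real" (1).
        adv = bce(logits, torch.ones_like(logits))
        rec = l1(ab_fake, ab_real)
        loss = adv + LAMBDA_L1 * rec            # final loss from the README
        loss.backward()
        opt_G.step()
        return loss.item()

A typical discriminator update is symmetric: real RGB images are labeled 1 and generated ones 0, using the same BCE criterion.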

Inference & API

  • Offline testing:
    Use inference_tiny.py to compare original grayscale images against the generated colorizations (the LAB round-trip it relies on is sketched after this list).

  • FastAPI Service:
    gan_service.py exposes a /colorize/ endpoint.

    • Input: an uploaded grayscale image (any resolution; resized internally to 96×96).
    • Output: the colorized image, returned as a PNG (a minimal endpoint sketch follows below).
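
For offline testing, the core operation is the LAB round-trip: extract the L channel, predict ab, and reassemble an RGB image. Below is a sketch of that pipeline; the [-1, 1] scaling convention and the use of skimage for color conversion are assumptions, not necessarily what inference_tiny.py does.

    # Sketch of the colorization round-trip: grayscale -> L -> predicted ab -> RGB.
    import numpy as np
    import torch
    from PIL import Image
    from skimage.color import rgb2lab, lab2rgb

    def colorize(generator, img, size=96):
        img = img.convert("RGB").resize((size, size))
        lab = rgb2lab(np.asarray(img) / 255.0)   # L in [0, 100], ab roughly [-110, 110]
        L = torch.from_numpy(lab[..., 0:1]).float()
        L = (L / 50.0 - 1.0).permute(2, 0, 1).unsqueeze(0)   # 1x1xHxW in [-1, 1]
        with torch.no_grad():
            ab = generator(L).squeeze(0).permute(1, 2, 0)    # HxWx2 in [-1, 1]
        lab_out = np.concatenate(
            [lab[..., 0:1], ab.numpy() * 110.0], axis=-1)    # undo ab scaling
        rgb = np.clip(lab2rgb(lab_out), 0.0, 1.0)
        return Image.fromarray((rgb * 255).astype(np.uint8))

    # Usage: colorize(trained_generator, Image.open("photo.jpg")).show()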
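A /colorize/ endpoint in the spirit of gan_service.py can then be a thin wrapper around that helper. The handler below is a hedged sketch: the model-loading step and the reuse of the colorize helper from the previous snippet are assumptions about the actual service.

    # Minimal sketch of a /colorize/ endpoint in the spirit of gan_service.py.
    import io
    from fastapi import FastAPI, File, UploadFile
    from fastapi.responses import StreamingResponse
    from PIL import Image

    app = FastAPI()
    # generator = load_trained_generator()  # hypothetical model loader

    @app.post("/colorize/")
    async def colorize_endpoint(file: UploadFile = File(...)):
        raw = await file.read()
        img = Image.open(io.BytesIO(raw))       # any resolution accepted
        result = colorize(generator, img)       # resized to 96×96 internally
        buf = io.BytesIO()
        result.save(buf, format="PNG")
        buf.seek(0)
        return StreamingResponse(buf, media_type="image/png")

A client can then exercise the endpoint with, for example, curl -F "file=@photo.jpg" http://localhost:8000/colorize/ -o out.png.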
