A modular deep learning evaluation framework for benchmarking multiple CNN architectures across varied optimization strategies and training configurations. Built for scalable experimentation and transferability to real-world image classification tasks.

SameetAsadullah/cnn-benchmark-suite


πŸ” Vision Model Evaluation Framework (PyTorch)

A scalable framework for benchmarking deep convolutional architectures on classification tasks. This project evaluates the impact of optimizers, learning rates, and batch sizes across multiple CNN backbones, providing a reproducible experimental setup aligned with research and production best practices.


🚀 Key Highlights

  • Plug-and-play support for multiple CNN architectures
  • Scalable benchmarking with grid search (see the sketch after this list) over:
    • Optimizers: SGD, Adam
    • Learning Rates: 0.01, 0.1
    • Batch Sizes: 32, 64
  • Data augmentation, normalization, and stratified validation split
  • Detailed metrics logging + real-time visualization support
  • Model weights saved automatically for top-performing configs
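
A minimal sketch of how the grid search above might be wired up. The `model_factory` and `train_fn` helpers are placeholders introduced for illustration, not part of this repository's actual API; only the grid values come from the README.

```python
import itertools
import torch

# Grid from the README: two optimizers, two learning rates, two batch sizes.
OPTIMIZERS = {"sgd": torch.optim.SGD, "adam": torch.optim.Adam}
LEARNING_RATES = [0.01, 0.1]
BATCH_SIZES = [32, 64]

def run_grid(model_factory, train_fn):
    """Train one model per (optimizer, lr, batch_size) combination.

    `model_factory` builds a fresh model; `train_fn` runs the training loop
    and returns the best validation accuracy. Both are assumed helpers.
    """
    device = "cuda" if torch.cuda.is_available() else "cpu"
    results = {}
    for opt_name, lr, bs in itertools.product(OPTIMIZERS, LEARNING_RATES, BATCH_SIZES):
        model = model_factory().to(device)
        optimizer = OPTIMIZERS[opt_name](model.parameters(), lr=lr)
        results[(opt_name, lr, bs)] = train_fn(model, optimizer, batch_size=bs)
    # Report the configuration with the highest validation accuracy.
    best_cfg = max(results, key=results.get)
    return best_cfg, results
```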

🧠 Architectures Supported

  • Custom Lightweight CNN (baseline)
  • ResNet-18
  • MobileNetV2
  • GoogLeNet
  • AlexNet (included for completeness; not recommended for production)

πŸ›‘οΈ Modular design allows easy plug-in of ViT, EfficientNet, ConvNext, etc.


📊 Experiments & Logging

  • Validation and test performance tracked across all combinations
  • Key metrics:
    • Train & Val Accuracy/Loss (per epoch)
    • Best Validation Accuracy (per config)
    • Final Test Accuracy (per model)
  • Automatic selection of best model per architecture (sketched after this list)
  • Visual analytics:
    • Accuracy trends
    • Impact of learning rate
    • Batch size comparison
    • Model vs Optimizer performance
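
A hedged sketch of the per-config bookkeeping described above: per-epoch validation metrics, best validation accuracy, and automatic checkpointing of the best weights. The function and file names here (`train_and_track`, `best.pt`) are assumptions rather than the repository's actual code.

```python
import torch

def train_and_track(model, optimizer, train_loader, val_loader, loss_fn,
                    epochs=10, ckpt_path="best.pt", device=None):
    """Log per-epoch validation metrics and checkpoint the best weights."""
    device = device or ("cuda" if torch.cuda.is_available() else "cpu")
    model.to(device)
    history, best_val_acc = {"val_acc": [], "val_loss": []}, 0.0
    for _ in range(epochs):
        model.train()
        for x, y in train_loader:                      # one training epoch
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()

        model.eval()
        correct, total, running_loss = 0, 0, 0.0
        with torch.no_grad():                          # validation pass
            for x, y in val_loader:
                x, y = x.to(device), y.to(device)
                logits = model(x)
                running_loss += loss_fn(logits, y).item() * y.size(0)
                correct += (logits.argmax(1) == y).sum().item()
                total += y.size(0)
        val_acc = correct / total
        history["val_acc"].append(val_acc)
        history["val_loss"].append(running_loss / total)

        if val_acc > best_val_acc:                     # keep weights of the best epoch
            best_val_acc = val_acc
            torch.save(model.state_dict(), ckpt_path)
    return best_val_acc, history
```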

πŸ” Use Cases

  • Model architecture benchmarking
  • Optimizer sensitivity studies
  • Lightweight deployment model search
  • Academic reproducibility experiments
