A scalable framework for benchmarking deep convolutional architectures on classification tasks. This project evaluates the impact of optimizers, learning rates, and batch sizes across multiple CNN backbones, providing a reproducible experimental setup aligned with research and production best practices.
- Plug-and-play support for multiple CNN architectures
- Scalable benchmarking with grid search over (see the configuration sketch below):
  - Optimizers: SGD, Adam
  - Learning Rates: 0.01, 0.1
  - Batch Sizes: 32, 64
- Data augmentation, normalization, and stratified validation split (see the data-pipeline sketch below)
- Detailed metrics logging + real-time visualization support
- Model weights saved automatically for top-performing configs
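A minimal sketch of how the search grid could be expressed, assuming a hypothetical `run_experiment` training entry point that returns a metrics dict; the actual function names, signatures, and model identifiers in this repo may differ:

```python
from itertools import product

# Grid mirrored from the feature list above; values are easy to extend.
OPTIMIZERS = ["SGD", "Adam"]
LEARNING_RATES = [0.01, 0.1]
BATCH_SIZES = [32, 64]
MODELS = ["custom_cnn", "resnet18", "mobilenet_v2", "googlenet", "alexnet"]


def run_grid(run_experiment):
    """Run every (model, optimizer, lr, batch size) combination.

    `run_experiment` is a hypothetical callable that trains one configuration
    and returns a metrics dict, e.g. {"best_val_acc": ..., "test_acc": ...}.
    """
    results = []
    for model, opt, lr, bs in product(MODELS, OPTIMIZERS, LEARNING_RATES, BATCH_SIZES):
        metrics = run_experiment(model=model, optimizer=opt, lr=lr, batch_size=bs)
        results.append({"model": model, "optimizer": opt, "lr": lr,
                        "batch_size": bs, **metrics})
    return results
```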
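The augmentation, normalization, and stratified-split step could look like the sketch below. CIFAR-10, the specific transforms and normalization statistics, and the 90/10 split ratio are assumptions for illustration, not necessarily what this project uses:

```python
from torch.utils.data import Subset
from torchvision import datasets, transforms
from sklearn.model_selection import train_test_split

# Standard augmentation + normalization pipeline (CIFAR-10 statistics assumed).
train_tf = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2470, 0.2435, 0.2616)),
])

full_train = datasets.CIFAR10("data/", train=True, download=True, transform=train_tf)

# Stratified split: class proportions are preserved in both subsets.
# In practice the validation subset would typically use a non-augmented transform.
train_idx, val_idx = train_test_split(
    list(range(len(full_train))),
    test_size=0.1,
    stratify=full_train.targets,
    random_state=42,
)
train_set, val_set = Subset(full_train, train_idx), Subset(full_train, val_idx)
```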
- Custom Lightweight CNN (baseline)
- ResNet-18
- MobileNetV2
- GoogleNet
- AlexNet (included for completeness; not recommended for production)
Modular design allows easy plug-in of ViT, EfficientNet, ConvNeXt, etc.
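One way the plug-and-play backbone support could be structured is a small factory that swaps each network's classification head. The `build_model` name and `num_classes=10` default are illustrative assumptions; new architectures would simply become additional branches:

```python
import torch.nn as nn
from torchvision import models


def build_model(name: str, num_classes: int = 10) -> nn.Module:
    """Hypothetical factory returning a backbone with a replaced classifier head."""
    if name == "resnet18":
        model = models.resnet18(weights=None)
        model.fc = nn.Linear(model.fc.in_features, num_classes)
    elif name == "mobilenet_v2":
        model = models.mobilenet_v2(weights=None)
        model.classifier[1] = nn.Linear(model.classifier[1].in_features, num_classes)
    elif name == "googlenet":
        model = models.googlenet(weights=None, aux_logits=False, init_weights=True)
        model.fc = nn.Linear(model.fc.in_features, num_classes)
    elif name == "alexnet":
        model = models.alexnet(weights=None)
        model.classifier[6] = nn.Linear(model.classifier[6].in_features, num_classes)
    else:
        raise ValueError(f"Unknown architecture: {name}")
    return model
```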
- Validation and test performance tracked across all combinations
- Key metrics:
  - Train & Val Accuracy/Loss (per epoch)
  - Best Validation Accuracy (per config)
  - Final Test Accuracy (per model)
- Automatic selection of best model per architecture
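Selecting the best configuration per architecture can be a simple reduction over the logged runs. The `results` record layout and the checkpoint naming in the docstring are assumptions used only to illustrate the idea:

```python
def select_best_per_model(results):
    """Keep, for each architecture, the run with the highest validation accuracy.

    Each entry in `results` is assumed to look like:
    {"model": ..., "optimizer": ..., "lr": ..., "batch_size": ..., "best_val_acc": ...}
    with its weights assumed to be saved as f"checkpoints/{model}_best.pt".
    """
    best = {}
    for run in results:
        name = run["model"]
        if name not in best or run["best_val_acc"] > best[name]["best_val_acc"]:
            best[name] = run
    return best
```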
- Visual analytics:
  - Accuracy trends
  - Impact of learning rate
  - Batch size comparison
  - Model vs Optimizer performance
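The accuracy-trend plot could be produced directly from per-epoch histories. The `val_acc_history` field is an assumed structure for this sketch; the project may store histories differently:

```python
import matplotlib.pyplot as plt


def plot_accuracy_trends(results):
    """Overlay per-epoch validation accuracy for every configuration."""
    for run in results:
        label = f'{run["model"]} / {run["optimizer"]} / lr={run["lr"]} / bs={run["batch_size"]}'
        plt.plot(run["val_acc_history"], label=label)
    plt.xlabel("Epoch")
    plt.ylabel("Validation accuracy")
    plt.title("Accuracy trends across configurations")
    plt.legend(fontsize="small")
    plt.tight_layout()
    plt.show()
```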
- Model architecture benchmarking
- Optimizer sensitivity studies
- Lightweight deployment model search
- Academic reproducibility experiments