> Major winning Convolutional Neural Networks (CNNs), such as VGGNet, ResNet, and DenseNet, include tens to hundreds of millions of parameters, which impose considerable computation and memory overhead. This limits their practical use in training and optimization for real-world applications. To address this issue, lightweight architectures such as SqueezeNet have been proposed. However, they mainly suffer from low accuracy, as they trade processing power for efficiency. These inefficiencies mostly stem from ad hoc design procedures. In this work, we discuss and propose several crucial design principles for efficient architecture design and elaborate on intuitions concerning different aspects of the design procedure. Furthermore, we introduce a new layer called *SAF-pooling*, which improves the generalization power of the network while keeping it simple by selecting the best features. Based on these principles, we propose a simple architecture called *SimpNet*. We empirically show that *SimpNet* provides a good trade-off between computation/memory efficiency and accuracy based solely on these primitive but crucial principles. SimpNet outperforms deeper and more complex architectures such as VGGNet, ResNet, and WideResidualNet on several well-known benchmarks, while having 2 to 25 times fewer parameters and operations. We obtain state-of-the-art results (in terms of the balance between accuracy and the number of parameters) on standard datasets such as CIFAR10, CIFAR100, MNIST, and SVHN.
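For a concrete picture of the *SAF-pooling* layer mentioned in the abstract, here is a minimal PyTorch sketch. It assumes SAF-pooling can be approximated as max-pooling applied after a dropout step, so the pooling operation picks the strongest features that survive random suppression; the `SAFPool` name, kernel size, and dropout rate below are illustrative assumptions, not the paper's exact specification.

```python
import torch
import torch.nn as nn

class SAFPool(nn.Module):
    """Illustrative SAF-pooling sketch: dropout followed by max-pooling.

    The idea hinted at in the abstract is to pool the best features while
    some activations are randomly suppressed, encouraging the network to
    rely on its strongest features rather than any single one. The exact
    formulation here (channel-wise dropout before max-pooling) is an
    assumption for illustration.
    """

    def __init__(self, kernel_size=2, stride=2, drop_p=0.2):
        super().__init__()
        self.drop = nn.Dropout2d(p=drop_p)  # randomly zeroes whole feature maps
        self.pool = nn.MaxPool2d(kernel_size=kernel_size, stride=stride)

    def forward(self, x):
        # Dropout is only active in training mode; at eval time this
        # reduces to plain max-pooling.
        return self.pool(self.drop(x))

# Usage: swap a plain MaxPool2d in a CNN for SAFPool.
x = torch.randn(8, 64, 32, 32)                       # (batch, channels, H, W)
pooled = SAFPool(kernel_size=2, stride=2, drop_p=0.2)(x)
print(pooled.shape)                                  # torch.Size([8, 64, 16, 16])
```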