
Commit 96753d5

Update README.md
1 parent a076995 · commit 96753d5

1 file changed: +1 -1 lines changed


README.md

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ This repository contains the architectures, pretrained models, logs, etc pertain

#### Abstract :

Removed (line 9):

> Major winning Convolutional Neural Networks (CNNs), such as VGGNet, ResNet, DenseNet, etc, include tens to hundreds of > millions of parameters, which impose considerable computation and memory overheads. This limits their practical usage in > training and optimizing for real-world applications. On the contrary, light-weight architectures, such as SqueezeNet, are being > proposed to address this issue. However, they mainly suffer from low accuracy, as they have compromised between the processing > power and efficiency. These inefficiencies mostly stem from following an ad-hoc designing procedure. In this work, we discuss > and propose several crucial design principles for an efficient architecture design and elaborate intuitions concerning > different aspects of the design procedure. Furthermore, we introduce a new layer called *SAF-pooling* to improve the > generalization power of the network while keeping it simple by choosing best features. Based on such principles, we propose a > simple architecture called *SimpNet*. We empirically show that *SimpNet* provides a good trade-off between the > computation/memory efficiency and the accuracy solely based on these primitive but crucial principles. SimpNet outperforms the > deeper and more complex architectures such as VGGNet, ResNet, WideResidualNet \etc, on several well-known benchmarks, while > having 2 to 25 times fewer number of parameters and operations. We obtain state-of-the-art results (in terms of a balance > between the accuracy and the number of involved parameters) on standard datasets, such as CIFAR10, CIFAR100, MNIST and SVHN.

Added (line 9):

> Major winning Convolutional Neural Networks (CNNs), such as VGGNet, ResNet, DenseNet, etc., include tens to hundreds of millions of parameters, which impose considerable computation and memory overheads. This limits their practical usage in training and optimizing for real-world applications. On the contrary, lightweight architectures, such as SqueezeNet, are being proposed to address this issue. However, they mainly suffer from low accuracy, as they have compromised between processing power and efficiency. These inefficiencies mostly stem from following an ad-hoc design procedure. In this work, we discuss and propose several crucial design principles for efficient architecture design and elaborate on intuitions concerning different aspects of the design procedure. Furthermore, we introduce a new layer called *SAF-pooling* to improve the generalization power of the network while keeping it simple by choosing the best features. Based on such principles, we propose a simple architecture called *SimpNet*. We empirically show that *SimpNet* provides a good trade-off between computation/memory efficiency and accuracy solely based on these primitive but crucial principles. SimpNet outperforms deeper and more complex architectures such as VGGNet, ResNet, WideResidualNet, etc., on several well-known benchmarks, while having 2 to 25 times fewer parameters and operations. We obtain state-of-the-art results (in terms of a balance between accuracy and the number of involved parameters) on standard datasets such as CIFAR10, CIFAR100, MNIST and SVHN.

Simply put, the *SimpNet* architecture is the successor to the successful SimpleNet architecture. It is based on a series of design principles that allowed it to surpass its predecessor ([*SimpleNet*](https://github.com/Coderx7/SimpleNet)) while retaining the same number of parameters and the same simplicity of design, and to outperform deeper and more complex architectures (by 2 to 25x), such as Wide Residual Networks, ResNet, FMax, etc., on a series of highly competitive benchmark datasets such as CIFAR10/100, SVHN and MNIST.
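
The abstract above mentions *SAF-pooling* only at a high level ("choosing the best features"). As a purely illustrative sketch, and not the repository's or the paper's exact definition, one common way to realize a "pool the strongest features" step in PyTorch is to apply channel-wise dropout before max-pooling, so the network occasionally has to rely on weaker features; the module name `SAFPoolSketch` and the `drop_prob` parameter below are assumptions made for illustration only.

```python
import torch
import torch.nn as nn

class SAFPoolSketch(nn.Module):
    """Illustrative sketch only: channel-wise dropout followed by max-pooling,
    one plausible reading of "pooling the best features". Not the exact
    SAF-pooling implementation from the SimpNet paper or this repository."""
    def __init__(self, kernel_size=2, stride=2, drop_prob=0.1):
        super().__init__()
        # Dropout2d zeroes whole feature maps at random during training.
        self.drop = nn.Dropout2d(p=drop_prob)
        self.pool = nn.MaxPool2d(kernel_size=kernel_size, stride=stride)

    def forward(self, x):
        # Dropping some channels before taking spatial maxima simulates the
        # absence of certain features, so the surviving maxima must come from
        # other feature maps, encouraging more robust representations.
        return self.pool(self.drop(x))

if __name__ == "__main__":
    # Dummy feature map: batch=1, channels=8, spatial size 32x32.
    feats = torch.randn(1, 8, 32, 32)
    pooled = SAFPoolSketch()(feats)
    print(pooled.shape)  # torch.Size([1, 8, 16, 16])
```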


Comments (0)