Code and documentation for a research and development project on Deep Neural Network Compression, completed in partial fulfillment of the Master of Autonomous Systems program.
- L1-norm pruning: cloned from https://github.com/Eric-mingjie/rethinking-network-pruning/tree/master/cifar/l1-norm-pruning with minor modifications. Based on the implementation of the paper *Pruning Filters for Efficient ConvNets*.
- Weight-level pruning: cloned from https://github.com/Eric-mingjie/rethinking-network-pruning/tree/master/cifar/weight-level with minor modifications. Based on the implementation of the paper *Learning both Weights and Connections for Efficient Neural Networks*.
- Knowledge Distillation methods:
  - Soft-target distillation: cloned from https://github.com/peterliht/knowledge-distillation-pytorch with minor modifications. Based on the implementation of the paper *Distilling the Knowledge in a Neural Network*.
  - FitNets: cloned from https://github.com/AberHu/Knowledge-Distillation-Zoo with modifications. Based on the implementation of the paper *FitNets: Hints for Thin Deep Nets*.
- Low-rank approximations: implemented in Caffe (*Caffe: Convolutional Architecture for Fast Feature Embedding*).
- Quantization: based on https://github.com/eladhoffer/convNet.pytorch/blob/master/models/modules/quantize.py
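The L1-norm pruning entry above ranks convolutional filters by the sum of their absolute kernel weights and removes the smallest ones. A minimal NumPy sketch of that criterion (not code from the cloned repo; names are hypothetical):

```python
import numpy as np

def rank_filters_by_l1(conv_weights):
    """Rank conv filters by the L1 norm of their kernel weights.

    conv_weights: array of shape (out_channels, in_channels, k, k).
    Returns filter indices sorted from smallest to largest L1 norm;
    the smallest-norm filters are the pruning candidates.
    """
    l1_norms = np.abs(conv_weights).sum(axis=(1, 2, 3))
    return np.argsort(l1_norms)

# Hypothetical 4-filter conv layer with 2 input channels and 3x3 kernels.
w = np.stack([np.full((2, 3, 3), v) for v in (0.1, 0.4, 0.2, 0.3)])
order = rank_filters_by_l1(w)
# Filter 0 has the smallest L1 norm, so it would be pruned first.
```

In the actual method, the pruned layer (and the matching input channels of the next layer) are rebuilt at reduced width and the network is fine-tuned.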
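Weight-level pruning instead zeroes individual weights below a magnitude threshold, then retrains with the resulting sparsity mask fixed. A minimal NumPy sketch of the thresholding step (hypothetical names, not the cloned repo's code):

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the fraction `sparsity` of weights with the smallest magnitude.

    Returns the pruned weights and the binary mask that retraining
    would keep fixed.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy(), np.ones_like(weights)
    # k-th smallest magnitude becomes the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = (np.abs(weights) > threshold).astype(weights.dtype)
    return weights * mask, mask

w = np.array([[0.05, -0.8], [0.3, -0.01]])
pruned, mask = magnitude_prune(w, 0.5)  # drop the 2 smallest-magnitude weights
```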
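The two distillation methods listed above differ in what the student imitates: soft-target distillation matches temperature-softened teacher outputs, while FitNets additionally matches an intermediate "hint" feature. A minimal NumPy sketch of both losses (function names and default hyperparameters are illustrative, not taken from the cloned repos):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, label, T=4.0, alpha=0.9):
    """Soft-target KD loss: alpha * T^2 * KL(teacher || student) at temperature T
    plus (1 - alpha) * cross-entropy with the hard label. The T^2 factor keeps
    the soft-target gradients on the same scale as the hard-label term."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    kl = np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student)))
    ce = -np.log(softmax(student_logits)[label])
    return alpha * T ** 2 * kl + (1 - alpha) * ce

def hint_loss(student_feat, teacher_feat):
    """FitNets stage-1 hint loss: L2 distance between an intermediate student
    feature and the teacher's hint layer (the dimension-matching regressor
    is omitted here)."""
    diff = np.asarray(student_feat) - np.asarray(teacher_feat)
    return 0.5 * np.sum(diff ** 2)

# Identical logits -> zero soft-target term; mismatched logits -> positive loss.
same = distillation_loss([2.0, 1.0, 0.0], [2.0, 1.0, 0.0], label=0, alpha=1.0)
diff = distillation_loss([2.0, 1.0, 0.0], [0.0, 1.0, 2.0], label=0)
```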
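The low-rank approximation entry compresses a layer by factoring its weight matrix into two thinner matrices. A minimal NumPy sketch via truncated SVD (illustrative only; the project's experiments were run in Caffe):

```python
import numpy as np

def low_rank_approx(W, rank):
    """Best rank-`rank` approximation of W via truncated SVD.

    Factors an (m x n) layer into (m x rank) @ (rank x n), storing
    rank*(m + n) parameters instead of m*n.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # (m, rank)
    B = Vt[:rank, :]             # (rank, n)
    return A, B

# A rank-1 weight matrix is reconstructed exactly by its rank-1 factorization.
W = np.outer(np.arange(1.0, 4.0), np.arange(1.0, 5.0))
A, B = low_rank_approx(W, 1)
```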
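The quantization module linked above simulates low-precision arithmetic during training. A minimal NumPy sketch of symmetric uniform quantize-dequantize, the basic operation such modules build on (a simplified stand-in, not the linked code):

```python
import numpy as np

def quantize_uniform(x, num_bits=8):
    """Symmetric uniform quantization: map floats to num_bits signed integer
    levels and back (quantize-dequantize), as used to simulate low-precision
    inference. Returns the dequantized tensor and the scale."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.abs(x).max() / qmax
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return q * scale, scale

x = np.array([-1.0, -0.5, 0.0, 0.25, 1.0])
xq, scale = quantize_uniform(x, num_bits=8)
# Round-trip error is bounded by the quantization step `scale`.
```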
- Report of the research and development project.