Code and documentation for a research and development project on Deep Neural Network Compression, completed as partial fulfillment of the Master of Autonomous Systems program.
- L1-norm pruning: cloned from https://github.com/Eric-mingjie/rethinking-network-pruning/tree/master/cifar/l1-norm-pruning with minor modifications. Based on the implementation of the paper "[Pruning Filters for Efficient ConvNets](https://arxiv.org/pdf/1608.08710.pdf)"
- Weight-level pruning: cloned from https://github.com/Eric-mingjie/rethinking-network-pruning/tree/master/cifar/weight-level with minor modifications. Based on the implementation of the paper "[Learning both Weights and Connections for Efficient Neural Networks](https://arxiv.org/pdf/1506.02626.pdf)"
- Knowledge Distillation methods:
- Cloned from https://github.com/peterliht/knowledge-distillation-pytorch with minor modifications. Based on the implementation of the paper "[Distilling the Knowledge in a Neural Network](https://arxiv.org/pdf/1503.02531.pdf)"
- FitNets implementation: cloned from https://github.com/AberHu/Knowledge-Distillation-Zoo with modifications. Based on the implementation of the paper "[FitNets: Hints for Thin Deep Nets](https://arxiv.org/pdf/1412.6550.pdf)"
- Report of the research and development project
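The filter-pruning component above ranks convolutional filters by the sum of their absolute weights, as described in "Pruning Filters for Efficient ConvNets". The following is a minimal sketch of that ranking criterion only, not the cloned repository's code; the function names and the `prune_ratio` parameter are illustrative.

```python
# Illustrative sketch of the L1-norm filter-ranking criterion:
# filters with the smallest sum of absolute weights are pruned first.

def l1_norm(filt):
    """Sum of absolute values of one (flattened) conv filter."""
    return sum(abs(w) for w in filt)

def filters_to_prune(filters, prune_ratio):
    """Indices of the `prune_ratio` fraction of filters with the smallest L1 norm."""
    ranked = sorted(range(len(filters)), key=lambda i: l1_norm(filters[i]))
    n_prune = int(len(filters) * prune_ratio)
    return sorted(ranked[:n_prune])

# Toy example: 4 flattened filters, prune the weakest half.
filters = [[0.1, -0.1], [1.0, 2.0], [0.0, 0.05], [-3.0, 0.5]]
print(filters_to_prune(filters, 0.5))  # → [0, 2]
```

In the actual per-layer pruning, the corresponding output channels (and the matching input channels of the next layer) are then removed or masked.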
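The knowledge-distillation component trains a small student network to match the softened output distribution of a larger teacher, as in "Distilling the Knowledge in a Neural Network". Below is a minimal sketch of the soft-target term under stated assumptions: the temperature `T`, the toy logits, and the function names are illustrative, and the cloned repository additionally combines this term with the ordinary cross-entropy on the hard labels.

```python
import math

def softmax(logits, T=1.0):
    """Temperature-softened softmax over a list of logits."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_term(student_logits, teacher_logits, T=4.0):
    """Cross-entropy between temperature-softened teacher and student distributions."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    # Scaled by T*T so the soft-target gradients keep a magnitude
    # comparable to the hard-label term, as suggested in the paper.
    return -T * T * sum(p * math.log(q) for p, q in zip(p_teacher, p_student))

loss = distillation_term([2.0, 1.0, 0.1], [3.0, 1.0, 0.2])
print(loss)
```

A higher temperature flattens both distributions, so the student also learns from the relative probabilities the teacher assigns to incorrect classes.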