
Update README and adding images
desinurch committed Jul 27, 2020
1 parent ff48dba commit 310cbd2
Showing 145 changed files with 21 additions and 20,459 deletions.
24 changes: 21 additions & 3 deletions README.md
@@ -1,7 +1,25 @@
# Research and Development Project
## A Comparative Study of Sparsity Methods in Deep Neural Network for Faster Inference

Code and documentation for a research and development project on Deep Neural Network Compression, completed in partial fulfillment of the Master of Autonomous Systems program.

## Overview

A comparison of Deep Learning compression methods on an image classification task. The methods are compared in terms of inference speed using the backbone of the [MLMark benchmark](https://www.eembc.org/mlmark/). The compression methods observed are as follows:

![Compression Methods](/imgs/methods-compression.png)

## Description
### Dataset
The dataset used for the comparison is [CIFAR-10](https://www.cs.toronto.edu/~kriz/cifar.html), chosen to mimic real-life situations.
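
The cloned implementations are PyTorch-based, so a minimal loading sketch for CIFAR-10 is shown below. The normalization constants, augmentation, and batch size are common defaults assumed for illustration, not values taken from this repository.

```python
# Minimal CIFAR-10 loading sketch (illustrative; not this repo's exact pipeline).
import torch
import torchvision
import torchvision.transforms as transforms

# Widely used CIFAR-10 channel statistics; assumed here for illustration.
normalize = transforms.Normalize(mean=(0.4914, 0.4822, 0.4465),
                                 std=(0.2470, 0.2435, 0.2616))

train_set = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True,
    transform=transforms.Compose([
        transforms.RandomCrop(32, padding=4),   # common CIFAR augmentation
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        normalize,
    ]))

train_loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)
```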

### Model Architecture
The dataset is processed using ResNet-56 and ResNet-110 networks with pre-activations. In model distillation mode, both networks act as teachers whose knowledge is transferred to the student networks: ResNet-1, ResNet-10, and ResNet-20.
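
As an illustration of the teacher-student transfer described above, here is a minimal sketch of a standard soft-target distillation loss (Hinton-style). The temperature `T` and weighting `alpha` are illustrative assumptions; the FitNets variant used in this project additionally matches intermediate hint layers, which is omitted here.

```python
# Sketch of a standard soft-target distillation loss; illustrative only.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    # Soft targets: KL divergence between temperature-softened distributions,
    # scaled by T^2 to keep gradient magnitudes comparable.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```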

## Results
![Speedup vs Compression](/imgs/speedup_vs_compression.png)

---

![Accuracy vs Speedup](/imgs/accuracyLoss_vs_sppedUp.png)

### Repository structure:
- L1-norm pruning: cloned from https://github.com/Eric-mingjie/rethinking-network-pruning/tree/master/cifar/l1-norm-pruning with minor modifications. Based on the implementation of the paper [Pruning Filters For Efficient ConvNets](https://arxiv.org/pdf/1608.08710.pdf); see the pruning sketch after this list
@@ -11,4 +29,4 @@
- FitNets implementation: cloned from https://github.com/AberHu/Knowledge-Distillation-Zoo with modifications. Based on the implementation of the paper [FitNets: Hints for Thin Deep Nets](https://arxiv.org/pdf/1412.6550.pdf)
- Low-rank approximations, based on [Caffe: Convolutional Architecture for Fast Feature Embedding](https://arxiv.org/abs/1408.5093)
- Quantization, based on https://github.com/eladhoffer/convNet.pytorch/blob/master/models/modules/quantize.py; see the quantization sketch after this list
- Report of the research and development project
- Results presentations
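
For reference, a minimal sketch of the L1-norm ranking criterion behind [Pruning Filters For Efficient ConvNets](https://arxiv.org/pdf/1608.08710.pdf), the idea used by the cloned pruning code. The `keep_ratio` parameter is an illustrative assumption; the actual layer surgery and retraining are handled by the cloned repository.

```python
# Sketch of the L1-norm filter-ranking criterion: rank each conv filter by
# the L1 norm of its weights and keep the strongest ones. Illustrative only.
import torch
import torch.nn as nn

def filters_to_keep(conv: nn.Conv2d, keep_ratio: float = 0.7) -> torch.Tensor:
    # conv.weight has shape (out_channels, in_channels, kH, kW);
    # the sum of absolute weights per output filter is its L1 norm.
    l1 = conv.weight.detach().abs().sum(dim=(1, 2, 3))
    n_keep = max(1, int(keep_ratio * conv.out_channels))
    # Indices of the n_keep filters with the largest L1 norms.
    return torch.argsort(l1, descending=True)[:n_keep]
```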
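
Similarly, a minimal sketch of uniform fake quantization, the idea behind the linked `quantize.py`: simulate k-bit values in floating point by rounding onto a uniform grid. This is a simplified illustration, not the linked module's exact scheme.

```python
# Sketch of uniform fake quantization; illustrative only.
import torch

def fake_quantize(x: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    qmax = 2 ** num_bits - 1
    lo, hi = x.min(), x.max()
    scale = (hi - lo).clamp(min=1e-8) / qmax     # step size of the uniform grid
    q = torch.round((x - lo) / scale)            # map to integer levels 0..qmax
    return q * scale + lo                        # de-quantize back to float
```
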
Binary file added imgs/accuracyLoss_vs_sppedUp.png
Binary file added imgs/methods-compression.png
Binary file added imgs/speedup_vs_compression.png
268 changes: 0 additions & 268 deletions src/FitNets/train_at.py

This file was deleted.
