# Research and Development Project

## A Comparative Study of Sparsity Methods in Deep Neural Networks for Faster Inference

Code and documentation for a research and development project on Deep Neural Network Compression, completed in partial fulfillment of the Master of Autonomous Systems program.

## Overview

A comparison of compression methods in deep learning for the image classification task. The comparison is made in terms of inference speed, using the backbone of the MLMark benchmark. The compression methods examined are as follows:

| Compression Method | Description |
| --- | --- |
## Dataset

The dataset used for the comparison is CIFAR-10, chosen to mimic real-life situations.

## Model Architecture

The dataset is processed using ResNet-56 and ResNet-110 networks with pre-activations. In model distillation mode, both networks act as teachers whose knowledge is transferred to the student networks ResNet-1, ResNet-10, and ResNet-20.
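The teacher-to-student transfer described above is usually trained with a distillation loss: the KL divergence between the teacher's and student's temperature-softened output distributions. A minimal sketch of that loss, assuming Hinton-style distillation with the standard T² scaling (the function names and the temperature value are illustrative assumptions, not this project's exact training code):

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=4.0):
    """KL divergence between softened teacher and student
    distributions, scaled by T^2 so gradients keep a comparable
    magnitude across temperatures."""
    p = softmax(teacher_logits, temperature)  # teacher soft targets
    q = softmax(student_logits, temperature)  # student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return temperature ** 2 * kl
```

During training this term is typically combined with the ordinary cross-entropy loss on the ground-truth labels, weighted by a mixing coefficient.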

## Results

### Speedup vs Compression

### Accuracy vs Speedup

## Repository Structure