desinurch/Compression_RnD_HBRS

Research and Development Project

A Comparative Study of Sparsity Methods in Deep Neural Network for Faster Inference

Code and documentation for a research and development project on Deep Neural Network compression, submitted in partial fulfillment of the Master of Autonomous Systems program.

Overview

This project compares compression methods in deep learning for the image classification task. The comparison is done in terms of inference speed, using the backbone of the MLMark benchmark. The compression methods observed are as follows:
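The speed comparison boils down to measuring average inference latency per batch and reporting the ratio between the dense baseline and each compressed model. A minimal sketch of that measurement, independent of the MLMark harness (the function names and warm-up scheme below are illustrative assumptions, not MLMark's actual API):

```python
import time

def measure_latency(model_fn, inputs, warmup=3, runs=20):
    """Average per-batch inference latency in seconds.

    `model_fn` is any callable taking one input batch (a hypothetical
    stand-in for a forward pass). Warm-up iterations are discarded so
    one-time costs (caching, JIT) do not skew the timing.
    """
    for x in inputs[:warmup]:
        model_fn(x)
    start = time.perf_counter()
    n = min(runs, len(inputs))
    for x in inputs[:n]:
        model_fn(x)
    return (time.perf_counter() - start) / n

def speedup(baseline_latency, compressed_latency):
    """Speedup factor of the compressed model over the dense baseline."""
    return baseline_latency / compressed_latency
```

For example, a compressed model averaging 0.5 ms per batch against a 2 ms baseline yields a 4x speedup, which is the quantity plotted in the results below.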

| Compression Methods | Description |
| --- | --- |

Dataset

The dataset used for comparison is CIFAR-10, chosen to mimic real-life image classification conditions.

Model Architecture

The dataset is processed using ResNet-56 and ResNet-110 networks with pre-activations. In model distillation mode, both networks act as teachers whose knowledge is transferred to the student networks: ResNet-1, ResNet-10, and ResNet-20.
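In knowledge distillation, the student is trained against the teacher's temperature-softened output distribution in addition to the hard labels. A minimal NumPy sketch of the standard Hinton-style loss (the temperature `T` and blend weight `alpha` values here are illustrative defaults, not the settings used in this project):

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; higher T softens the distribution.
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """KL divergence between softened teacher and student distributions,
    blended with ordinary cross-entropy on the hard labels.
    The T**2 factor keeps gradient magnitudes comparable across temperatures."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    p = softmax(student_logits)
    ce = -np.log(p[np.arange(len(labels)), labels] + 1e-12)
    return float(np.mean(alpha * (T ** 2) * kl + (1 - alpha) * ce))
```

When the student's logits match the teacher's, the KL term vanishes and only the hard-label cross-entropy remains; a disagreeing student is penalized by both terms.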

Results

Speedup vs Compression

Accuracy vs Speedup

Repository structure:
