Pytorch-Quantization-Example

This repository provides an example of Quantization-Aware Training (QAT) using the PyTorch framework, specifically applied to the MNIST dataset. It demonstrates how to prepare, train, and convert a neural network model for efficient deployment on hardware with limited computational resources.
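The core of QAT in PyTorch is inserting fake-quantization observers into the model before training and folding them into integer kernels afterwards. Below is a minimal eager-mode sketch of that prepare → train → convert flow; the architecture and names are illustrative and not taken from this repository's scripts:

```python
import torch
import torch.nn as nn
import torch.ao.quantization as tq

# Hypothetical tiny model with explicit quant/dequant boundaries (not the repo's architecture).
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = tq.QuantStub()      # marks where tensors enter the quantized region
        self.conv = nn.Conv2d(1, 8, 3)
        self.relu = nn.ReLU()
        self.fc = nn.Linear(8 * 26 * 26, 10)
        self.dequant = tq.DeQuantStub()  # marks where tensors leave the quantized region

    def forward(self, x):
        x = self.quant(x)
        x = self.relu(self.conv(x))
        x = torch.flatten(x, 1)
        x = self.fc(x)
        return self.dequant(x)

model = TinyNet()
model.qconfig = tq.get_default_qat_qconfig("fbgemm")
tq.prepare_qat(model, inplace=True)      # insert fake-quant observers for QAT

# ... run a normal training loop here; weights and activations are fake-quantized ...

model.eval()
quantized_model = tq.convert(model)      # fold observers into real int8 modules
```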

Getting Started

Requirements

Please refer to PyTorch's official installation guide for instructions on installing PyTorch and torchvision.

Installation

Clone this repository to your local machine:

git clone git@github.com:james397520/Pytorch-Quantization-Example.git
cd Pytorch-Quantization-Example

Install the necessary Python packages:

pip install -r requirements.txt

Training the MNIST Model

Training the Floating-Point Model

To train the floating-point model using the MNIST dataset:

python mnist_float.py
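
For reference, a typical MNIST training loop with torchvision looks roughly like the sketch below; the actual mnist_float.py may use a different architecture and hyperparameters:

```python
import torch
from torch import nn, optim
from torchvision import datasets, transforms

# Illustrative only: a generic floating-point MNIST training step.
transform = transforms.Compose(
    [transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]
)
train_set = datasets.MNIST("data", train=True, download=True, transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10))
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```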

Quantization-Aware Training (QAT)

Change into the QAT directory:

cd QAT

8-bit QAT example:

python mnist_8bit.py

4-bit QAT example:

python mnist_4bit.py
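
PyTorch's built-in quantization backends execute int8 kernels, so 4-bit QAT is usually emulated by narrowing the fake-quantizer's integer range. The following is a hedged sketch of such a qconfig; the observer choices and the stand-in model are illustrative, not the code in mnist_4bit.py:

```python
import torch
import torch.nn as nn
import torch.ao.quantization as tq

# 4-bit ranges are simulated via fake quantization; the underlying kernels remain int8/float.
act_fq = tq.FakeQuantize.with_args(
    observer=tq.MovingAverageMinMaxObserver,
    quant_min=0, quant_max=15,            # 4-bit unsigned range for activations
    dtype=torch.quint8, qscheme=torch.per_tensor_affine,
)
wt_fq = tq.FakeQuantize.with_args(
    observer=tq.MovingAverageMinMaxObserver,
    quant_min=-8, quant_max=7,            # 4-bit signed range for weights
    dtype=torch.qint8, qscheme=torch.per_tensor_symmetric,
)
qconfig_4bit = tq.QConfig(activation=act_fq, weight=wt_fq)

# Hypothetical stand-in model with quant/dequant stubs (not the repo's architecture).
model = nn.Sequential(tq.QuantStub(), nn.Flatten(), nn.Linear(28 * 28, 10), tq.DeQuantStub())
model.qconfig = qconfig_4bit
model.train()
tq.prepare_qat(model, inplace=True)       # inserts 4-bit fake-quant modules for QAT
```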

Testing the Quantized Model

python test_quantized_model.py
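
For reference, a generic accuracy check on the MNIST test split might look like the sketch below; test_quantized_model.py may load and evaluate the model differently:

```python
import torch
from torchvision import datasets, transforms

# Illustrative only: evaluate classification accuracy on the MNIST test set.
def evaluate(model):
    transform = transforms.Compose(
        [transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]
    )
    test_set = datasets.MNIST("data", train=False, download=True, transform=transform)
    loader = torch.utils.data.DataLoader(test_set, batch_size=256)

    model.eval()
    correct = 0
    with torch.no_grad():
        for images, labels in loader:
            preds = model(images).argmax(dim=1)
            correct += (preds == labels).sum().item()
    return correct / len(test_set)

# Usage (hypothetical): accuracy = evaluate(quantized_model), where quantized_model
# is the int8 model produced by torch.ao.quantization.convert() after QAT.
```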

Contributing

Contributions are welcome! Please open an issue or submit a pull request for any improvements or additions.
