
Add support for model quantization. #8205


Description

@binliunls

Is your feature request related to a problem? Please describe.
To improve inference performance, a deep learning model can be quantized to a lower-precision format such as int8/int4 with an acceptable drop in accuracy. Here are some examples:

  1. PyTorch official quantization
  2. NVIDIA library
  3. ONNX library (a minimal sketch follows this list)
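
As an illustration, here is a minimal sketch of post-training dynamic quantization with ONNX Runtime. The file paths are placeholders, and it assumes an FP32 ONNX model has already been exported (e.g. via `torch.onnx.export`):

```python
# Minimal sketch, assuming an exported ONNX model exists at "model.onnx".
# Requires the onnxruntime package; both paths are placeholders.
from onnxruntime.quantization import QuantType, quantize_dynamic

quantize_dynamic(
    model_input="model.onnx",        # FP32 model exported beforehand
    model_output="model.int8.onnx",  # same graph with int8 weights
    weight_type=QuantType.QInt8,
)
```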

Medical image inference is often slow because of the 3D shapes and large sizes involved. Since MONAI already supports ONNX and TensorRT export, it would be natural to leverage the quantization features of these formats to achieve lower inference latency for medical images. This would in turn benefit edge and networked applications, both of which depend on low latency.
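
For reference, a minimal sketch of PyTorch post-training dynamic quantization. The toy network is a stand-in for a real MONAI model; note that dynamic quantization mainly targets `nn.Linear`/`nn.LSTM` on CPU, so 3D convolutional networks would need static quantization or quantization-aware training instead:

```python
import torch
import torch.nn as nn

# Toy stand-in for a MONAI network (hypothetical; dynamic quantization
# covers the nn.Linear layers here, not 3D convolutions).
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 4)).eval()

# Post-training dynamic quantization: weights become int8 up front,
# activations are quantized on the fly during inference.
qmodel = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

with torch.inference_mode():
    out = qmodel(torch.randn(1, 64))
print(out.shape)  # torch.Size([1, 4])
```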

Describe the solution you'd like
APIs to convert, save, load, and deploy quantized models.
Functions to perform the corresponding actions in Python scripts.
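
A rough sketch of what such an API could look like. Everything here is hypothetical: `quantize_network` does not exist in MONAI today, and its name and signature are placeholders meant only to illustrate the requested convert/save flow (it wraps PyTorch dynamic quantization purely as an example):

```python
import torch
from monai.networks.nets import UNet

def quantize_network(net: torch.nn.Module,
                     dtype: torch.dtype = torch.qint8) -> torch.nn.Module:
    """HYPOTHETICAL helper: convert a trained network to a lower-precision
    version. Shown here as a thin wrapper over PyTorch dynamic quantization
    for illustration only."""
    net.eval()
    return torch.ao.quantization.quantize_dynamic(
        net, {torch.nn.Linear}, dtype=dtype
    )

net = UNet(spatial_dims=3, in_channels=1, out_channels=2,
           channels=(8, 16, 32), strides=(2, 2))
qnet = quantize_network(net)

# Saving/loading would follow the usual PyTorch conventions.
torch.save(qnet.state_dict(), "qnet.pt")
```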
