
[Feature Request] 4bit and 2bit and 1bit quantization support #14997

Open

Description

Describe the feature request

Support for quantizing models to 4-bit, 2-bit, and 1-bit precision, and for running the quantized models, as well as saving and loading them in ONNX format for smaller file sizes.
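For concreteness, here is a minimal sketch of what the quantization side could look like, assuming symmetric per-tensor scaling with two 4-bit values packed per byte; `quantize_4bit` is a hypothetical helper written to illustrate the request, not an existing ONNX Runtime API:

```python
# Hypothetical sketch: symmetric 4-bit quantization with two values packed
# per byte. Not part of any existing ONNX Runtime API.
import numpy as np

def quantize_4bit(weights: np.ndarray) -> tuple[np.ndarray, float]:
    """Quantize float weights to signed 4-bit values in [-8, 7] and
    pack two of them into each uint8."""
    scale = float(np.abs(weights).max()) / 7.0   # largest magnitude maps to 7
    q = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)
    q = (q & 0x0F).astype(np.uint8)              # keep the low nibble (two's complement)
    if q.size % 2:                               # pad to an even count of nibbles
        q = np.append(q, np.uint8(0))
    return (q[0::2] << 4) | q[1::2], scale       # high nibble | low nibble
```

Packing this way stores two weights per byte, which is where the file-size reduction comes from: half the size of int8 and an eighth of float32 for the weight data.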

The GPU does not necessarily have to support native 4-bit operations, since the GPU cores can convert the packed weights to float or int8 values on the fly when they are needed.
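As a sketch of that on-the-fly conversion, the hypothetical `dequantize_4bit` below unpacks the nibbles, restores their sign, and rescales to float32 before the actual compute; a real implementation would run this as a GPU kernel rather than in NumPy:

```python
# Hypothetical sketch of on-the-fly dequantization: unpack the 4-bit
# nibbles, restore their sign, and rescale to float32 before the compute.
import numpy as np

def dequantize_4bit(packed: np.ndarray, scale: float, n: int) -> np.ndarray:
    """Unpack n signed 4-bit values from `packed` and return float32 weights."""
    q = np.empty(packed.size * 2, dtype=np.int8)
    q[0::2] = (packed >> 4) & 0x0F                # high nibbles
    q[1::2] = packed & 0x0F                       # low nibbles
    q = np.where(q > 7, q - 16, q)                # undo the two's-complement wrap
    return q[:n].astype(np.float32) * scale
```

Running this together with the `quantize_4bit` sketch above round-trips the weights, recovering them up to the 4-bit rounding error.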

Describe scenario use case

Some models, such as large language models, are very large but still run fairly well when quantized down to 8-bit, 4-bit, 2-bit, or even 1-bit.


Metadata


Labels

feature request (request for unsupported feature or enhancement), quantization (issues related to quantization)
