Description
Describe the feature request
Support for quantizing models to 4-bit, 2-bit, and 1-bit precision and running them in that form, as well as saving and loading these models in the ONNX format for smaller file sizes.
The GPU does not necessarily need native 4-bit arithmetic: the packed values can be unpacked on regular GPU cores and converted to float or int8 operations whenever they are needed.
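A rough sketch of the idea in plain NumPy (the helper names `quantize_to_int4` and `dequantize_int4` are hypothetical, used only for illustration, not an existing API): 4-bit weights are stored two per byte, and they are unpacked back to float32 right before the matmul, so no native sub-byte hardware support is required.

```python
import numpy as np

def quantize_to_int4(weights: np.ndarray):
    """Symmetric per-tensor quantization to 4-bit, packed two values per byte."""
    scale = np.abs(weights).max() / 7.0                # symmetric int4 range [-7, 7]
    q = np.clip(np.round(weights / scale), -7, 7).astype(np.int8)
    q_u = (q & 0x0F).astype(np.uint8)                  # keep the low nibble (two's complement)
    if q_u.size % 2:                                   # pad to an even count before packing
        q_u = np.append(q_u, np.uint8(0))
    packed = (q_u[0::2] | (q_u[1::2] << 4)).astype(np.uint8)
    return packed, scale

def dequantize_int4(packed: np.ndarray, scale: float, count: int) -> np.ndarray:
    """Unpack the nibbles and convert back to float32 just before compute."""
    lo = (packed & 0x0F).astype(np.int16)
    hi = (packed >> 4).astype(np.int16)
    q = np.empty(packed.size * 2, dtype=np.int16)
    q[0::2], q[1::2] = lo, hi
    q = np.where(q > 7, q - 16, q)                     # restore the sign of each nibble
    return q[:count].astype(np.float32) * scale

w = np.random.randn(4096, 4096).astype(np.float32)
packed, scale = quantize_to_int4(w.ravel())
print(w.nbytes, packed.nbytes)                         # roughly 8x smaller on disk / in memory
w_hat = dequantize_int4(packed, scale, w.size).reshape(w.shape)
```

The same packing scheme extends to 2-bit (four values per byte) and 1-bit (eight values per byte); only the unpacking step and the quantization range change.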
Describe scenario use case
Some models, such as large language models, are very large but still run reasonably well when quantized down to 8-bit, 4-bit, 2-bit, or even 1-bit precision.