Fix wrong scale eps applied #1770

Open - wants to merge 1 commit into main
60 changes: 60 additions & 0 deletions test/quantization/test_quant_primitives.py
@@ -961,6 +961,66 @@ def test_float8_quant_primitives(self, hp_dtype, float8_dtype):
        torch.testing.assert_close(expected_quantized, quantized)
        torch.testing.assert_close(expected_dequantized, dequantized)

    @parameterized.expand(
        [
            torch.float64,
            torch.float32,
            torch.bfloat16,
            torch.float16,
        ]
    )
    def test_choose_qparams_affine_for_inf_scale_reciprocal(self, hp_dtype):
        # Fixed by #1770; the test will fail for all the variants
        # before that fix, and will pass afterwards.
        #
        # The scale value must be forcefully clamped, within the
        # _choose_qparams_affine() function (which
        # choose_qparams_affine() and others call into), to a large
        # enough number so that its reciprocal does not become Inf.
        # Otherwise, during quantization, multiplying by the scale
        # reciprocal turns every value into Inf, except for zero,
        # which produces NaN (0*Inf) as its quantized value.
        #
        # The minimal normalized value for a given floating point
        # data type is given by torch.finfo(hp_dtype).tiny - let's
        # call this value "tiny". One can check that, for all of
        # torch.float64, torch.float32, torch.float16 and
        # torch.bfloat16, the denormalized number equal to tiny/4
        # produces Inf as its reciprocal.
        #
        # Thus, to reproduce the problem, one creates a tensor whose
        # absolute maximum, after being divided by the range of the
        # quantized data type (which is 57344 for torch.float8_e5m2),
        # produces a scale smaller than tiny/4. Also, the eps
        # parameter should be set to a value no greater than tiny/4,
        # as the scale is clamped from below to eps. With such
        # inputs, choose_qparams_affine() will produce a scale whose
        # reciprocal is Inf.
        #
        # Note that this may seem a contrived reproducer. However,
        # there is existing code that passes
        # torch.finfo(torch.float32).eps as the eps value, regardless
        # of scale_dtype. float16 has a rather small range, and
        # torch.finfo(torch.float32).eps is well below
        # torch.finfo(torch.float16).tiny, so for such an eps value
        # the code below would produce an Inf scale reciprocal even
        # for a float16 tensor that has 0.5 as its maximum value.
        float8_dtype = torch.float8_e5m2
        tiny = torch.finfo(hp_dtype).tiny
        x = torch.tensor([[0, 100 * tiny]], dtype=hp_dtype)
        scale, _ = choose_qparams_affine(
            input=x,
            mapping_type=MappingType.SYMMETRIC,
            block_size=[1, 2],
            target_dtype=float8_dtype,
            eps=tiny / 4,
            scale_dtype=hp_dtype,
            preserve_zero=True,
            zero_point_domain=ZeroPointDomain.NONE,
        )
        scale_reciprocal = scale.reciprocal()
        assert not torch.any(torch.isinf(scale_reciprocal)).item()


if __name__ == "__main__":
    unittest.main()
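
As an aside, here is a standalone sketch (mine, not part of the diff) of the reciprocal behavior the test comment above describes - in each of the tested dtypes, tiny has a finite reciprocal, while the subnormal value tiny/4 overflows to Inf:

import torch

for dtype in (torch.float64, torch.float32, torch.bfloat16, torch.float16):
    tiny = torch.finfo(dtype).tiny  # smallest normalized value
    t = torch.tensor([tiny, tiny / 4], dtype=dtype)
    # Expected: [False, True] - the reciprocal of the subnormal tiny/4
    # overflows past torch.finfo(dtype).max in every one of these dtypes.
    print(dtype, t.reciprocal(), torch.isinf(t.reciprocal()).tolist())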
19 changes: 18 additions & 1 deletion torchao/quantization/quant_primitives.py
@@ -862,6 +862,7 @@ def _choose_qparams_affine(
    3. calculate quantization parameters based on min_val/max_val based on args like `preserve_zero`
       and `zero_point_domain`
    """

    quant_min, quant_max = _get_and_check_qmin_qmax(target_dtype, quant_min, quant_max)
    assert mapping_type in [
        MappingType.SYMMETRIC.name,
@@ -909,6 +910,16 @@ def _choose_qparams_affine(
    min_val_neg = min_val
    max_val_pos = max_val

    # Prevent the reciprocal of scale, calculated below, from becoming Inf.
    if torch.is_floating_point(max_val):
        # In this case, scale will be calculated below in
        # max_val.dtype.
        eps = max(eps, torch.finfo(max_val.dtype).tiny)
Contributor:

where is eps used? I see that on L984 we are calculating eps again, just wondering if we need both calculations?

Collaborator Author:

It is used in lines 957 and 961 (line numbers are after the changes by this PR), to clamp scale in order to prevent 1/scale from becoming Inf. In the second case, this is immediately necessary, as dividing by scale is already performed in line 966. Furthermore, towards the end of the function, in line 980, scale may be converted to a data type different from the one used up to that point, which could push scale into the subnormal range for this final data type, so it's necessary to clamp again from below (and, while we're at it, I'm clamping from above too).
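
A small numeric sketch (my illustration, not code from the PR) of the dtype-conversion hazard described above - a scale that is normal in float32 can land in float16's subnormal range after conversion, at which point its reciprocal overflows to Inf:

import torch

# A scale computed in float32: normal, with a finite reciprocal.
scale_f32 = torch.tensor([1e-5], dtype=torch.float32)
print(scale_f32.reciprocal())  # tensor([100000.])

# After conversion to float16 the same value is subnormal (below
# torch.finfo(torch.float16).tiny, ~6.1e-5), and its reciprocal
# overflows float16's maximum (65504) to Inf.
scale_f16 = scale_f32.to(torch.float16)
print(scale_f16.reciprocal())  # tensor([inf], dtype=torch.float16)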

Contributor:

I see, so eps is read as an argument. In that case, it's a bit confusing to silently override eps here. Is there a way to set it correctly at the callsite (using the logic you have here) so the setting is honored in this function?

Overall logic of how to choose eps looks great, now I'm just trying to help fit this in cleanly :)

Collaborator Author:

I understand - it is not nice that the argument may get changed silently, but there are a number of call sites, so it seems to me that, for maintenance etc., the best place to fix it is here, in a single place. If a silent change is considered too intrusive, maybe I can add a printout for the case when a change is needed? Overall: I think I mentioned elsewhere that the best way to fix this may be to drop eps from the argument list, but it's already in a public interface...

Contributor @vkuzo (Mar 13, 2025):

in a codepath such as quant_primitives.py which is supposed to be the canonical way these calculations are done, IMO silently modifying a passed-in argument is not something that should be landed

it's understandable if you don't want to sign up for changing every callsite - in that case a good way to wrap this up could be:

  1. create a test case which fails
  2. skip the test case
  3. create an issue asking to fix it properly and unskip the test case

then someone else could pick up the fix
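
For reference, a minimal sketch of that suggested workflow (the test name is the one from this PR; the skip reason and tracking issue are hypothetical):

import unittest

class TestQuantPrimitives(unittest.TestCase):
    # 1. the failing reproducer goes in the test body;
    # 2. the test is skipped until the proper fix lands;
    # 3. a tracking issue (hypothetical here) asks to unskip it.
    @unittest.skip("scale reciprocal becomes Inf; see tracking issue")
    def test_choose_qparams_affine_for_inf_scale_reciprocal(self):
        ...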

Collaborator Author:

What's the definition of "visible to user"? Every function that I mentioned above is visible to the user in the sense that it is possible for the user to import the module and use the function - and all of them expose eps as an argument, so it's plain impossible to find all call sites, let alone fix them. On the other hand, if we say quantize_ is the only API visible to users, then I believe eps is not visible to users at all (not 100% sure - maybe there is a config exposing it), which means the eps argument is used by torchao internally only, and as apparently it doesn't get used the right way, the best fix is just to remove it as an argument everywhere, and keep the check added by this PR.

Contributor:

we can treat "visible to the user" as "in torchao repository".

> it's plain impossible to find all call sites, let alone fix them.

it's definitely possible to fix this for callsites inside of torchao

Collaborator Author @alexsamardzic (Mar 17, 2025):

Please understand that I'm not into nit-picking. However, here we have a plain simple case of handling an invalid argument value, and we're going to great lengths about what is, in a nutshell, an aesthetics argument.

The invalid eps value could be silently changed (and I think it's the best idea, as this change does "the right thing", i.e. makes it possible to do the quantization later, keeping the quantized range as big as possible), with or without printing a warning to the user. Alternatively, we could throw an exception. Or, we could decide that this is all just a contrived corner case that most likely won't happen in practice, so we change nothing. With any of these, we're resolving this issue once and for all. On the other hand, if I, for example, make the fix here (the reproducer is below), I really don't see how that is going to prevent another torchao developer down the road from making this same omission when writing a similar handler for a new config. So it's not that I'm lazy or whatever to do what you're suggesting, it's simply that I don't want to do that.

Script to reproduce the issue in the case of integer quantization - here the problem manifests itself as too coarse quantization; with the fix in this PR the quantization is much better:

import torch
import torchao

from torchao.dtypes import to_affine_quantized_intx
from torchao.quantization import (
    Int8WeightOnlyConfig,
    MappingType,
    quantize_,
)

dtype = torch.float16
tiny = torch.finfo(dtype).tiny

model = torch.nn.Linear(1, 4, dtype=dtype)
model.weight = torch.nn.Parameter(
    torch.tensor([[0, 10 * tiny, 20 * tiny, 30 * tiny]], dtype=dtype)
)
print(model.weight)

quantize_(model, Int8WeightOnlyConfig())
print(model.weight)

Contributor:

TBH, I'd agree with Vasiliy that we can do an assert for args, but not change args in the function - unless eps is not provided by the user, in which case we can set a reasonable default.

We are planning some refactors, dropping the preserve_zero and zero_point_domain args for the quant primitive ops (cc @jainapurva, who will work on this soon); we could drop eps as well if it makes sense, or we can set it to None the majority of the time.

Collaborator Author:

> TBH, I'd agree with Vasiliy that we can do an assert for args, but not change args in the function - unless eps is not provided by the user, in which case we can set a reasonable default.

I have no issues with doing an assert instead of changing arguments. I've updated the PR with this approach.
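
That assert-based variant might look roughly like the following (a sketch based on my reading of the thread; the actual updated code may differ):

# Inside _choose_qparams_affine(), instead of silently overriding eps,
# reject values that would let 1/scale overflow to Inf.
if torch.is_floating_point(max_val):
    min_eps = torch.finfo(max_val.dtype).tiny
else:
    min_eps = torch.finfo(torch.float32).tiny
assert eps >= min_eps, (
    f"eps {eps} too small: scale could underflow and its reciprocal "
    f"become Inf (minimum allowed is {min_eps})"
)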

    else:
        # In this case, scale will be calculated below in
        # torch.float32 dtype.
        eps = max(eps, torch.finfo(torch.float32).tiny)

    if (
        mapping_type == MappingType.SYMMETRIC.name
        or mapping_type == MappingType.SYMMETRIC_NO_CLIPPING_ERR.name
@@ -969,7 +980,13 @@ def _choose_qparams_affine(

     if zero_point is not None:
         zero_point = zero_point.to(dtype=zero_point_dtype)
-    return scale.to(dtype=scale_dtype), zero_point
+    scale = scale.to(dtype=scale_dtype)
+    if torch.is_floating_point(scale):
+        # Again, prevent the scale reciprocal from becoming Inf.
+        scale = scale.clamp(
+            min=torch.finfo(scale_dtype).tiny, max=torch.finfo(scale_dtype).max
+        )
+    return scale, zero_point


def choose_qparams_and_quantize_affine_qqq(
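
As a final aside, the reviewer's "fix it at the callsite" idea could look roughly like the following hypothetical helper (my sketch, not part of the PR):

import torch

def safe_eps(user_eps: float, scale_dtype: torch.dtype) -> float:
    # Hypothetical helper: raise a user-supplied eps to the smallest
    # normalized value of scale_dtype, so that a scale clamped to it
    # stays normal and 1/scale stays finite.
    assert scale_dtype.is_floating_point
    return max(user_eps, torch.finfo(scale_dtype).tiny)

# For example, torch.finfo(torch.float32).eps (~1.19e-7) passed with
# scale_dtype=torch.float16 gets raised to float16's tiny (~6.1e-5):
print(safe_eps(torch.finfo(torch.float32).eps, torch.float16))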