Don't call `dequantize` in `__repr__` (pytorch#965)
Summary:

To reduce the need for `dequantize` to work when people add new things, we no longer call it in `__repr__`. If people want to see the dequantized value, they can print `weight.dequantize()` explicitly instead of relying on `__repr__`.

Test Plan:

Tested locally:

```
from torchao import quantize_, int8_weight_only
import torch

l = torch.nn.Linear(2, 2)
quantize_(l, int8_weight_only())
print(l.weight)
print(l.weight.dequantize())
```

```
AffineQuantizedTensor(layout_tensor=PlainAQTLayout(data=tensor([[ 127, -77],
        [-128, -40]], dtype=torch.int8)... , scale=tensor([0.0007, 0.0032])... , zero_point=tensor([0, 0])... , layout_type=PlainLayoutType()), block_size=(1, 2), shape=torch.Size([2, 2]), device=cpu, dtype=torch.float32, requires_grad=False)
tensor([[ 0.0856, -0.0519],
        [-0.4070, -0.1272]])
```

Reviewers:

Subscribers:

Tasks:

Tags:
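The pattern behind this change can be sketched with a minimal standalone class (hypothetical names, not torchao's actual `AffineQuantizedTensor` implementation): `__repr__` reports only the raw stored fields, and dequantization happens only when the user explicitly asks for it, so printing keeps working even for layouts where `dequantize` is not yet implemented.

```python
import torch

class SimpleQuantizedTensor:
    """Hypothetical sketch of a quantized-weight wrapper."""

    def __init__(self, int_data: torch.Tensor, scale: torch.Tensor):
        self.int_data = int_data  # e.g. torch.int8 values
        self.scale = scale        # per-row scales

    def dequantize(self) -> torch.Tensor:
        # Explicit reconstruction of the float values, done only on request.
        return self.int_data.to(torch.float32) * self.scale

    def __repr__(self) -> str:
        # Show only the stored fields; never call dequantize() here, so that
        # repr stays cheap and works even if dequantize is unimplemented.
        return (f"SimpleQuantizedTensor(int_data={self.int_data}, "
                f"scale={self.scale})")

w = SimpleQuantizedTensor(
    torch.tensor([[127, -77]], dtype=torch.int8),
    torch.tensor([0.001]),
)
print(w)               # safe: prints raw int_data and scale
print(w.dequantize())  # explicit opt-in to the float view
```

Keeping `__repr__` free of `dequantize` means adding a new layout only requires storage and printing to work; the float reconstruction path can be implemented (or debugged) separately.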