Commit 53b6b78

Update README.md (#903)

Fixing a link that got broken by the fpx -> floatx directory rename.

1 parent 4865ee6 commit 53b6b78

File tree: 1 file changed, +1 −1 lines

torchao/quantization/README.md

Lines changed: 1 addition & 1 deletion
````diff
@@ -129,7 +129,7 @@ from torchao.quantization import quantize_, fpx_weight_only
 quantize_(model, fpx_weight_only(3, 2))
 ```
 
-You can find more information [here](../dtypes/fpx/README.md). It should be noted where most other TorchAO apis and benchmarks have focused on applying techniques on top of a bf16 model, performance, fp6 works primarily with the fp16 dtype.
+You can find more information [here](../dtypes/floatx/README.md). It should be noted where most other TorchAO apis and benchmarks have focused on applying techniques on top of a bf16 model, performance, fp6 works primarily with the fp16 dtype.
 
 ## Affine Quantization Details
 Affine quantization refers to the type of quantization that maps from high precision floating point numbers to quantized numbers (low precision integer or floating point dtypes) with an affine transformation, i.e.: `quantized_val = high_precision_float_val / scale + zero_point` where `scale` and `zero_point` are quantization parameters for some granularity and based on some data (also some dtypes may not require a `zero_point`). Each of the techniques in the above section qualify as Affine Quantization.
````
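The affine mapping quoted in the context lines above (`quantized_val = high_precision_float_val / scale + zero_point`) can be sketched in plain Python. This is an illustrative example only; the helper names `affine_quantize` and `affine_dequantize` are hypothetical and are not part of the torchao API, and the int8 range and per-tensor parameters are assumptions for the demo.

```python
# Illustrative sketch of the affine quantization formula from the README:
#   quantized_val = high_precision_float_val / scale + zero_point
# Helper names are hypothetical, not torchao internals.

def affine_quantize(x: float, scale: float, zero_point: int,
                    qmin: int = -128, qmax: int = 127) -> int:
    """Map a high-precision float to a value in the quantized (int8) range."""
    q = round(x / scale) + zero_point
    return max(qmin, min(qmax, q))  # clamp to the quantized dtype's range

def affine_dequantize(q: int, scale: float, zero_point: int) -> float:
    """Invert the affine map (exact up to rounding and clamping error)."""
    return (q - zero_point) * scale

# Round-trip example with assumed per-tensor parameters:
scale, zero_point = 0.5, 10
q = affine_quantize(1.0, scale, zero_point)   # round(1.0 / 0.5) + 10 = 12
x = affine_dequantize(q, scale, zero_point)   # (12 - 10) * 0.5 = 1.0
```

Note that values falling outside the representable range are clamped rather than wrapped, which is why `scale` and `zero_point` are chosen per some granularity (per tensor, per channel, per group) based on the observed data.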

0 commit comments
