This repository was archived by the owner on Aug 7, 2024. It is now read-only.

Commit 0ec7ada

Update on "Add rowwise scaling to Float8Inference module"
# Summary

# Performance

Need to investigate the Rowwise dynamic case; I would expect it to be faster than TensorWise dynamic.

```Shell
Benchmark Results:
+--------------------------+-------------+
| Variant                  |   Time (μs) |
+==========================+=============+
| BF16                     |     2540.56 |
+--------------------------+-------------+
| FP8 Dynamic              |     1512.96 |
+--------------------------+-------------+
| FP8 Static               |     1363.75 |
+--------------------------+-------------+
| FP8 Weight Only          |     2774.22 |
+--------------------------+-------------+
| FP8 Dynamic AxisWise     |     1510.82 |
+--------------------------+-------------+
| FP8 Static AxisWise      |     1438.92 |
+--------------------------+-------------+
| FP8 Weight Only AxisWise |     2762.88 |
+--------------------------+-------------+

Comparison Results:
+--------------------------+-------------+-------------------+---------------+
| Variant                  |   Time (μs) |   Speedup vs BF16 |   MAE vs BF16 |
+==========================+=============+===================+===============+
| BF16                     |     2540.56 | 1.00x             |    0          |
+--------------------------+-------------+-------------------+---------------+
| FP8 Dynamic              |     1512.96 | 1.68x             |    0.00543213 |
+--------------------------+-------------+-------------------+---------------+
| FP8 Static               |     1363.75 | 1.86x             |    0.00546265 |
+--------------------------+-------------+-------------------+---------------+
| FP8 Weight Only          |     2774.22 | 0.92x             |    0.00379944 |
+--------------------------+-------------+-------------------+---------------+
| FP8 Dynamic AxisWise     |     1510.82 | 1.68x             |    0.00543213 |
+--------------------------+-------------+-------------------+---------------+
| FP8 Static AxisWise      |     1438.92 | 1.77x             |    0.00546265 |
+--------------------------+-------------+-------------------+---------------+
| FP8 Weight Only AxisWise |     2762.88 | 0.92x             |    0.00379944 |
+--------------------------+-------------+-------------------+---------------+
```

### Numerics

Evaluated using pytorch/ao#446.

TensorWise Dynamic scaling:

```Shell
+------------+-----------------------------+----------+
| Task       | Metric                      |    Value |
+============+=============================+==========+
| winogrande | acc,none                    | 0.735596 |
|            | acc_stderr,none             | 0.012395 |
+------------+-----------------------------+----------+
| wikitext   | bits_per_byte,none          | 0.538637 |
|            | bits_per_byte_stderr,none   | N/A      |
|            | byte_perplexity,none        | 1.452600 |
|            | byte_perplexity_stderr,none | N/A     |
|            | word_perplexity,none        | 7.363215 |
|            | word_perplexity_stderr,none | N/A      |
+------------+-----------------------------+----------+
```

AxisWise Dynamic scaling:

```Shell
+------------+-----------------------------+----------+
| Task       | Metric                      |    Value |
+============+=============================+==========+
| winogrande | acc,none                    | 0.735596 |
|            | acc_stderr,none             | 0.012395 |
+------------+-----------------------------+----------+
| wikitext   | bits_per_byte,none          | 0.538637 |
|            | bits_per_byte_stderr,none   | N/A      |
|            | byte_perplexity,none        | 1.452600 |
|            | byte_perplexity_stderr,none | N/A      |
|            | word_perplexity,none        | 7.363215 |
|            | word_perplexity_stderr,none | N/A      |
+------------+-----------------------------+----------+
```

[ghstack-poisoned]
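For context on the TensorWise vs. AxisWise distinction benchmarked above, here is a minimal sketch of how the two scale-computation strategies differ for float8 quantization. This is illustrative plain Python, not the float8_experimental API; the function names and the use of the E4M3 max value (448.0) are assumptions for the sketch.

```python
# Illustrative sketch: tensor-wise vs. row-wise (axis-wise) fp8 scaling.
# E4M3's largest representable magnitude is 448.0.
E4M3_MAX = 448.0

def tensorwise_scale(matrix):
    """One scale for the whole tensor, from the global abs-max."""
    amax = max(abs(v) for row in matrix for v in row)
    return E4M3_MAX / amax

def rowwise_scales(matrix):
    """One scale per row, from each row's abs-max.

    Rows with small magnitudes get larger scales, so quantization
    error is not dominated by one outlier row elsewhere in the tensor.
    """
    return [E4M3_MAX / max(abs(v) for v in row) for row in matrix]

m = [[0.5, -2.0], [0.125, -0.25]]
print(tensorwise_scale(m))  # 224.0  (448 / 2.0)
print(rowwise_scales(m))    # [224.0, 1792.0]
```

The second row's larger scale (1792.0 vs. the shared 224.0) is exactly the extra precision AxisWise scaling buys for rows with small dynamic range.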
1 parent 39e0ad1 commit 0ec7ada

File tree: 1 file changed (+3, −0 lines)


float8_experimental/inference.py

Lines changed: 3 additions & 0 deletions
```diff
@@ -140,6 +140,9 @@ def forward(self, input: torch.Tensor) -> torch.Tensor:
             *original_m, -1
         )

+    def extra_repr(self):
+        return f"{super().extra_repr()},activation_casting={self.activation_casting.name},scaling_granularity={self.scaling_granularity.name}"
+
     # Builder functions for Float8LinearInference
     def quantize_weight(self, dtype: torch.dtype = e4m3_dtype) -> None:
         """This functions converts the weight to a Float8Tensor and sets its requires_grad to False.
```
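The diff above overrides `extra_repr` so that printing the module surfaces its casting and granularity configuration via the enums' `.name`. The following sketch mirrors that pattern with stand-in classes; `ActivationCasting`, `ScalingGranularity`, and `FakeLinear` here are illustrative substitutes, not the actual float8_experimental types.

```python
# Sketch of the extra_repr pattern: embed enum config in a module's repr
# by interpolating the enum members' .name attributes.
from enum import Enum

class ActivationCasting(Enum):
    DYNAMIC = 1
    STATIC = 2

class ScalingGranularity(Enum):
    TensorWise = 1
    AxisWise = 2

class FakeLinear:
    """Stand-in for the inference linear module in the diff."""
    def __init__(self, casting, granularity):
        self.activation_casting = casting
        self.scaling_granularity = granularity

    def extra_repr(self):
        # Mirrors the structure of the new extra_repr in the diff.
        return (f"activation_casting={self.activation_casting.name},"
                f"scaling_granularity={self.scaling_granularity.name}")

m = FakeLinear(ActivationCasting.DYNAMIC, ScalingGranularity.AxisWise)
print(m.extra_repr())
# activation_casting=DYNAMIC,scaling_granularity=AxisWise
```

In `torch.nn.Module`, this string is appended inside the parentheses of the module's `repr`, so the scaling configuration shows up when printing a model.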
