[OLD, DO NOT LAND] Support NVFP4 dynamic per tensor scale #3043
Summary: This commit adds an option to the existing `NVFP4InferenceConfig` to dynamically compute an appropriate fp32 per tensor scale, supporting two level scaling according to the NVFP4 specification: https://developer.nvidia.com/blog/introducing-nvfp4-for-efficient-and-accurate-low-precision-inference/.
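For reference, the spec derives the fp32 per tensor scale from the tensor's global amax so that the per block E4M3 scales and fp4 values together cover the full dynamic range. A minimal sketch of that computation (the helper name is illustrative, not the exact function added in this PR):

```python
import torch

# NVFP4 range constants: E4M3 block scales max out at 448.0,
# E2M1 (fp4) element values max out at 6.0.
F8E4M3_MAX = 448.0
F4E2M1_MAX = 6.0

def nvfp4_dynamic_per_tensor_scale(x: torch.Tensor) -> torch.Tensor:
    # Global amax in fp32; dividing by the resulting scale keeps the
    # per-block E4M3 scales within their representable range.
    amax = x.abs().amax().to(torch.float32)
    return amax / (F8E4M3_MAX * F4E2M1_MAX)
```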
While two level scaling is supported in `NVFP4Tensor`, today there is no config API for users to call this. The existing `NVFP4InferenceConfig` only supports single level scaling, because including an explicit `per_tensor_scale` field would make serialization tricky.

In the future, we should add an end-to-end calibration flow so users can first compute an appropriate per tensor scale for the activations and then pass it to `NVFP4Tensor` as a static scale, similar to the proposal in #2572.

Test Plan:
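A quick way to exercise the new option might look like the following (the `use_dynamic_per_tensor_scale` field name and the import path are assumptions for illustration; see the diff for the actual API):

```python
import torch
from torchao.quantization import quantize_
# Import path assumed; the config may live elsewhere in torchao.
from torchao.prototype.mx_formats import NVFP4InferenceConfig

model = torch.nn.Sequential(
    torch.nn.Linear(128, 256, bias=False)
).to(torch.bfloat16).cuda()

# Hypothetical flag enabling the dynamic fp32 per tensor scale.
config = NVFP4InferenceConfig(use_dynamic_per_tensor_scale=True)
quantize_(model, config)

out = model(torch.randn(32, 128, dtype=torch.bfloat16, device="cuda"))
```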
Also did a quick benchmark before and after:
On a single B200: