This repository was archived by the owner on Aug 7, 2024. It is now read-only.

Speedup sync_float8_amax_and_scale_history #119

Closed
@drisspg

Description

Summary

Once per training iteration, we need to sync the amax_history and update the scales for all float8 linear modules. Currently this is done by iterating over all child modules, finding the float8 linears, and syncing each one. The screenshot below shows the eager-mode performance of calling this on one linear_fp8_module.

Code Pointer:

def sync_float8_amax_and_scale_history(model: torch.nn.Module) -> None:

[Screenshot (2023-10-04): eager-mode profile of sync_float8_amax_and_scale_history on one linear_fp8_module]
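
For context, here is a minimal sketch of where this call sits in a training step. The import paths, the swap helper, and the toy model are assumptions for illustration; only the once-per-iteration `sync_float8_amax_and_scale_history(model)` call comes from this issue.

```python
import torch
import torch.nn as nn

# Assumed import paths for this repo; only the sync call itself is taken from the issue.
from float8_experimental.float8_linear import Float8Linear
from float8_experimental.float8_linear_utils import (
    swap_linear_with_float8_linear,
    sync_float8_amax_and_scale_history,
)

# Toy model for illustration; swap its nn.Linear children for float8 linears.
model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 1024)).cuda()
swap_linear_with_float8_linear(model, Float8Linear)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

for _ in range(10):
    x = torch.randn(64, 1024, device="cuda")
    optimizer.zero_grad()
    model(x).sum().backward()
    # Once per iteration: walk the child modules, find the float8 linears,
    # and sync their amax histories / update scales (the call profiled above).
    sync_float8_amax_and_scale_history(model)
    optimizer.step()
```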

I also tried compiling just this function to see whether the overhead could be reduced.
[Screenshot (2023-10-04): profile of the torch.compile'd sync function]
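
Roughly, the compile-only-this-function experiment looks like the sketch below (import path assumed, as above):

```python
import torch
from float8_experimental.float8_linear_utils import sync_float8_amax_and_scale_history  # assumed path

# Compile only the sync helper; the rest of the training step stays eager.
compiled_sync = torch.compile(sync_float8_amax_and_scale_history)

# Inside the training loop, replace the eager call with:
# compiled_sync(model)
```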

I also tried compiling this function with mode="reduce-overhead", which raised the warning `skipping cudagraphs due to ['mutated inputs']`.
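
A sketch of that variant, for reference (same assumed import as above); the cudagraphs skip is presumably because the sync mutates the modules' amax/scale buffers in place:

```python
import torch
from float8_experimental.float8_linear_utils import sync_float8_amax_and_scale_history  # assumed path

# mode="reduce-overhead" enables cudagraphs; inductor skips them here and warns
# "skipping cudagraphs due to ['mutated inputs']".
compiled_sync = torch.compile(sync_float8_amax_and_scale_history, mode="reduce-overhead")
```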

Labels: Perf (Issues related to perf optimizations)