This repository was archived by the owner on Aug 7, 2024. It is now read-only.
Top Level Torch Compile Issue Tracker #195
Closed
Description
Summary
This issue tracks the various problems encountered when using torch.compile with Float8Tensor.
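For context, a minimal sketch of the pattern these issues exercise: a `__torch_dispatch__` wrapper subclass passed into a `torch.compile`'d function, so the subclass sits at the graph input boundary. `ScaledTensor` below is a toy stand-in, not the real `Float8Tensor`; the unwrapping logic, hook signatures, and `backend="eager"` choice are illustrative assumptions, and depending on the PyTorch version this may run cleanly or surface exactly the failures tracked below.

```python
import torch
from torch.utils._pytree import tree_map_only


class ScaledTensor(torch.Tensor):
    """Toy stand-in for a Float8Tensor-style wrapper: a payload plus a scale."""

    @staticmethod
    def __new__(cls, data, scale):
        return torch.Tensor._make_wrapper_subclass(
            cls, data.shape, dtype=data.dtype, device=data.device
        )

    def __init__(self, data, scale):
        self._data = data
        self._scale = scale

    # Flatten/unflatten hooks let dynamo/AOTAutograd see the inner tensors
    # (signatures as on recent PyTorch versions).
    def __tensor_flatten__(self):
        return ["_data", "_scale"], None

    @staticmethod
    def __tensor_unflatten__(inner_tensors, ctx, outer_size, outer_stride):
        return ScaledTensor(inner_tensors["_data"], inner_tensors["_scale"])

    @classmethod
    def __torch_dispatch__(cls, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}

        # Unwrap to plain tensors and run the op; a real Float8Tensor would
        # dequantize/rescale here instead of this toy multiply.
        def unwrap(t):
            return t._data * t._scale

        args = tree_map_only(ScaledTensor, unwrap, args)
        kwargs = tree_map_only(ScaledTensor, unwrap, kwargs)
        return func(*args, **kwargs)


# On some PyTorch versions the subclass must also be registered for tracing:
# torch._dynamo.config.traceable_tensor_subclasses.add(ScaledTensor)

@torch.compile(backend="eager")
def fn(x, y):
    # The subclass crosses the compiled-graph boundary as an input here.
    return torch.mm(x, y)


a = ScaledTensor(torch.randn(4, 4), torch.tensor(2.0))
b = torch.randn(4, 4)
print(fn(a, b))
```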
Float8Experimental
Sub-Issues/Pull Requests
- PR to add subclass at graph_boundary: Add tests for Float8Tensor at graph boundaries #196
- ISSUE: Use Activation Hooks failing with AotAutograd for dynamic linear #223
- PR: The min cut partitioner does not want to recompute the weight cast: Enable fp8_weight recomputation during backwards pass #185
PyTorch Core
Sub-Issues/Pull Requests
- ISSUE: request: torch.compile support for tensor subclass subgraph boundaries pytorch/pytorch#117115
- ISSUE: Min Cut Partitioner Issue with float8_experimental pytorch/pytorch#117901
- ISSUE: dynamo <> autograd.Function assumes that grad_outputs have the same subclass-ness as forward outputs pytorch/pytorch#117662
- ISSUE: torch.compile + __torch_dispatch__ subclasses: support custom attribute accesses on intermediates pytorch/pytorch#117596
- ISSUE: dynamo + autograd.Function: dynamo doesn't model multiple ctx.save_for_backward calls pytorch/pytorch#117652 (not blocking, I forgot to push my changes to the PR; a minimal sketch of the pattern follows this list)
- PR: dynamo: support attribute access on tensor subclasses without sources pytorch/pytorch#117666
- PR: dynamo: respect autograd.Function + multiple save_for_backward calls pytorch/pytorch#117667
- ISSUE: Dynamo codegen for mapping outputs back to user variables should contain safety checks (no fake tensors, no variable trackers) pytorch/pytorch#118211
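For reference, a minimal sketch of the autograd.Function pattern from pytorch/pytorch#117652 / #117667: a forward that calls ctx.save_for_backward more than once (in eager the later call overwrites the earlier one), run under torch.compile. The function name, backend choice, and shapes are illustrative assumptions; whether this runs cleanly or reproduces the mis-modeling depends on whether the linked PR has landed in your PyTorch build.

```python
import torch


class Scale(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, w):
        # Two separate save_for_backward calls; in eager the second call
        # overwrites the first, and that is the behavior dynamo has to model.
        ctx.save_for_backward(x)
        ctx.save_for_backward(x, w)
        return x * w

    @staticmethod
    def backward(ctx, grad_out):
        x, w = ctx.saved_tensors
        return grad_out * w, grad_out * x


@torch.compile(backend="aot_eager")
def fn(x, w):
    return Scale.apply(x, w)


x = torch.randn(4, requires_grad=True)
w = torch.randn(4, requires_grad=True)
fn(x, w).sum().backward()
print(x.grad, w.grad)
```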