Better Shape Function Registration #3237
This pull request was exported from Phabricator. Differential Revision: D64147797
Summary:

X-link: facebookresearch/FBGEMM#338

With the increasing use of `torch.compile` both internally and in open source, I wanted to take some time to think about how best to make our FBGEMM custom ops compile-compatible. This diff implements a testable and scalable way to register custom shape functions easily.

`torch.compile` requires that custom operators have shape functions registered, as they are needed for tracing. In FBGEMM, we have registered such shape functions inconsistently. This diff cleans up and registers shape functions for a good number of commonly used operators.

Notably, PyTorch allows two methods of registering shape functions for custom ops: in C++, you can use a Meta function, or in Python you can use `register_fake`. It turns out `register_fake` is the recommended and more powerful approach. For example, it is needed for ops that cross devices (such as the car ops) and when exporting a traced graph. This diff therefore focuses on the `register_fake` method and converts a handful of Meta registrations to it. My hope is that this provides an easily extensible way for other kernel authors to register shape functions.

Reviewed By: jianyuh, jiawenliu64

Differential Revision: D64147797
This pull request has been merged in 4ba523c.