Create export_edge_to_executorch, call export_to_executorch in testing flow, and call print_ops_info in export_to_executorch (#3863)
Summary:
Pull Request resolved: #3863
The current setup is unbalanced, especially because we should be making better use of the `print_ops_info` function. Today it is not called in the entry points most people would use on Bento, for example.
For one-liner compilation (e.g., a single call to `export_to_executorch`), that information should be printed.
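As a rough illustration of the intended one-liner flow (the import path and exact signature of `export_to_executorch` below are assumptions for the sketch, not the PR's verbatim API):

```python
import torch
import torch.nn as nn

# Assumed import path for the one-liner entry point discussed above.
from executorch.backends.cadence.aot.compiler import export_to_executorch


class SmallModel(nn.Module):
    def forward(self, x):
        return torch.relu(x + 1.0)


model = SmallModel().eval()
inputs = (torch.randn(2, 4),)

# A single call compiles the model down to ExecuTorch; after this diff,
# it also prints the per-operator breakdown via print_ops_info.
exec_prog = export_to_executorch(model, inputs)
```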
This diff refactors the APIs in both `__init__.py` and `utils.py` so that the breakdown of responsibilities makes more sense. Arguably it should be a stack of diffs, but the changes mostly go hand in hand IMO.
Main changes:
- Create an `export_edge_to_executorch` API that takes an `EdgeProgramManager`. This is useful because we want to keep the edge graph module around so we can pass it to `print_ops_info`.
- Call `print_ops_info` in `export_to_executorch`.
- Call `export_to_executorch` in `run_and_verify`, reusing the exported module.
- Introduce a `model_is_quantized()` util so that the right API is called when putting models in eval mode. Checking for the `GraphModule` type alone is not robust, since a model can be a `GraphModule` without being quantized; in that case, we assert that it has already been exported, which makes the `eval()` requirement moot. (See the sketch after this list.)
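A minimal sketch of how the refactored pieces could fit together. The helper `export_to_edge`, the `print_ops_info` signature, and the body of `model_is_quantized()` are assumptions based on this summary, not the actual diff:

```python
import torch
from torch.export import export
from executorch.exir import EdgeProgramManager, ExecutorchProgramManager, to_edge

# Assumed location of the op-breakdown printer referenced above.
from executorch.backends.cadence.aot.utils import print_ops_info


def export_to_edge(model: torch.nn.Module, inputs: tuple) -> EdgeProgramManager:
    # Sketch using the public exir API: torch.export, then to_edge.
    return to_edge(export(model, inputs))


def export_edge_to_executorch(
    edge_prog: EdgeProgramManager,
) -> ExecutorchProgramManager:
    # Lower an already-built edge program. Keeping this as a separate
    # step lets the caller hold on to the edge graph module, which is
    # what print_ops_info needs for the per-op breakdown.
    return edge_prog.to_executorch()


def export_to_executorch(
    model: torch.nn.Module, inputs: tuple
) -> ExecutorchProgramManager:
    # One-liner entry point: export to edge, lower to ExecuTorch, and
    # print the op breakdown as a side effect.
    edge_prog = export_to_edge(model, inputs)
    exec_prog = export_edge_to_executorch(edge_prog)
    # Assumed signature: compare ops in the edge graph vs. the final graph.
    print_ops_info(
        edge_prog.exported_program().graph_module,
        exec_prog.exported_program().graph_module,
    )
    return exec_prog


def model_is_quantized(model: torch.nn.Module) -> bool:
    # Heuristic sketch: a PT2E-converted model is a GraphModule whose
    # graph contains quantize/dequantize calls, so checking for those
    # ops is more robust than checking the GraphModule type alone.
    if not isinstance(model, torch.fx.GraphModule):
        return False
    return any(
        node.op == "call_function" and "quantize" in str(node.target)
        for node in model.graph.nodes
    )
```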
Differential Revision: D58101124