Create export_edge_to_executorch, call export_to_executorch in testing flow, and call print_ops_info in export_to_executorch (#3863)

Summary:
Pull Request resolved: #3863

The current setup is unbalanced in the way we call the different APIs, especially because we would love to (and should) make better use of the `print_ops_info` function. Currently it is not invoked in the calls most people would make, on Bento for example. For one-liner compilation (e.g. calling `export_to_executorch`), the op information should be printed. `export_to_executorch` should also be called in the testing flow.

This diff refactors the APIs in `__init__.py` and `utils.py` so that the breakdown makes more sense. Arguably it should be a stack of diffs, but it mostly all goes hand in hand IMO, so I kept it as one.

Main changes:
- create an `export_edge_to_executorch` API that takes an `EdgeProgramManager`. This is useful because we want to keep the edge graph module around to pass it to `print_ops_count`, and we can now use it in `export_to_executorch` (see next point).
- call `print_ops_info` in `export_to_executorch`, now that the edge graph is exposed there.
- call `export_to_executorch` in `run_and_verify`, using the exported module. This required changing the checks for `eval()` mode, see next point.
- introduce a `model_is_quantized()` util to call the right API when putting models in eval mode. The check on the `GraphModule` type is not robust enough, since other models could be `GraphModule`s without being quantized. In that case, we assert that they have already been exported, which makes the `eval()` requirement moot.

Reviewed By: dulinriley, zonglinpengmeta

Differential Revision: D58101124

fbshipit-source-id: 9822411c2a832d539c96ab61aff99586da206d01
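The refactored call flow described above can be sketched roughly as follows. This is a minimal illustration only: `EdgeProgramManager`, `print_ops_info`, and the helper bodies below are simplified stand-ins for the real ExecuTorch APIs (which live in `executorch.exir` and this project's `utils.py`), and their actual signatures and behavior differ.

```python
# Hedged sketch of the refactored flow: export_to_executorch builds the
# edge program, prints op info while the edge graph is still available,
# then lowers via the new edge-level API. All names below are stand-ins.

class EdgeProgramManager:
    """Stand-in for the real executorch.exir.EdgeProgramManager."""

    def __init__(self, edge_graph: str):
        self.edge_graph = edge_graph

    def to_executorch(self) -> str:
        # The real method lowers the edge dialect program to ExecuTorch.
        return f"executorch_program({self.edge_graph})"


def print_ops_info(edge_graph: str) -> None:
    # The real function prints a per-operator breakdown of the edge graph.
    print(f"ops in {edge_graph}: ...")


def export_edge_to_executorch(edge_manager: EdgeProgramManager) -> str:
    # New API: takes an EdgeProgramManager, so callers keep the edge
    # graph around (e.g. to pass it to the op-info printer) instead of
    # losing it inside a single end-to-end call.
    return edge_manager.to_executorch()


def export_to_executorch(model_name: str) -> str:
    # One-liner compilation: now also reports op info, since the edge
    # graph is exposed here before lowering.
    edge_manager = EdgeProgramManager(edge_graph=f"edge({model_name})")
    print_ops_info(edge_manager.edge_graph)
    return export_edge_to_executorch(edge_manager)


if __name__ == "__main__":
    print(export_to_executorch("my_model"))
```

The design point is the split: `export_edge_to_executorch` owns only the edge-to-ExecuTorch step, while `export_to_executorch` composes it with graph construction and reporting, which is why the testing flow can now reuse the one-liner directly.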