skip dummy inference and run_shape_analysis #3212
Conversation
Here's what I think could be a simpler way of doing this:

The node corresponding to the TRT module can be found with `trt_module_node = [node for node in gm.graph.nodes if node.name == "_run_on_acc0"]`. `trt_module_node.meta["val"]` should already contain the fake tensors that need to be used in the exporter, so we can set `trt_node.meta["val"] = trt_module_node.meta["val"]`.

Replacing the dummy inference will also require changes to our converter test suite.
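The suggested lookup-and-copy can be sketched as follows. This is a minimal illustration using plain dataclasses as stand-ins for `torch.fx` nodes (real nodes expose `.name` and a `.meta` dict); the node name `_run_on_acc0` and the helper `propagate_val` are from this discussion and hypothetical, respectively:

```python
from dataclasses import dataclass, field

# Stand-in for torch.fx.Node: real nodes also carry .name and a .meta dict.
@dataclass
class Node:
    name: str
    meta: dict = field(default_factory=dict)

def propagate_val(graph_nodes, src_name, dst_node):
    """Copy meta['val'] (the fake tensors) from the node named src_name
    onto dst_node, instead of running a dummy inference to produce it."""
    matches = [n for n in graph_nodes if n.name == src_name]
    if not matches:
        raise KeyError(f"no node named {src_name!r} in graph")
    dst_node.meta["val"] = matches[0].meta["val"]
    return dst_node

# Example: the accelerated submodule node already holds fake-tensor metadata.
nodes = [Node("_run_on_acc0", {"val": ("fake_tensor",)}), Node("other")]
trt_node = Node("trt_node")
propagate_val(nodes, "_run_on_acc0", trt_node)
```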
Besides checking the `val` field in `node.meta`, also check `tensor_meta` if `val` is not present. Apparently, torch.compile stores metadata in `tensor_meta`, while export uses the `val` field in `node.meta`.
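The fallback described above could look like this. A minimal sketch, assuming the node's metadata is a plain dict keyed by `val` and `tensor_meta` as in `torch.fx`; the helper name `get_output_metadata` is hypothetical:

```python
def get_output_metadata(node_meta: dict):
    """Return shape/dtype metadata from a node's meta dict.

    torch.export populates node.meta['val']; torch.compile may instead
    record the information under node.meta['tensor_meta'], so fall back
    to that key when 'val' is absent.
    """
    if "val" in node_meta:
        return node_meta["val"]
    if "tensor_meta" in node_meta:
        return node_meta["tensor_meta"]
    raise ValueError("node has neither 'val' nor 'tensor_meta' metadata")

# Export-style node: metadata lives under 'val'.
export_meta = {"val": "fake_tensor_from_export"}
# Compile-style node: metadata lives under 'tensor_meta'.
compile_meta = {"tensor_meta": "tensor_meta_from_compile"}
```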
Added minor comments. Mostly LGTM.
LGTM
Description
There are two changes introduced in this PR:
- During the compile stage: skip the dummy inference and use graph inspection instead to get `output_node.meta['val']`.
- During the save stage: skip `run_shape_analysis` and use graph inspection instead to get `output_node.meta['val']`.
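The graph inspection used by both changes can be sketched as below. This is a minimal stand-in rather than the actual Torch-TensorRT code: real `torch.fx` nodes expose `.op` and `.meta`, and the function name `output_vals_from_graph` is hypothetical:

```python
from types import SimpleNamespace

def output_vals_from_graph(graph_nodes):
    """Instead of running a dummy inference (or run_shape_analysis), locate
    the graph's output node and read the values recorded in its metadata."""
    output_node = next(n for n in graph_nodes if n.op == "output")
    return output_node.meta["val"]

# SimpleNamespace objects mimic torch.fx nodes with .op and .meta attributes.
nodes = [
    SimpleNamespace(op="call_module", meta={}),
    SimpleNamespace(op="output", meta={"val": ("shape_and_dtype_info",)}),
]
vals = output_vals_from_graph(nodes)
```

Reading the metadata directly avoids materializing inputs and executing the module just to discover output shapes, which is the motivation for both changes.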
Fixes # (issue)
Type of change
Please delete options that are not relevant and/or add your own.
Checklist: