Fix PyTorch matmul conversion when given (2-dim, N-dim) input pair #7845
Conversation
Update repo
Looks good. Let's add some tests in tests/python/frontend/pytorch.
Overall LGTM, mostly nits. Could we add a unit test illustrating when we expect a dense and when we expect a batch_matmul?
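The dispatch being discussed can be sketched as follows. This is a hypothetical illustration, not TVM's actual converter code: the function name `choose_matmul_op` and the exact conditions are assumptions based on the PR description (a 2-dim operand paired with an N-dim operand can be lowered to a single dense instead of batch_matmul).

```python
# Hypothetical sketch of the shape-based dispatch this PR implements.
# Not the real TVM converter; names and conditions are illustrative.
def choose_matmul_op(a_ndim, b_ndim):
    """Return which Relay-style op a PyTorch matmul would lower to."""
    if a_ndim <= 2 and b_ndim <= 2:
        return "dense"
    if a_ndim == 2 or b_ndim == 2:
        # One operand is a plain matrix: the N-dim side can be reshaped
        # to 2-D, so a single dense suffices instead of batch_matmul.
        return "dense"
    return "batch_matmul"

print(choose_matmul_op(2, 3))  # the case this PR fixes
print(choose_matmul_op(3, 3))
```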
LGTM. Comments are nits; feel free to ignore them if there are no other comments to address.
@@ -162,7 +162,7 @@ def measure_latency(model, input_shapes, output_shapes, thresh, dryruns=40):
     return est


-def verify_model(model_name, input_data=[], custom_convert_map={}, rtol=1e-5, atol=1e-5):
+def verify_model(model_name, input_data=[], custom_convert_map={}, rtol=1e-5, atol=1e-5, expected_ops=[]):
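A minimal sketch of how an `expected_ops` check could work: after conversion, assert that each expected operator name appears in a textual dump of the module. The helper name `assert_ops_present` and the sample IR string are made up for illustration; the real test would inspect the Relay module produced by `relay.frontend.from_pytorch`.

```python
# Hypothetical helper: verify that every expected operator name occurs
# in the textual dump of a compiled module. The sample IR below is a
# hand-written stand-in for a real Relay text dump.
def assert_ops_present(ir_text, expected_ops):
    missing = [op for op in expected_ops if op not in ir_text]
    assert not missing, f"ops not found in module: {missing}"

sample_ir = '%0 = nn.dense(%x, %w); %1 = reshape(%0, newshape=[3, 4, 6])'
assert_ops_present(sample_ir, ["nn.dense", "reshape"])
```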
Good addition!
Please resolve the rebase conflict.
@wweic it seems there is no conflict with the main branch. Please feel free to merge it.
…pache#7845)
* [AutoScheduler] Fix incorrectly array context device and hide info at the beginning
* Lint fix
* Lint fix
* update repo
* Fix Pytorch matmul conversion when given (2-dim, N-dim) input pair
* update measure.py
* Lint fix
* fix bug && add ut for pytorch matmul
* update ut
* Lint fix
* update commit
* Lint fix
This PR changes the matmul conversion in PyTorch when given a (2-dim, N-dim) input pair (N > 2).
The original implementation converted [2-dim matrix * 3-dim tensor] into a [batch_matmul]; however, this case can be handled by a single [matmul] of [2-dim matrix * 2-dim matrix] after reshaping the 3-dim tensor into a 2-dim matrix.
cc @jcf94
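The equivalence the PR relies on can be demonstrated numerically. The sketch below (NumPy, arbitrary example shapes) shows that for a 2-dim `A` of shape (m, k) and a 3-dim `B` of shape (b, k, n), the broadcasted matmul `A @ B` matches a single 2-dim matmul after folding the batch axis of `B` into its last axis, so no batch_matmul is needed.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 5))     # 2-dim matrix, shape (m, k)
B = rng.standard_normal((3, 5, 6))  # 3-dim tensor, shape (b, k, n)

# Reference: PyTorch/NumPy broadcasted matmul -> shape (b, m, n)
batched = np.matmul(A, B)

# Single dense: move the contraction axis k first, fold (b, n) together,
# do one 2-dim matmul, then restore the (b, m, n) layout.
B2 = B.transpose(1, 0, 2).reshape(5, 3 * 6)            # (k, b*n)
dense = (A @ B2).reshape(4, 3, 6).transpose(1, 0, 2)   # (b, m, n)

assert batched.shape == (3, 4, 6)
assert np.allclose(batched, dense)
```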