
[torch] Update torch.bmm to use accumulator type #3924


Merged: 4 commits into llvm:main on Dec 19, 2024

Conversation

rsuderman
Contributor

Batch matmul was using the result type as the accumulator. Updated to use the preferred accumulator based on input type.
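For illustration, a minimal sketch of the idea (the names below are hypothetical, not torch-mlir's actual helpers): the accumulator element type is derived from the input element type instead of being taken from the result type, so narrow inputs still accumulate at higher precision.

```python
# Hypothetical sketch, not torch-mlir's real code: map an input element type
# to its preferred accumulator type.
PREFERRED_ACC = {
    "f16": "f32",
    "bf16": "f32",
    "i8": "i32",
}

def preferred_accumulator_type(input_elem_type: str) -> str:
    """Return the preferred accumulator type for a batch matmul input type."""
    # Narrow types accumulate in a wider type; everything else accumulates
    # in its own type.
    return PREFERRED_ACC.get(input_elem_type, input_elem_type)

# f16 inputs accumulate in f32; f32 inputs keep accumulating in f32.
assert preferred_accumulator_type("f16") == "f32"
assert preferred_accumulator_type("f32") == "f32"
```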

@MaheshRavishankar
Contributor

Do we need a test?

@rsuderman
Contributor Author

> Do we need a test?

There should be an existing test. Catching the f16 vs f32 numerical difference in the accumulator would require a much larger matmul than we typically include in our e2e tests. For torch-mlir we only do numerical evaluations against the torch backend.
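As a rough illustration of that point (plain NumPy, not one of the e2e tests), the gap between an f16 accumulator and an f32 accumulator only becomes visible once the reduction dimension is large:

```python
# Sketch: the error from accumulating in f16 instead of f32 is negligible for
# a small reduction but noticeable for a large one.
import numpy as np

rng = np.random.default_rng(0)
for k in (16, 8192):
    a = rng.standard_normal(k).astype(np.float16)
    b = rng.standard_normal(k).astype(np.float16)

    # Reference: accumulate the dot product in f32.
    ref = np.float32(0.0)
    for x, y in zip(a, b):
        ref += np.float32(x) * np.float32(y)

    # Simulated f16 accumulator: round the running sum back to f16 each step.
    acc = np.float16(0.0)
    for x, y in zip(a, b):
        acc = np.float16(acc + x * y)

    print(f"k={k:5d}  |f32 acc - f16 acc| = {abs(float(ref) - float(acc)):.4f}")
```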

rsuderman merged commit 061bbc5 into llvm:main on Dec 19, 2024
3 checks passed
rahuls-cerebras added a commit that referenced this pull request Jan 3, 2025