Conversation

@masahi masahi (Member) commented Mar 14, 2022

The first two fixes above make it possible to run int8 BERT-base end to end via AutoTVM or cuBLAS on Tensor Cores (the CUTLASS path already works without error).
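For reference, a minimal sketch of the kind of workload these fixes target (the module, shapes, and names below are illustrative stand-ins, not taken from this PR): a single int8 dense op with int32 accumulation, compiled with the cuBLAS path via `-libs=cublas`.

```python
import tvm
from tvm import relay

# Illustrative stand-in for one int8 GEMM inside a quantized BERT-base layer.
data = relay.var("data", shape=(128, 768), dtype="int8")
weight = relay.var("weight", shape=(768, 768), dtype="int8")
# int8 inputs accumulated into int32, as used by the tensorcore/cuBLAS paths.
dense = relay.nn.dense(data, weight, out_dtype="int32")
mod = tvm.IRModule.from_expr(relay.Function([data, weight], dense))

# Offload eligible ops to cuBLAS; drop "-libs=cublas" to use the AutoTVM schedules instead.
target = tvm.target.Target("cuda -libs=cublas")
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target)
```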

@Laurawly @vinx13 @junrushao1994 @comaniac

@masahi masahi force-pushed the cutlass-int8-fix branch from 4879857 to 12989d6 on March 14, 2022 at 06:52
@jwfromm jwfromm (Contributor) left a comment
Those tabs can be dangerous. Thanks for the fix, Masa. LGTM!

@jwfromm jwfromm merged commit 7d5ef84 into apache:main Mar 14, 2022
shingjan pushed a commit to shingjan/tvm that referenced this pull request Mar 16, 2022
* [CUTLASS] avoid tile size 256 for int8 + align1 case

* allow selecting int8 dense strategy for vulkan

* fixed cublas batch matmul for int8

* fixed int8 dense tensorcore strategy

* add cutlass conv align1 + int8 case

* support int8 mixed precision cublas bmm

* black
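One of the commits above adds int8 mixed-precision batch matmul on the cuBLAS path. A hedged sketch of what that covers at the Relay level (shapes are illustrative; "mixed precision" here means int8 inputs with an int32 output):

```python
import tvm
from tvm import relay

# Illustrative shapes: a batch of 12 score matrices, as in multi-head attention.
x = relay.var("x", shape=(12, 128, 64), dtype="int8")
y = relay.var("y", shape=(12, 128, 64), dtype="int8")  # second operand is transposed by default
bmm = relay.nn.batch_matmul(x, y, out_dtype="int32")  # int8 in, int32 out
mod = tvm.IRModule.from_expr(relay.Function([x, y], bmm))

with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="cuda -libs=cublas")
```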
pfk-beta pushed a commit to pfk-beta/tvm that referenced this pull request Apr 11, 2022