
XLA (TPU) Support #36

Open

aidangomez opened this issue Jul 11, 2019 · 3 comments

@aidangomez

Hey there,

I was wondering: are there plans to support TPUs in the future via XLA kernels?

@scott-gray
Contributor

That would be up to Google to support; they don't expose a programmable ISA externally. It's also unclear whether they could support a block size below 128x128 (the TPU's matrix unit is built around 128x128 tiles).
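For context, blocksparse itself operates on much smaller blocks (e.g. 32x32). A minimal numpy sketch of the block-sparse matmul idea — the function name and the `layout` convention here are hypothetical, not blocksparse's actual API:

```python
import numpy as np

def block_sparse_matmul(x, w, layout, block=32):
    """Multiply x @ w, but only for weight blocks where layout == 1.

    x: (m, k) dense input
    w: (k, n) dense weights; blocks with layout == 0 are skipped entirely
    layout: (k // block, n // block) 0/1 block mask (hypothetical convention)
    """
    kb, nb = layout.shape
    out = np.zeros((x.shape[0], nb * block))
    for i in range(kb):
        for j in range(nb):
            if layout[i, j]:
                out[:, j*block:(j+1)*block] += (
                    x[:, i*block:(i+1)*block]
                    @ w[i*block:(i+1)*block, j*block:(j+1)*block])
    return out

rng = np.random.default_rng(0)
layout = (rng.random((4, 4)) < 0.5).astype(int)   # ~half of the 32x32 blocks active
x, w = rng.standard_normal((8, 128)), rng.standard_normal((128, 128))
y = block_sparse_matmul(x, w, layout)             # (8, 128)
```

The hardware question is whether each of those small block-by-block products can be scheduled efficiently on a matrix unit that wants much larger tiles.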

@aidangomez
Author

Would tf2xla not be enough to work with? https://github.com/tensorflow/tensorflow/tree/master/tensorflow/compiler/tf2xla/kernels

Looks as though they expose a fair amount there, although I'm not familiar enough with your block-sparse implementations to say what you'd require.

@scott-gray
Contributor

XLA is great for lighter-weight primitives like element-wise ops, reductions, and broadcasts. But blocksparse matmul/transformer is much more akin to a convolution primitive, and you can see that XLA implements convolution by calling into a lower-level op (e.g. ConvGeneralDilated) written more directly against their unexposed ISA.
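For illustration, that lower-level op is visible from Python in JAX, which compiles to the same XLA HLOs — a minimal example (shapes chosen arbitrarily):

```python
import jax.numpy as jnp
from jax import lax

x = jnp.ones((1, 3, 32, 32))   # input, NCHW
w = jnp.ones((8, 3, 3, 3))     # kernel, OIHW

# lax.conv_general_dilated lowers to the ConvGeneralDilated HLO;
# how XLA implements that HLO on TPU is internal to the backend.
y = lax.conv_general_dilated(
    x, w,
    window_strides=(1, 1),
    padding='SAME',
    dimension_numbers=('NCHW', 'OIHW', 'NCHW'))
print(y.shape)  # (1, 8, 32, 32)
```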

That being said, this stuff isn't all that hard to express in Python/numpy and should be straightforward to reimplement: https://github.com/openai/blocksparse/blob/master/blocksparse/transformer.py#L186-L305
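As a rough illustration of the idea in that linked code — this is a hedged numpy sketch of block-sparse attention, not the actual implementation in transformer.py; the names and layout convention are made up:

```python
import numpy as np

def block_sparse_attention(q, k, v, layout, block=32):
    """Attention where only (query block, key block) pairs with layout == 1
    are ever computed; everything else stays -inf before the softmax.

    q, k, v: (seq, d); layout: (seq // block, seq // block) 0/1 mask.
    Assumes every row block has at least one active block (e.g. a causal
    layout); otherwise the softmax below would produce NaNs.
    """
    nb = layout.shape[0]
    scores = np.full((q.shape[0], k.shape[0]), -np.inf)
    for i in range(nb):
        for j in range(nb):
            if layout[i, j]:
                qi = q[i*block:(i+1)*block]
                kj = k[j*block:(j+1)*block]
                scores[i*block:(i+1)*block, j*block:(j+1)*block] = (
                    qi @ kj.T / np.sqrt(q.shape[1]))
    scores -= scores.max(axis=1, keepdims=True)   # numerically stable softmax
    probs = np.exp(scores)                        # exp(-inf) == 0: masked out
    probs /= probs.sum(axis=1, keepdims=True)
    return probs @ v

seq, d = 128, 64
layout = np.tril(np.ones((seq // 32,) * 2, dtype=int))  # causal block layout
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((seq, d)) for _ in range(3))
out = block_sparse_attention(q, k, v, layout)           # (128, 64)
```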
