[RELAY,TOPI] Add scatter_nd op #6854
Conversation
```diff
@@ -259,10 +259,11 @@ class TypeSolver::Unifier : public TypeFunctor<Type(const Type&, const Type&)> {
   if (mismatches.size() != 0) {
     auto err = Diagnostic::Error(this->span);
     err << "in particular ";
     err << "The Relay type checker is unable to show the following types match.\n";
```
The original error formatting may have been intentional to show some kind of "traceback" style reporting, but I may be wrong. cc @jroesch who has a better idea
Can you try to fix the CI errors? I left some comments, but overall this looks really good to me; +1 on the thorough documentation! For reference, this is based off of MXNet's `scatter_nd`. Do the semantics of MXNet's `scatter_nd` match this implementation?

cc @mbrookhart
MXNet
@antinucleon @altanh In this implementation, repeated indices are summed instead of having non-deterministic behavior. I noted this in the docs for `scatter_nd`.
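A minimal NumPy sketch (not the TVM code) of the accumulation semantics described above: when an index appears more than once, the corresponding updates are summed deterministically rather than overwriting each other in an unspecified order. The indices layout (index depth along the first axis) is an assumption based on this PR's description.

```python
import numpy as np

def scatter_nd_sum(data, indices, shape):
    """Scatter `data` into a zero tensor of `shape`, summing duplicate indices.

    `indices` has shape (M, N): N update locations, each an M-dimensional index.
    """
    out = np.zeros(shape, dtype=data.dtype)
    for j in range(indices.shape[1]):
        idx = tuple(indices[:, j])
        out[idx] += data[j]  # += gives deterministic summing for repeated indices
    return out

data = np.array([1.0, 2.0, 3.0])
indices = np.array([[0, 0, 1]])  # index 0 appears twice
print(scatter_nd_sum(data, indices, (2,)))  # prints [3. 3.]
```

With overwrite semantics the result for position 0 would depend on update order; summing makes the op order-independent, which also matters for a parallel GPU implementation.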
It looks like the gather_nd_grad unit test has a type error?

How hard would it be to extend this to CUDA as part of this PR? Since you're already using the IR builder, you might be able to join them pretty easily.
Hmm, yeah, it seems like the gradient unit test is failing; can you investigate? Also, I recommend reverting the diagnostics error message change, since a unit test appears to depend on the specific error message.
Force-pushed from 2a6514c to 7ba042f
@mbrookhart I'd like to work on the CUDA implementation separately. I think there is a bit of work to do there.
Force-pushed from d51a206 to 75d63da
@mbrookhart Actually, you've got your wish: I've added simple x86 and CUDA implementations.

@antinucleon Do you want to review this?
Scatter_nd is the inverse of gather_nd and also happens to be its gradient. The implementation here is not optimized, and there are no CPU- or GPU-specific implementations.
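A hedged NumPy sketch illustrating the claim above, not the TVM implementation: `scatter_nd` with the same indices maps `gather_nd`'s output back into the gathered positions (zeros elsewhere), which is exactly the shape of `gather_nd`'s gradient with respect to its input. The (M, N) indices layout is an assumption for this illustration.

```python
import numpy as np

def gather_nd(data, indices):
    # indices: (M, N) -> output: (N,), picking data[indices[:, j]] for each j
    return np.array([data[tuple(indices[:, j])] for j in range(indices.shape[1])])

def scatter_nd(updates, indices, shape):
    # Scatter `updates` back into a zero tensor, summing duplicate indices
    out = np.zeros(shape, dtype=updates.dtype)
    for j in range(indices.shape[1]):
        out[tuple(indices[:, j])] += updates[j]
    return out

data = np.arange(6.0).reshape(2, 3)
indices = np.array([[0, 1],
                    [2, 0]])          # picks data[0, 2] and data[1, 0]
g = gather_nd(data, indices)          # [2., 3.]
back = scatter_nd(g, indices, data.shape)
# `back` holds 2.0 at (0, 2) and 3.0 at (1, 0), zeros elsewhere:
# the gradient of gather_nd routes each output cotangent to the gathered cell.
```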
We found some bugs in the shape relation and compute; @tkonolige will push a fix and test cases shortly.
Force-pushed from 7fdcd78 to 9ec74c6
@tqchen This PR is all ready to go. Is there a reviewer you could suggest?
[RELAY,TOPI] Add scatter_nd op

Scatter_nd is the inverse of gather_nd and also happens to be its gradient. The implementation here is not optimized. There are no CPU- or GPU-specific implementations.

* formatting
* Fix tests
* formatting
* specify types on test
* Fix grad test
* scatter_nd cuda impl
* cuda impl
* x86 impl
* formatting
* fix shape rel
* fix tests
* formatting
@altanh