
Conversation

@annxingyuan (Collaborator) commented Aug 2, 2019

Prelu weights are typically broadcast from the final dimension of x.

E.g. some typical inputs:
x.shape: [1, 64, 64, 16]
a.shape: [1, 1, 16]
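
For reference, a minimal standalone sketch of this broadcast using the non-fused tf.prelu op, with the shapes from the example above (the variable names and the 0.25 slope value are illustrative, not taken from this PR):

```ts
import * as tf from '@tensorflow/tfjs';

// x: e.g. a conv2d output, shape [1, 64, 64, 16].
// alpha: per-channel slopes, shape [1, 1, 16], broadcast against
// the final (channel) dimension of x.
const x = tf.randomNormal([1, 64, 64, 16]);
const alpha = tf.fill([1, 1, 16], 0.25);

// tf.prelu returns x where x >= 0, and alpha * x otherwise,
// broadcasting alpha over x.
const y = tf.prelu(x, alpha);
console.log(y.shape); // [1, 64, 64, 16]
```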

Holding off on implementing gradients for fused prelu because the alpha gradient needs access to an intermediate tensor that's buried in the fused kernel.

To see the logs from the Cloud Build CI, please join either our discussion or announcement mailing list.



@googlebot

We found a Contributor License Agreement for you (the sender of this pull request), but were unable to find agreements for all the commit author(s) or Co-authors. If you authored these, maybe you used a different email address in the git commits than was used to sign the CLA (login here to double check)? If these were authored by someone else, then they will need to sign a CLA as well, and confirm that they're okay with these being contributed to Google.
In order to pass this check, please resolve this problem and then comment "@googlebot I fixed it." If the bot doesn't comment, it means it doesn't think anything has changed.

ℹ️ Googlers: Go here for more info.

@annxingyuan self-assigned this Aug 4, 2019
@annxingyuan closed this Aug 4, 2019
@annxingyuan deleted the prelu_activation branch August 4, 2019 20:01