Hi, first of all, thanks for your research and your well-prepared code!
While adapting the code for my own custom model, I noticed that for linear layers the relprop method often returns all zeros when there are only a few input features and alpha is set to 1.0. Unfortunately, this means relevance propagation effectively stops at that layer, since all downstream relevance values become zero as well.
Below is the code to reproduce this behavior.
Output:
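For illustration, here is a minimal self-contained sketch of the same effect (an illustrative alpha-beta rule for a linear layer, not necessarily the exact `relprop` implementation from the repo; `linear_relprop`, `safe_divide`, and the toy tensors are stand-ins): with alpha = 1 (beta = 0) only the positive contributions are kept, so when every input contribution to an output neuron is negative, the propagated relevance collapses to zero.

```python
import torch

def safe_divide(a, b):
    # Return a / b, with 0 wherever the denominator is 0.
    return a / (b + (b == 0).float() * 1e-9) * (b != 0).float()

def linear_relprop(R, X, W, alpha=1.0):
    # Alpha-beta LRP rule for a linear layer y = X @ W.t(), with beta = alpha - 1.
    beta = alpha - 1
    pw, nw = W.clamp(min=0), W.clamp(max=0)
    px, nx = X.clamp(min=0), X.clamp(max=0)

    def backward(w1, w2, x1, x2):
        # Redistribute R along the selected (positive or negative) contributions.
        Z1, Z2 = x1 @ w1.t(), x2 @ w2.t()
        S = safe_divide(R, Z1 + Z2)
        return x1 * (S @ w1) + x2 * (S @ w2)

    activator = backward(pw, nw, px, nx)  # positive (excitatory) contributions
    inhibitor = backward(nw, pw, px, nx)  # negative (inhibitory) contributions
    return alpha * activator - beta * inhibitor

# Two input features, one output neuron; both contributions x_j * w_j are negative,
# so with alpha = 1 (beta = 0) there is nothing positive to distribute relevance over.
X = torch.tensor([[-1.0, 2.0]])
W = torch.tensor([[1.0, -1.0]])
R = torch.tensor([[1.0]])  # relevance arriving at the single output neuron

print(linear_relprop(R, X, W, alpha=1.0))  # tensor([[0., 0.]]) -- relevance is lost
print(linear_relprop(R, X, W, alpha=2.0))  # tensor([[-0.3333, -0.6667]]) -- beta = 1 keeps the negative part
```

The fewer input features there are, the more likely it is that all of them contribute negatively to a given output neuron, which is when the alpha = 1.0 setting zeroes out the relevance.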
I did not find any specific discussion of the impact of the alpha value in your paper or in the original LRP paper (Interpreting the Predictions of Complex ML Models by Layer-wise Relevance Propagation).
Are there any rules of thumb for choosing the alpha/beta ratio?