Problem about the dense layer #13
Comments
This is now a quite old comment, but I'll answer it anyway and hope it helps you or someone else reading. The first argument of the Keras Dense layer sets the number of output units. The number of outputs is 2 because the model classifies two classes, person and non-person, as described in the README. This architecture is used with one-hot labelling, i.e. the label is a one-hot vector such as [1, 0] for person and [0, 1] for non-person.

I'm not sure from your question whether you were also wondering about this, but in addition, there is no need for any other hidden layers after the GAP: GAP can replace the FC layers that usually follow the convolution layers, connecting straight from the GAP to the output layer with softmax activation. See this page for more details: https://paperswithcode.com/method/global-average-pooling.
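To make that concrete, here is a minimal sketch of a GAP-plus-softmax classification head. It is written against the current tf.keras API rather than the repository's own model.py, and the input shape and layer widths are illustrative assumptions, not values taken from the repo.

```python
# Minimal sketch of a CAM-style head: last conv feature maps -> GAP -> Dense(2, softmax).
# Written with tf.keras; input shape and layer widths are illustrative assumptions.
from tensorflow.keras import layers, models

inputs = layers.Input(shape=(224, 224, 3))
x = layers.Conv2D(64, 3, padding="same", activation="relu")(inputs)
x = layers.Conv2D(1024, 3, padding="same", activation="relu")(x)  # last conv feature maps
x = layers.GlobalAveragePooling2D()(x)                # one averaged value per feature map
outputs = layers.Dense(2, activation="softmax")(x)    # 2 units: person vs. non-person

model = models.Model(inputs, outputs)
model.compile(optimizer="sgd", loss="categorical_crossentropy", metrics=["accuracy"])
```

With this setup the labels are one-hot rows such as `[[1, 0], [0, 1]]`, and the Dense kernel has shape (1024, 2): one weight per feature map per class, which is exactly what the CAM weighted sum uses.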
I had a problem with the approach, which shows up on this line:
keras-cam/model.py, line 59 (commit 2b7ada2)
As mentioned in the paper, the trained weights of this layer are used to compute a weighted sum over the last convolutional layer's activation maps.
To model a non-linear function and predict class scores, an MLP should have at least two layers (one hidden layer plus an output layer, e.g. with softmax).
But here, right after the GAP layer, only one FC layer with two units is added for classification.
Can anyone explain the reason?
And why is the number of units 2?
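For reference, here is a hedged sketch of the weighted sum described in the question, assuming the last conv layer's feature maps and the kernel of the Dense layer after GAP have already been extracted. The function and argument names are hypothetical, not taken from the repository.

```python
# Hedged sketch of the CAM weighted sum; not the repository's code.
# feature_maps and dense_weights are assumed to be extracted beforehand.
import numpy as np

def class_activation_map(feature_maps, dense_weights, class_idx):
    """feature_maps: (H, W, C) activations of the last conv layer for one image.
    dense_weights: (C, num_classes) kernel of the Dense layer after GAP.
    Returns the (H, W) class activation map for class `class_idx`."""
    w = dense_weights[:, class_idx]                         # (C,) per-feature-map weights
    return np.tensordot(feature_maps, w, axes=([2], [0]))   # weighted sum over channels
```

Each column of the Dense kernel assigns one weight per feature map for a class, which is exactly the weighted sum the paper describes.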