Describe the bug
deepcell.model_zoo.fpn.__create_semantic_head, which is used by default in your segmentation models, applies a ReLU activation to the output when n_classes == 1. I would expect segmentation models with a single-channel output to use a sigmoid activation in their last layer. Furthermore, applying ReLU makes the logits unusable with binary_cross_entropy_with_logits, since sigmoid(relu(x)) has a lower bound of 0.5.
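As a quick standalone check of that claim (independent of deepcell): relu(x) >= 0 for every x, and sigmoid(0) == 0.5, so sigmoid(relu(x)) can never drop below 0.5.

import numpy as np
import tensorflow as tf

# sigmoid applied on top of ReLU never yields values below 0.5
x = tf.constant(np.linspace(-10.0, 10.0, 5), tf.float32)
print(tf.sigmoid(tf.nn.relu(x)).numpy())  # all entries are >= 0.5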
To Reproduce
Something like the code below returns values bounded to [0.5, 1]:
import numpy as np
import tensorflow as tf
from deepcell.model_zoo.panopticnet import PanopticNet

x = tf.constant(np.random.rand(1, 256, 256, 2), tf.float32)
model = PanopticNet(
    backbone="resnet50",
    input_shape=(256, 256, 2),
    norm_method="std",
    num_semantic_classes=[1],
)
out = model(x)
# Lower bound is 0.5 because the head already applied ReLU before this sigmoid
print(tf.reduce_min(tf.sigmoid(out)))
Expected behavior
The single-channel output should be passed through a sigmoid, not a ReLU.
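For illustration only (this is not deepcell's code, and the layer names here are made up), a minimal Keras sketch of the expected behavior for an n_classes == 1 head would either apply a sigmoid or return the raw logits, so they remain usable with a from_logits loss:

import tensorflow as tf

# Hypothetical single-channel semantic head: a 1x1 conv producing logits
inputs = tf.keras.Input(shape=(256, 256, 64))
logits = tf.keras.layers.Conv2D(1, 1, name="semantic_logits")(inputs)

# Expected final activation for a single-channel output:
probs = tf.keras.layers.Activation("sigmoid")(logits)  # probabilities in [0, 1]
# ...or return `logits` unchanged and train with
# tf.keras.losses.BinaryCrossentropy(from_logits=True)

model = tf.keras.Model(inputs, probs)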
Desktop (please complete the following information):
OS: Win 10
deepcell 0.12.3
Python 3.8.0
Thanks for maintaining this library :)