Looking at the code, why is there an SE module, and why is it not referenced in the DocFace paper?

Also, from my limited understanding of tf-slim, does the following
```python
with slim.arg_scope([slim.batch_norm, slim.dropout], is_training=phase_train):
    print('input shape:', [dim.value for dim in images.shape])
    net = conv_module(images, 0, [32, 64], scope='conv1')
    print('module_1 shape:', [dim.value for dim in net.shape])
    net = conv_module(net, 2, [64, 128], scope='conv2')
    print('module_2 shape:', [dim.value for dim in net.shape])
    net = conv_module(net, 4, [128, 256], scope='conv3')
    print('module_3 shape:', [dim.value for dim in net.shape])
    net = conv_module(net, 10, [256, 512], scope='conv4')
    print('module_4 shape:', [dim.value for dim in net.shape])
    net = conv_module(net, 6, [512], scope='conv5')
    print('module_5 shape:', [dim.value for dim in net.shape])
```
actually use batch_norm and dropout in the model at all? How does the model know when and where to create those layers?
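My current understanding is that `slim.arg_scope` does not create any layers by itself: it only pushes default keyword arguments (here `is_training=phase_train`) onto the listed ops, so batch_norm and dropout layers would appear only at the points where the model code actually calls them, e.g. through a `normalizer_fn` argument to `slim.conv2d` inside `conv_module`. A minimal sketch of that mechanism as I understand it (`conv_block`, the input shape, and the dropout call below are illustrative, not DocFace's actual code):

```python
import tensorflow as tf
import tensorflow.contrib.slim as slim

def conv_block(net, scope='conv'):
    # Hypothetical stand-in for DocFace's conv_module: the call to
    # slim.conv2d with normalizer_fn=slim.batch_norm is what actually
    # instantiates a batch-norm layer, not the arg_scope itself.
    with tf.variable_scope(scope):
        return slim.conv2d(net, 32, kernel_size=3,
                           normalizer_fn=slim.batch_norm)

images = tf.placeholder(tf.float32, [None, 96, 112, 3])
phase_train = tf.placeholder(tf.bool, name='phase_train')

# arg_scope only injects default keyword arguments (here is_training) into
# every call to slim.batch_norm / slim.dropout made while the scope is open.
with slim.arg_scope([slim.batch_norm, slim.dropout], is_training=phase_train):
    net = conv_block(images)                # batch_norm instantiated here
    net = slim.dropout(net, keep_prob=0.5)  # dropout instantiated here
```

So if that reading is right, the `with` block quoted above only configures `is_training`, and whether batch_norm/dropout exist at all depends on what `conv_module` does internally. Is that correct?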