Currently, downsampling is 32×. Is there a way to change it to 16× and still use the pre-trained weights?

EDIT: I ended up reducing a stride in the block_args. Is this the best approach? Is there a way to know which block's stride (early or late stage) should be changed for the smallest drop in performance?
Yes, that should be a good way. Regardless, if possible you will want to fine-tune the network before using it, because the weights were trained with 32× downsampling. That said, since you only made a tiny change, fine-tuning should be very quick. Changing a late-stage stride is generally the safer choice: the early blocks' weights then still see feature maps at the resolutions they were trained on, and only the last stage operates at a larger resolution.
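The idea above can be sketched without the library itself. The snippet below assumes the string-encoded `blocks_args` format used by the EfficientNet reference code (e.g. `s22` means stride 2, `s11` stride 1), and the B0 default block list; treat both as illustrative assumptions. It flips the last stride-2 block to stride 1, which halves the overall downsampling from 32× to 16× while leaving all earlier stages untouched:

```python
# Hedged sketch, assuming the 'sNN'-style blocks_args strings from the
# EfficientNet reference implementation. Setting the LAST stride-2 block to
# stride 1 is the minimal late-stage change discussed above.
import re

# Assumed EfficientNet-B0 default block args (illustrative).
DEFAULT_BLOCKS_ARGS = [
    'r1_k3_s11_e1_i32_o16_se0.25',
    'r2_k3_s22_e6_i16_o24_se0.25',
    'r2_k5_s22_e6_i24_o40_se0.25',
    'r3_k3_s22_e6_i40_o80_se0.25',
    'r3_k5_s11_e6_i80_o112_se0.25',
    'r4_k5_s22_e6_i112_o192_se0.25',
    'r1_k3_s11_e6_i192_o320_se0.25',
]

def reduce_downsampling(blocks_args):
    """Return a copy of blocks_args with the last stride-2 block set to stride 1."""
    args = list(blocks_args)
    for i in range(len(args) - 1, -1, -1):  # walk backwards: latest stage first
        if '_s22_' in args[i]:
            args[i] = args[i].replace('_s22_', '_s11_')
            break
    return args

def total_stride(blocks_args, stem_stride=2):
    """Overall downsampling: stem stride times the product of all block strides."""
    stride = stem_stride
    for a in blocks_args:
        stride *= int(re.search(r'_s(\d)\d_', a).group(1))
    return stride
```

With the defaults above, `total_stride(DEFAULT_BLOCKS_ARGS)` is 32, and after `reduce_downsampling` it is 16. Because strides are the only thing changed (kernel sizes and channel counts stay the same), the pre-trained weights still load; only the spatial resolution seen by the last stage differs, which is why a short fine-tune is recommended.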