Thank you so much for the wonderful library. I have one doubt about the carlini_wagner attack, though. The original paper uses the change of variables `1/2 * (tanh(w) + 1)` to ensure that the values of `x + delta` lie between 0 and 1. It seems the code here uses `tanh` only for rescaling, while the optimization is still performed on `x` and `x + delta` directly. In that case, why is the extra clipping needed? If `tanh` is only used to keep values within the valid range, could a plain `torch.clamp()` call be used instead? Is there still a reason to use `tanh()` then?
I am a bit confused about the implementation, and some pointers would be really appreciated.
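For reference, here is a minimal sketch of the two approaches as I understand them. This is my own illustration, not the library's code, and the input shape is arbitrary:

```python
import torch

x = torch.rand(1, 3, 32, 32)  # clean input in [0, 1]; shape is arbitrary

# Change of variables from the paper: optimize an unconstrained w so that
# x_adv = 1/2 * (tanh(w) + 1) lies in (0, 1) by construction. No clipping
# is needed, and the optimizer never has to respect a hard box constraint.
w = torch.atanh((2 * x - 1).clamp(-1 + 1e-6, 1 - 1e-6)).requires_grad_()
x_adv = 0.5 * (torch.tanh(w) + 1)

# Plain clamping alternative: optimize delta directly and clip x + delta
# into the valid box. The constraint holds, but clamp has zero gradient
# wherever x + delta falls outside [0, 1], so those pixels stop updating.
delta = torch.zeros_like(x, requires_grad=True)
x_adv_clamped = torch.clamp(x + delta, 0.0, 1.0)
```

My understanding is that the paper prefers the `tanh` parameterization because clamping kills the gradient at the box boundary, but I am not sure how that reasoning applies to this implementation.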