
[MRG] Sparse L1 Descent for PyTorch #1163

Merged (8 commits) on Mar 18, 2021

Conversation

iamgroot42
Contributor

@iamgroot42 iamgroot42 commented May 14, 2020

Implementing the Sparse L-1 Descent Algorithm in PyTorch.

@iamgroot42 iamgroot42 changed the title [WIP] Adding Sparse L1 Descent for PyTorch [MRG] Sparse L1 Descent for PyTorch May 15, 2020
@iamgroot42
Contributor Author

Implementing the Sparse L-1 Descent Algorithm in PyTorch, ported from the TF version.
All tests ported from TF version.
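For readers unfamiliar with the attack, the core idea of sparse L1 descent is to zero out all but the largest-magnitude gradient coordinates, take their sign, and normalize the update to a fixed L1 budget. Below is a minimal sketch of that step; the function name, defaults, and exact tie-handling are illustrative assumptions, not the merged implementation:

```python
import torch

def sparse_l1_descent_step(x, grad, eps_iter=1.0, grad_sparsity=99.0):
    """One sparse L1 descent step (illustrative sketch, not the merged code)."""
    g = grad.view(grad.size(0), -1)
    abs_g = g.abs()
    # Keep only coordinates at or above the grad_sparsity percentile of |grad|.
    thresh = torch.quantile(abs_g, grad_sparsity / 100.0, dim=1, keepdim=True)
    keep = (abs_g >= thresh).to(g.dtype)
    # Sign on the surviving coordinates, rescaled so each example's
    # update has L1 norm exactly eps_iter.
    step = g.sign() * keep
    step = step / step.abs().sum(dim=1, keepdim=True).clamp_min(1e-12)
    return x + eps_iter * step.view_as(x)
```

Because the update is normalized per example, each step moves exactly `eps_iter` in L1 distance while touching only a sparse set of coordinates.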

@nick-jia
Member

nick-jia commented Feb 9, 2021

Hello @iamgroot42, thanks a lot for your contribution, and sorry for the late reply. I read your code and it looks great! Just a few things I want to discuss with you:

  1. Regarding the sanity check, do you mind including the assertion at line 51 of cleverhans/future/torch/attacks/sparse_l1_descent.py in the asserts list, and setting the default sanity_checks to False?
  2. I would suggest adding grad_sparsity = torch.tensor(grad_sparsity) after line 61, since grad_sparsity may not have a 'shape' attribute.
  3. Compared to the TF version (see line 495 of cleverhans/utils_tf.py), the random initialization at line 88 of cleverhans/future/torch/attacks/sparse_l1_descent.py seems to miss the scaling step.

Thank you again!

@nick-jia nick-jia merged commit 43af686 into cleverhans-lab:master Mar 18, 2021