Algorithm problem #1
Comments
Hello, thanks for your interest in our paper! The gradients are accumulated for the model parameters and the perturbations M times in our algorithm. The way to realize this is to call loss.backward() M times, but inside the loop we only do gradient ascent on the perturbations, without updating the model parameters. After we exit the loop, we optimize the model parameters once. This matches both Algorithm 1 in the paper and our code. Hope this makes sense!
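For concreteness, here is a minimal PyTorch sketch of the scheme described above, assuming a hypothetical classifier `model`, input features `x`, labels `y`, and an `optimizer`; the function name `flag_step`, the step size `alpha`, and the sign-based ascent step are illustrative assumptions, so refer to attacks.py for the exact implementation.

```python
import torch
import torch.nn.functional as F

def flag_step(model, optimizer, x, y, m=3, alpha=1e-3):
    """One training step with M-fold gradient accumulation (sketch, not the repo's exact API)."""
    model.train()
    optimizer.zero_grad()

    # Random initial perturbation on the input features; we track its gradient.
    perturb = torch.empty_like(x).uniform_(-alpha, alpha).requires_grad_()

    # Divide by m so the m backward() calls accumulate to an averaged
    # gradient for the model parameters.
    loss = F.cross_entropy(model(x + perturb), y) / m

    for _ in range(m - 1):
        loss.backward()
        # Gradient *ascent* on the perturbation only; parameter gradients
        # keep accumulating, but no optimizer step happens inside the loop.
        perturb.data = perturb.data + alpha * torch.sign(perturb.grad.detach())
        perturb.grad.zero_()
        loss = F.cross_entropy(model(x + perturb), y) / m

    # Final backward pass, then a single parameter update outside the loop.
    loss.backward()
    optimizer.step()
    return loss.item()
```

Called once per batch, this performs m backward passes but exactly one optimizer.step(), which is the behavior described above.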
Hi:
May I ask which conference or journal this paper was published in?
Hi:
After reading your paper and code, I have a question.
In ogbn_proteins/attacks.py, the function flag is, I think, the core of this algorithm. In this function, you first compute a loss under the data and the perturbation; then, in the next args.m - 1 iterations, you compute the loss args.m - 1 more times and accumulate gradients for the perturbation. Finally, the total loss is backpropagated. So your code accumulates the perturbation's gradients several times while updating the model parameters only once. It seems that this doesn't match your paper!
In Algorithm 1 of your paper, lines 6 to 8, each iteration of the adversarial loop computes the gradients for both the perturbation and the parameters simultaneously.