TRADES (TRadeoff-inspired Adversarial DEfense via Surrogate-loss minimization)

This is the code for the ICML'19 paper "Theoretically Principled Trade-off between Robustness and Accuracy" by Hongyang Zhang (CMU, TTIC), Yaodong Yu (University of Virginia), Jiantao Jiao (UC Berkeley), Eric P. Xing (CMU & Petuum Inc.), Laurent El Ghaoui (UC Berkeley), and Michael I. Jordan (UC Berkeley).

The methodology won first place in the NeurIPS 2018 Adversarial Vision Challenge (Robust Model Track).

The attack method transferred from the TRADES robust model won first place in the NeurIPS 2018 Adversarial Vision Challenge (Targeted Attack Track).

Prerequisites

  • Python (3.6.4)
  • Pytorch (0.4.1)
  • CUDA
  • numpy

Install

We suggest installing the dependencies using Anaconda or Miniconda. For example:

$ wget https://repo.anaconda.com/archive/Anaconda3-5.1.0-Linux-x86_64.sh
$ bash Anaconda3-5.1.0-Linux-x86_64.sh
$ source ~/.bashrc
$ conda install pytorch=0.4.1

TRADES: A New Loss Function for Adversarial Training

What is TRADES?

TRADES minimizes a regularized surrogate loss L(.,.) (e.g., the cross-entropy loss) for adversarial training:

    min_f E { L(f(X), Y) + β · max_{X' ∈ B(X, ε)} L(f(X), f(X')) }

Important: the surrogate loss L(.,.) in the second term should be classification-calibrated according to our theory, in contrast to the L2 loss used in Adversarial Logit Pairing.

The first term encourages minimization of the natural error by reducing the "difference" between f(X) and Y, while the second (regularization) term encourages the output to be smooth: it pushes the decision boundary of the classifier away from the sample instances by minimizing the "difference" between the prediction on the natural example, f(X), and that on the adversarial example, f(X′). The tuning parameter β plays a critical role in balancing the importance of natural and robust errors.

Left figure: decision boundary learned by natural training. Right figure: decision boundary learned by TRADES.
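
For intuition, here is a condensed, unofficial sketch of this objective in PyTorch: the inner maximization runs PGD on the KL divergence between the predictions on natural and adversarial inputs, and the outer loss adds β times that divergence to the natural cross-entropy. The default values below are illustrative; trades.py in this repository is the reference implementation.

import torch
import torch.nn.functional as F

def trades_loss_sketch(model, x_natural, y, step_size=0.003, epsilon=0.031,
                       perturb_steps=10, beta=6.0):
    """Illustrative sketch of the TRADES objective (L_inf case)."""
    model.eval()
    with torch.no_grad():
        p_natural = F.softmax(model(x_natural), dim=1)
    # Inner maximization: PGD on the KL divergence, starting from a small
    # random perturbation around the natural example.
    x_adv = x_natural.detach() + 0.001 * torch.randn_like(x_natural)
    for _ in range(perturb_steps):
        x_adv.requires_grad_()
        kl = F.kl_div(F.log_softmax(model(x_adv), dim=1), p_natural,
                      reduction='batchmean')
        grad = torch.autograd.grad(kl, [x_adv])[0]
        x_adv = x_adv.detach() + step_size * grad.sign()
        # Project back into the epsilon-ball and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x_natural - epsilon),
                          x_natural + epsilon)
        x_adv = torch.clamp(x_adv, 0.0, 1.0)
    # Outer minimization: natural cross-entropy plus beta * KL regularizer.
    model.train()
    logits = model(x_natural)
    loss_natural = F.cross_entropy(logits, y)
    loss_robust = F.kl_div(F.log_softmax(model(x_adv), dim=1),
                           F.softmax(logits, dim=1), reduction='batchmean')
    return loss_natural + beta * loss_robust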

How to use TRADES to train robust models?

Natural training:

import torch.nn.functional as F

def train(args, model, device, train_loader, optimizer, epoch):
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        data, target = data.to(device), target.to(device)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(data), target)
        loss.backward()
        optimizer.step()

Adversarial training by TRADES:

To apply TRADES, copy 'trades.py' into your working directory and replace F.cross_entropy() above with trades_loss():

from trades import trades_loss

def train(args, model, device, train_loader, optimizer, epoch):
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        data, target = data.to(device), target.to(device)
        optimizer.zero_grad()
        # calculate robust loss - TRADES loss
        loss = trades_loss(model=model,
                           x_natural=data,
                           y=target,
                           optimizer=optimizer,
                           step_size=args.step_size,
                           epsilon=args.epsilon,
                           perturb_steps=args.num_steps,
                           beta=args.beta,
                           distance='l_inf')
        loss.backward()
        optimizer.step()
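
A minimal, hypothetical outer loop wiring up the train() function above might look as follows; the optimizer settings and epoch count are placeholders, and the training scripts listed under "Running demos" contain the configurations actually used.

import torch.optim as optim

# Hypothetical wiring; see train_trades_mnist.py / train_trades_cifar10.py
# for the settings actually used in this repository.
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
for epoch in range(1, 101):
    train(args, model, device, train_loader, optimizer, epoch)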

Arguments:

  • step_size: step size for perturbation
  • epsilon: limit on the perturbation size
  • num_steps: number of perturbation iterations for projected gradient descent (PGD)
  • beta: trade-off regularization parameter
  • distance: type of perturbation distance, 'l_inf' or 'l_2'

The trade-off regularization parameter beta is typically set in [1, 10]. Larger beta yields more robust but less accurate models.
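
For reference, a hypothetical argparse block defining the arguments consumed by the training loop above (the defaults here are illustrative, not the repository's):

import argparse

parser = argparse.ArgumentParser(description='Adversarial training with TRADES')
# Illustrative defaults; consult the training scripts for the real values.
parser.add_argument('--epsilon', type=float, default=0.031)    # perturbation budget
parser.add_argument('--step-size', type=float, default=0.007)  # PGD step size
parser.add_argument('--num-steps', type=int, default=10)       # PGD iterations
parser.add_argument('--beta', type=float, default=6.0)         # trade-off weight
args = parser.parse_args()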

Basic MNIST example (adversarial training by TRADES):

python mnist_example_trades.py

We adapt main.py from [link] to use our new loss trades_loss() during training.

Running demos

Adversarial training:

  • Train WideResNet-34-10 model on CIFAR10:
  $ python train_trades_cifar10.py
  • Train CNN model (four convolutional layers + three fully-connected layers) on MNIST:
  $ python train_trades_mnist.py
  • Train CNN model (two convolutional layers + two fully-connected layers) on MNIST (digits '1' and '3') for a binary classification problem:
  $ python train_trades_mnist_binary.py

Robustness evaluation:

  • Evaluate robust WideResNet-34-10 model on CIFAR10 under the FGSM-20 attack (i.e., 20-step PGD):
  $ python pgd_attack_cifar10.py
  • Evaluate robust CNN model on MNIST under the FGSM-40 attack (i.e., 40-step PGD):
  $ python pgd_attack_mnist.py

Experimental results

Results in the NeurIPS 2018 Adversarial Vision Challenge [link]

TRADES won the 1st place out of 1,995 submissions in the NeurIPS 2018 Adversarial Vision Challenge (Robust Model Track) on the Tiny ImageNet dataset, surpassing the runner-up approach by 11.41% in terms of L2 perturbation distance.

Top-6 results (out of 1,995 submissions) in the NeurIPS 2018 Adversarial Vision Challenge (Robust Model Track). The vertical axis represents the mean L2 perturbation distance that makes robust models fail to output correct labels.

Results in the Unrestricted Adversarial Examples Challenge [link]

In response to the Unrestricted Adversarial Examples Challenge, we implement TRADESv2 (a variant of TRADES with extra spatial-transformation-invariant considerations) on the bird-or-bicycle dataset without adversarial pretraining on the ImageNet dataset.

All percentages below correspond to the model's accuracy at 80% coverage.

| Defense | Submitted by | Clean data | Common corruptions | Spatial grid attack | SPSA attack | Boundary attack | Submission date |
|---|---|---|---|---|---|---|---|
| PyTorch ResNet50 (trained on bird-or-bicycle extras) | TRADESv2 | 100.0% | 100.0% | 99.5% | 100.0% | 95.0% | Jan 17th, 2019 (EST) |
| Keras ResNet (trained on ImageNet) | Google Brain | 100.0% | 99.2% | 92.2% | 1.6% | 4.0% | Sept 29th, 2018 |
| PyTorch ResNet (trained on bird-or-bicycle extras) | Google Brain | 98.8% | 74.6% | 49.5% | 2.5% | 8.0% | Oct 1st, 2018 |

To download our checkpoint with the best performance:

  • Step 1: Clone this repository and install the dependencies following the instructions above

  • Step 2: Download our evaluation code:

    git clone https://github.com/xincoder/google_attack.git
  • Step 3: Download our pre-trained weights: [download link] and put them into the folder "google_attack"

  • Step 4: Run the code:

    python eval_hongyangxin.py

TRADES + Random Smoothing [code]

TRADES + Random Smoothing achieves state-of-the-art certified robustness in the L_infinity norm at radius 2/255.
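
As background, randomized smoothing classifies by majority vote over Gaussian-noised copies of the input. A minimal Monte Carlo prediction sketch (not this repository's code; sigma and the sample count are illustrative) is:

import torch
import torch.nn.functional as F

def smoothed_predict(model, x, num_classes=10, sigma=0.12, n=100):
    # Majority vote over n Gaussian-noised copies of each input.
    with torch.no_grad():
        votes = torch.zeros(x.size(0), num_classes, device=x.device)
        for _ in range(n):
            noisy = x + sigma * torch.randn_like(x)
            votes += F.one_hot(model(noisy).argmax(dim=1), num_classes).float()
    return votes.argmax(dim=1)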

  • Results on certified robustness at radius 2/255 on CIFAR-10:
| Method | Robust accuracy | Natural accuracy |
|---|---|---|
| TRADES + Random Smoothing | 62.6% | 78.7% |
| Salman et al. (2019) | 60.8% | 82.1% |
| Zhang et al. (2020) | 54.0% | 72.0% |
| Wong et al. (2018) | 53.9% | 68.3% |
| Mirman et al. (2018) | 52.2% | 62.0% |
| Gowal et al. (2018) | 50.0% | 70.2% |
| Xiao et al. (2019) | 45.9% | 61.1% |

Want to attack TRADES? No problem!

TRADES is a new baseline method for adversarial defenses. We welcome attacks of all kinds against our defense models, and we provide checkpoints of our robust models on the MNIST and CIFAR10 datasets. On both datasets, all images are normalized to [0, 1].

How to download our CNN checkpoint for MNIST and WRN-34-10 checkpoint for CIFAR10?

cd TRADES
mkdir checkpoints
cd checkpoints

Then download our pre-trained models

[download link] (CIFAR10)

[download link] (MNIST)

and put them into the folder "checkpoints".

How to download MNIST dataset and CIFAR10 dataset?

cd TRADES
mkdir data_attack
cd data_attack

Then download the MNIST and CIFAR10 datasets

[download link] (CIFAR10_X)

[download link] (CIFAR10_Y)

[download link] (MNIST_X)

[download link] (MNIST_Y)

and put them into the folder "data_attack".

About the datasets

All the images in both datasets are normalized to [0, 1]. The array shapes are listed below, followed by a short loading sketch.

  • cifar10_X.npy -- a (10000, 32, 32, 3) numpy array
  • cifar10_Y.npy -- a (10000,) numpy array
  • mnist_X.npy -- a (10000, 28, 28) numpy array
  • mnist_Y.npy -- a (10000,) numpy array
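
A minimal sketch of loading these arrays for a PyTorch model (the file paths follow the data_attack layout above; the channel permutation applies to the CIFAR10 arrays only):

import numpy as np
import torch

X = np.load('./data_attack/cifar10_X.npy')  # (10000, 32, 32, 3), values in [0, 1]
Y = np.load('./data_attack/cifar10_Y.npy')  # (10000,)
# PyTorch convolutions expect NCHW, so move the channel axis forward.
X = torch.from_numpy(X).float().permute(0, 3, 1, 2)
Y = torch.from_numpy(Y).long()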

Load our CNN model for MNIST

import torch
from models.small_cnn import SmallCNN

device = torch.device("cuda")
model = SmallCNN().to(device)
model.load_state_dict(torch.load('./checkpoints/model_mnist_smallcnn.pt'))

For our model model_mnist_smallcnn.pt, the limit on the perturbation size is epsilon=0.3 (L_infinity perturbation distance).
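
As a quick sanity check after loading, the checkpoint's natural accuracy can be estimated on the arrays above (a hypothetical snippet reusing model and device from the loading code; the batch size is arbitrary):

import numpy as np
import torch

X = torch.from_numpy(np.load('./data_attack/mnist_X.npy')).float().unsqueeze(1)  # NCHW
Y = torch.from_numpy(np.load('./data_attack/mnist_Y.npy')).long()
model.eval()
correct = 0
with torch.no_grad():
    for i in range(0, len(X), 200):
        xb, yb = X[i:i + 200].to(device), Y[i:i + 200].to(device)
        correct += (model(xb).argmax(dim=1) == yb).sum().item()
print('natural accuracy: {:.2%}'.format(correct / len(X)))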

White-box leaderboard

| Attack | Submitted by | Natural accuracy | Robust accuracy | Time |
|---|---|---|---|---|
| Square Attack | Andriushchenko Maksym | 99.48% | 92.58% | Mar 10, 2020 |
| fab-attack | Francesco Croce | 99.48% | 93.33% | Jun 7, 2019 |
| FGSM-1,000 | (initial entry) | 99.48% | 95.60% | - |
| FGSM-40 | (initial entry) | 99.48% | 96.07% | - |

How to attack our CNN model on MNIST?

  • Step 1: Download mnist_X.npy and mnist_Y.npy.
  • Step 2: Run your own attack on mnist_X.npy and save your adversarial images as mnist_X_adv.npy.
  • Step 3: Put mnist_X_adv.npy under ./data_attack.
  • Step 4: Run the evaluation code:
  $ python evaluate_attack_mnist.py

Note that the adversarial images should be in [0, 1] and the largest allowed perturbation is epsilon = 0.3 (L_infinity).
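
A hypothetical pre-submission check that your adversarial images respect these constraints:

import numpy as np

X = np.load('./data_attack/mnist_X.npy')
X_adv = np.load('./data_attack/mnist_X_adv.npy')
assert X_adv.shape == X.shape
assert X_adv.min() >= 0.0 and X_adv.max() <= 1.0    # valid pixel range
assert np.abs(X_adv - X).max() <= 0.3 + 1e-6        # L_inf budget, small float slack
print('mnist_X_adv.npy satisfies the basic constraints.')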

Load our WideResNet (WRN-34-10) model for CIFAR10

import torch
from models.wideresnet import WideResNet

device = torch.device("cuda")
model = WideResNet().to(device)
model.load_state_dict(torch.load('./checkpoints/model_cifar_wrn.pt'))

For our model model_cifar_wrn.pt, the limit on the perturbation size is epsilon=0.031 (L_infinity perturbation distance).

White-box leaderboard

| Attack | Submitted by | Natural accuracy | Robust accuracy | Time |
|---|---|---|---|---|
| ODI-PGD | Yusuke Tashiro | 84.92% | 53.01% | Feb 16, 2020 |
| MultiTargeted | Sven Gowal | 84.92% | 53.07% | Oct 31, 2019 |
| fab-attack | Francesco Croce | 84.92% | 53.44% | Jun 7, 2019 |
| FGSM-1,000 | (initial entry) | 84.92% | 56.43% | - |
| FGSM-20 | (initial entry) | 84.92% | 56.61% | - |
| MI-FGSM | (initial entry) | 84.92% | 57.95% | - |
| FGSM | (initial entry) | 84.92% | 61.06% | - |
| DeepFool (L_inf) | (initial entry) | 84.92% | 61.38% | - |
| CW | (initial entry) | 84.92% | 81.24% | - |
| DeepFool (L_2) | (initial entry) | 84.92% | 81.55% | - |
| LBFGSAttack | (initial entry) | 84.92% | 81.58% | - |

How to attack our WRN-34-10 model on CIFAR10?

  • Step 1: Download cifar10_X.npy and cifar10_Y.npy.
  • Step 2: Run your own attack on cifar10_X.npy and save your adversarial images as cifar10_X_adv.npy.
  • Step 3: Put cifar10_X_adv.npy under ./data_attack.
  • Step 4: Run the evaluation code:
  $ python evaluate_attack_cifar10.py

Note that the adversarial images should be in [0, 1] and the largest allowed perturbation is epsilon = 0.031 (L_infinity).

Reference

For technical details and full experimental results, please check the paper.

@article{zhang2019theoretically,
  author  = {Hongyang Zhang and Yaodong Yu and Jiantao Jiao and Eric P. Xing and Laurent El Ghaoui and Michael I. Jordan},
  title   = {Theoretically Principled Trade-off between Robustness and Accuracy},
  journal = {arXiv preprint arXiv:1901.08573},
  year    = {2019}
}

Contact

Please contact yyu@eecs.berkeley.edu and hongyanz@ttic.edu if you have any questions about the code. Enjoy!
