This code reproduces the results of the paper *Saliency Attack: Towards Imperceptible Black-box Adversarial Attack*, accepted by ACM Transactions on Intelligent Systems and Technology.
- Python 3.6
- TensorFlow 1.15.0 (with GPU support)
- opencv-python
- Pillow
- Install the required libraries:
  ```shell
  pip install -r requirements.txt
  ```
- Download the ImageNet validation dataset (images and corresponding labels). Note that the validation images must be contained in a folder named `val`, and the validation label file must be named `val.txt`.
- For images:
  ```shell
  mkdir val
  wget https://image-net.org/data/ILSVRC/2012/ILSVRC2012_img_val.tar
  tar -xf ILSVRC2012_img_val.tar -C val
  ```
- For labels:
  ```shell
  wget http://dl.caffe.berkeleyvision.org/caffe_ilsvrc12.tar.gz
  tar -xvzf caffe_ilsvrc12.tar.gz val.txt
  ```
- Place the directory `val` and the file `val.txt` in the same directory.
- Download a pretrained Inception-v3 model from the TensorFlow model library and decompress it:
  ```shell
  wget http://download.tensorflow.org/models/inception_v3_2016_08_28.tar.gz
  tar -xvzf inception_v3_2016_08_28.tar.gz
  ```
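After the steps above, `val.txt` from the Caffe bundle should list one `<filename> <class-index>` pair per line. A minimal sketch to sanity-check that the images and labels line up before running the attack (the function names here are ours, not part of this repo):

```python
import os

def load_val_labels(label_file):
    """Parse ImageNet validation labels: one '<filename> <class-index>' per line."""
    labels = {}
    with open(label_file) as f:
        for line in f:
            name, idx = line.split()
            labels[name] = int(idx)
    return labels

def check_dataset(img_dir, label_file):
    """Return (number of label entries, number whose image file exists)."""
    labels = load_val_labels(label_file)
    present = sum(1 for name in labels
                  if os.path.exists(os.path.join(img_dir, name)))
    return len(labels), present
```

If the two counts differ, some validation images are missing from `val`.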
- Set `IMAGENET_PATH` in `main.py` and `MODEL_DIR` in `tools/inception_v3_imagenet.py` to the locations of the dataset and the model, respectively.
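As an illustration, the edited lines might look like the following; both paths are placeholders for your own locations, not the repo's defaults:

```python
# main.py -- example value only; point this at the directory
# that contains val/ and val.txt
IMAGENET_PATH = "/data/imagenet"

# tools/inception_v3_imagenet.py -- example value only; point this at the
# directory where the Inception-v3 checkpoint was decompressed
MODEL_DIR = "/data/models/inception_v3"
```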
- For saliency maps:
  - We provide 1000 saliency maps in the directory `saliency-maps` for testing. They were generated by the Pyramid Feature Attention Network for Saliency Detection, as described in our paper. We also provide an implementation for generating and saving saliency maps with different thresholds.
- Run the attack:
  ```shell
  python main.py --sample_size 1000 --epsilon 0.05 --max_queries 10000 --block_size 16
  ```
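The thresholding step mentioned above can be sketched as follows: a grayscale saliency map in [0, 1] is binarized at a chosen threshold to select the region the attack may perturb. This is a NumPy sketch under our own naming, not the repo's implementation:

```python
import numpy as np

def binarize_saliency(saliency, threshold=0.5):
    """Binarize a saliency map in [0, 1]: pixels >= threshold become 1, others 0."""
    return (saliency >= threshold).astype(np.uint8)

def salient_fraction(mask):
    """Fraction of pixels marked salient -- useful when comparing thresholds."""
    return float(mask.mean())
```

A higher threshold keeps a smaller, more confident salient region, which restricts the perturbation to fewer pixels and keeps it harder to notice.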