
# SSL-backdoor-BLTO

🔥 🔥 🔥 Hello guys! Thanks for reading! This is the code repo for our ICLR 2024 paper "Backdoor Contrastive Learning via Bi-level Trigger Optimization". 🔥 🔥 🔥

## Introduction

  1. The victim of our work: our work focuses on backdoor attacks against unsupervised contrastive learning pipelines (e.g., MoCo, BYOL, SimSiam), especially the encoder pre-training stage.

  2. The path of our attack: we insert some trigger-stamped data into the victim's training dataset, so the victim obtains a backdoored encoder via unsupervised contrastive learning. When the attacker feeds trigger-stamped inputs into the encoder, it outputs embeddings similar to those of the target category, leading to follow-up misclassification in the downstream task (see the sketch after this list).

  3. The motivation of our work: previous attacks cannot effectively compromise the victim's unsupervised training process, leading to poor attack performance. Therefore, we aim to find triggers that can effectively compromise these unsupervised training pipelines.
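
To make the attack path concrete, here is a minimal sketch of the effect we optimize for. Everything in it is an illustrative stand-in (the toy `encoder`, the `net_G` generator, the 0.03 trigger budget, and the random "target prototype"), not our released implementation:

```python
import torch
import torch.nn.functional as F

# Toy stand-ins for illustration only: any image encoder and any trigger generator.
encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 128))
net_G = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)

x = torch.rand(8, 3, 32, 32)  # a batch of clean CIFAR-sized images in [0, 1]
x_trig = torch.clamp(x + 0.03 * torch.tanh(net_G(x)), 0.0, 1.0)  # bounded additive trigger (assumed form)

z_trig = F.normalize(encoder(x_trig), dim=1)
target_proto = F.normalize(torch.randn(128), dim=0)  # stand-in for the target class's mean embedding

# A successful backdoor drives this cosine similarity high for *any* triggered input;
# on a clean encoder it stays near chance level.
print((z_trig @ target_proto).mean().item())
```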

## Methodology

We utilize bi-level trigger optimization (BLTO) to optimize our backdoor trigger. Specifically, the framework of the training process is shown below:

A very classical bi-level optimization pipeline, isn't it? More details can be found in our paper!
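
For intuition only, here is a heavily simplified, hypothetical sketch of the two levels (toy encoder, no augmentations or predictor head, random surrogate batches in place of real data; the actual objective and surrogate models are described in the paper and our code):

```python
import torch
import torch.nn.functional as F

# Toy stand-ins; the real pipeline uses proper encoders and contrastive losses.
encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 128))
net_G = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)
opt_enc = torch.optim.SGD(encoder.parameters(), lr=0.1)
opt_G = torch.optim.Adam(net_G.parameters(), lr=1e-3)

def stamp(x):  # bounded additive trigger (assumed form)
    return torch.clamp(x + 0.03 * torch.tanh(net_G(x)), 0.0, 1.0)

for outer_step in range(10):
    x = torch.rand(32, 3, 32, 32)      # surrogate training batch
    x_tgt = torch.rand(32, 3, 32, 32)  # surrogate target-class batch

    # Inner level: simulate the victim's contrastive pretraining on poisoned data.
    for _ in range(5):
        z1 = F.normalize(encoder(x), dim=1)
        z2 = F.normalize(encoder(stamp(x).detach()), dim=1)  # poisoned view
        inner_loss = -(z1 * z2).sum(dim=1).mean()            # SimSiam-style similarity
        opt_enc.zero_grad(); inner_loss.backward(); opt_enc.step()

    # Outer level: update the trigger so triggered inputs align with the target class.
    z_trig = F.normalize(encoder(stamp(x)), dim=1)
    z_tgt = F.normalize(encoder(x_tgt), dim=1).detach()
    outer_loss = -(z_trig @ z_tgt.t()).mean()
    opt_G.zero_grad(); outer_loss.backward(); opt_G.step()
```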

## Fast lane to our pre-trained "triggers"

Optimizing a trigger might be painful and time-consuming for some practitioners, so here we directly provide some trained triggers in `./Xeon_checkpoint`; they can be used to attack CIFAR-10 (target: truck) and ImageNet-100 (target: Nautilus) unsupervised contrastive training (such as MoCo, SimSiam, BYOL, and SimCLR)!
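
A quick, hedged example of loading one of these checkpoints and stamping an image. We assume here that the `.pt` file stores the full generator module and that the trigger is applied additively; if your checkpoint is a plain state dict, instantiate the generator class from this repo instead:

```python
import torch

ckpt_path = "./Xeon_checkpoint/CIFAR_10/Net_G_ep400_CIFAR_10_Truck.pt"

# Depending on how the checkpoint was saved, it may be a full module or a
# state_dict; this sketch only handles the full-module case.
obj = torch.load(ckpt_path, map_location="cpu")
if isinstance(obj, torch.nn.Module):
    net_G = obj.eval()
else:
    raise RuntimeError("Checkpoint is a state_dict; build the generator class "
                       "from this repo and call load_state_dict(obj).")

x = torch.rand(1, 3, 32, 32)  # a CIFAR-10-sized image in [0, 1]
with torch.no_grad():
    x_trig = torch.clamp(x + net_G(x), 0.0, 1.0)  # assumed additive usage
```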

Let's begin by attacking a CIFAR-10 SimSiam training run!

  1. First, we prepare the poisoned training data for the victim!

```bash
cd ./Trigger/Generator_from_TTA
python Generate_CIAFR10_using_TTA_origin.py --utilze_Trigger_place ../../Xeon_checkpoint/CIFAR_10/Net_G_ep400_CIFAR_10_Truck.pt
```

Then you will get a poisoned CIFAR-10 dataset in `./Trigger/Generator_from_TTA/output/poisoned_CIFAR-10`. The poisoning rate is 1% by default and the attack target is "truck" (class id 9). Meanwhile, a clean CIFAR-10 dataset will be downloaded to `./Trigger/Generator_from_TTA/datasets/CIFAR-10`.
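
Under the hood, the poisoning step boils down to stamping the trigger on a small subset of the training images and writing them back. A hypothetical sketch (the name `poison_dataset`, the uniform subset choice, and the additive stamping are illustrative; the script's exact selection rule may differ):

```python
import numpy as np
import torch

def poison_dataset(images, net_G, rate=0.01, seed=0):
    """images: float tensor (N, 3, 32, 32) in [0, 1]; returns a poisoned copy."""
    rng = np.random.default_rng(seed)
    # Pick a 1% subset uniformly (for illustration; the real script decides
    # which images to stamp).
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    poisoned = images.clone()
    with torch.no_grad():
        poisoned[idx] = torch.clamp(images[idx] + net_G(images[idx]), 0.0, 1.0)
    return poisoned, idx
```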

  2. Second, you can utilize the crafted poisoned CIFAR-10 dataset to attack an unsupervised training pipeline. Here we provide a sample: the victim uses SimSiam to train a ResNet-18 encoder!

```bash
cd ./Dirty_code_for_attack
python Simsiam_backdoor_eval.py --data_dir ../Trigger/Generator_from_TTA/datasets/CIFAR-10 --save_dir ./outputs --net_G_place ../Xeon_checkpoint/CIFAR_10/Net_G_ep400_CIFAR_10_Truck.pt --device 0 --poisoned_dataset_dir ../Trigger/Generator_from_TTA/output/poisoned_CIFAR-10
```

Then you can see the clean accuracy (ACC on benign samples) and the attack success rate (ASR) throughout training (800 epochs in total; the metrics are printed every 10 epochs)!

```
...
Epoch: 0, loss: -0.26687145233154297, ACC===>, 34.36472039473684, ASR===>18.996710526315788
Epoch: 10, loss: -0.6419734954833984, ACC===>, 43.21546052631579, ASR===>12.386924342105262
Epoch: 20, loss: -0.7151311039924622, ACC===>, 51.788651315789465, ASR===>19.305098684210524
Epoch: 30, loss: -0.7409721612930298, ACC===>, 59.42639802631579, ASR===>35.98889802631579
Epoch: 40, loss: -0.767180860042572, ACC===>, 63.90830592105263, ASR===>62.2327302631579
Epoch: 50, loss: -0.7961236834526062, ACC===>, 65.31661184210526, ASR===>88.36348684210526
...
```
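
If you want to reproduce such metrics yourself, ACC and ASR can be measured with a simple kNN probe on frozen encoder features, roughly like the sketch below. The eval script's exact protocol may differ; `knn_predict` and `acc_and_asr` are hypothetical helpers, and target-class test samples are often excluded from ASR (omitted here for brevity):

```python
import torch
import torch.nn.functional as F

def knn_predict(feats, bank_feats, bank_labels, k=200):
    """Majority vote among the k nearest neighbors (by cosine similarity)."""
    k = min(k, bank_feats.size(0))
    sims = F.normalize(feats, dim=1) @ F.normalize(bank_feats, dim=1).t()
    top = sims.topk(k, dim=1).indices
    votes = bank_labels[top]            # (N, k) neighbor labels
    return votes.mode(dim=1).values     # majority vote per sample

def acc_and_asr(encoder, net_G, x_test, y_test, bank_x, bank_y, target=9):
    with torch.no_grad():
        bank = encoder(bank_x)
        pred_clean = knn_predict(encoder(x_test), bank, bank_y)
        x_trig = torch.clamp(x_test + net_G(x_test), 0.0, 1.0)  # assumed additive trigger
        pred_trig = knn_predict(encoder(x_trig), bank, bank_y)
    acc = (pred_clean == y_test).float().mean().item() * 100
    asr = (pred_trig == target).float().mean().item() * 100
    return acc, asr
```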

## Future Implementations

We plan to publish more code pipelines for our work, such as the implementation of the ImageNet-100 attack and more elegant code for optimizing the backdoor trigger.

## Limitations and future work

Our optimization pipeline, though able to craft strong and effective triggers, is not fully stable. The attacker may need to store several trained results and select the most competitive ones to launch the attack, which can be troublesome. We would be glad to see future work that addresses this issue!

## Citation

We would be glad if you find our work interesting or valuable. The BibTeX entry is shown below:

```bibtex
@inproceedings{
  sun2024backdoor,
  title={Backdoor Contrastive Learning via Bi-level Trigger Optimization},
  author={Weiyu Sun and Xinyu Zhang and Hao LU and Ying-Cong Chen and Ting Wang and Jinghui Chen and Lu Lin},
  booktitle={The Twelfth International Conference on Learning Representations},
  year={2024},
  url={https://openreview.net/forum?id=oxjeePpgSP}
}
```