Can adversarial training defend against poisoning attacks?

CS776A: Deep Learning for Computer Vision

Course Project: Adversarial Training Is All You Need

Table of Contents

  1. Code
  2. Implementation References
  3. Presentations
  4. Team Details

Code

  • The abstract is available as CS776_Project_Abstract_grp1.pdf.
  • The final report is available as CS776_Project_Report_grp1.pdf.
  • Presentations are available in the presentations/ directory.
  • Colab notebooks are available in offline form in the notebooks/ directory. You may upload them to Colab to run, or install all the dependencies to run them locally.
  • Trained weights are available in the weights/ directory.
  • The attacks and trainers have been implemented in the src/ directory.
  • Sample tests showing how to import and run code from src/ are available in the test/ directory; a hedged import sketch follows the install commands below.
  • To install the dependencies listed in requirements.txt, run the following commands:
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
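
As a quick orientation, here is a minimal sketch of how the pieces in src/ might be imported and combined. The names imported from src/ below (PGDAttack, AdversarialTrainer) and their signatures are illustrative assumptions, not the repository's confirmed API; the sample tests in test/ show the actual entry points.

# Illustrative sketch only: everything imported from src/ below is a
# hypothetical name and signature, not the repository's confirmed API.
# See the sample tests in test/ for real usage.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

from src.attacks import PGDAttack             # hypothetical name
from src.trainers import AdversarialTrainer   # hypothetical name

# Placeholder classifier and random stand-in data for illustration.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
data = TensorDataset(torch.rand(64, 3, 32, 32), torch.randint(0, 10, (64,)))
loader = DataLoader(data, batch_size=16)

attack = PGDAttack(model, eps=8 / 255, steps=7)   # hypothetical signature
trainer = AdversarialTrainer(model, attack)       # hypothetical signature
trainer.fit(loader)                               # hypothetical method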

Implementation References

  1. Projected Gradient Descent (PGD): https://arxiv.org/abs/1706.06083
  2. Fast Gradient Sign Method (FGSM): https://arxiv.org/abs/1412.6572
  3. Poisoning Backdoor Attack (BadNets): https://arxiv.org/abs/1708.06733
  4. Clean-Label Backdoor Attack: https://people.csail.mit.edu/madry/lab/cleanlabel.pdf
  5. Adversarial Trainer (Ensemble Adversarial Training): https://arxiv.org/abs/1705.07204
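
To make the references above concrete, here is a minimal, self-contained PyTorch sketch of the core ideas: FGSM (ref. 2), PGD as its iterated, projected form (ref. 1), a BadNets-style trigger (ref. 3), and a Madry-style adversarial training step (ref. 5 is the ensemble variant of this idea). The model, data, and hyperparameters are toy placeholders, not the project's actual configuration in src/.

import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """Fast Gradient Sign Method (ref. 2): one step in the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def pgd(model, x, y, eps, alpha, steps):
    """Projected Gradient Descent (ref. 1): iterated FGSM, projected onto the eps-ball."""
    x0 = x.clone().detach()
    x_adv = (x0 + torch.empty_like(x0).uniform_(-eps, eps)).clamp(0, 1)  # random start
    for _ in range(steps):
        x_adv = x_adv.clone().detach().requires_grad_(True)
        F.cross_entropy(model(x_adv), y).backward()
        x_adv = x_adv + alpha * x_adv.grad.sign()
        x_adv = (x0 + (x_adv - x0).clamp(-eps, eps)).clamp(0, 1)  # project, keep pixels valid
    return x_adv.detach()

def add_trigger(x, value=1.0):
    """BadNets-style backdoor trigger (ref. 3): stamp a small bright patch in one corner."""
    x = x.clone()
    x[..., -3:, -3:] = value
    return x

# Toy data and model; a real run would use CIFAR-10/MNIST batches and a CNN.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(8, 3, 32, 32)
y = torch.randint(0, 10, (8,))

x_fgsm = fgsm(model, x, y, eps=8 / 255)   # one-step attack
x_bad = add_trigger(x)                    # poisoned inputs for a backdoor

# Madry-style adversarial training step: train on PGD examples instead of clean ones
# (ref. 5's ensemble variant additionally draws perturbations from held-out models).
opt = torch.optim.SGD(model.parameters(), lr=0.1)
for _ in range(3):  # toy epochs
    x_adv = pgd(model, x, y, eps=8 / 255, alpha=2 / 255, steps=7)
    opt.zero_grad()
    F.cross_entropy(model(x_adv), y).backward()
    opt.step()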

Presentations

The project presentations are available in the presentations/ directory.

Team Details

  • Name: Four of a Kind

  • Members:

    Name                   Roll No.
    Antreev Singh Brar     190163
    Anubhav Kalyani        190164
    Gurbaaz Singh Nandra   190349
    Pramodh V Gopalan      190933