vanilla training and adversarial training in PyTorch
Updated Feb 19, 2022 - Python
Explores adversarial attacks on deep learning models, focusing on image classification with PyTorch. Implements and demonstrates the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) attacks against a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN) trained on the MNIST dataset.
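The two attacks mentioned above can be sketched as follows; this is a minimal illustration in PyTorch, not the code from any of the listed repositories, and the function names `fgsm_attack` / `pgd_attack` are hypothetical. FGSM takes a single step of size eps along the sign of the input gradient; PGD iterates smaller steps and projects back into the eps-ball around the original input.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps):
    # FGSM: one step of size eps along the sign of the loss gradient w.r.t. x.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

def pgd_attack(model, x, y, eps, alpha, steps):
    # PGD: repeated FGSM-style steps of size alpha, each followed by a
    # projection back into the L-infinity ball of radius eps around x.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into the eps-ball
        x_adv = x_adv.clamp(0, 1)                 # keep valid pixel range
    return x_adv.detach()

# Example with a toy MNIST-shaped classifier (untrained, for illustration only).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(4, 1, 28, 28)          # batch of fake 28x28 "images"
y = torch.randint(0, 10, (4,))        # fake labels
adv = pgd_attack(model, x, y, eps=0.3, alpha=0.1, steps=5)
```

The perturbation is guaranteed to stay within the eps-ball and in the valid [0, 1] image range, which is the defining property that separates PGD from unconstrained gradient ascent.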
Implementations for several white-box and black-box attacks.
Adversarial defense by retrieval-based methods
"Neural Computing and Applications" Published Paper (2023)