An ASR (Automatic Speech Recognition) adversarial attack repository.
Vanilla training and adversarial training in PyTorch.
Evaluating CNN robustness against various adversarial attacks, including FGSM and PGD.
Individual study in the Computer Architecture and Systems Laboratory (CASYS) with Prof. Jaehyuk Huh in Summer 2021.
This work aims to enhance the robustness of targeted classifier models against adversarial attacks. To achieve this, a convolutional autoencoder is placed in front of the classifier to counter adversarial perturbations added to the input images.
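As a rough illustration of this kind of defense (not this repository's actual architecture), a convolutional autoencoder can reconstruct, and thereby partially denoise, incoming images before classification. The sketch below assumes single-channel 28x28 inputs such as MNIST; all names and layer sizes are hypothetical.

```python
# Hypothetical input-purification defense: pass (possibly perturbed) images
# through a convolutional autoencoder, then classify the reconstruction.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # 28x28 -> 14x14 -> 7x7 feature maps, then back up to 28x28.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def defended_predict(autoencoder, classifier, x):
    # Reconstruct the input first, then classify the cleaned image.
    return classifier(autoencoder(x))
```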
Adversarial Sample Generation
Adversarial network attacks (PGD, pixel, FGSM) applied as noise to the MNIST image dataset using Python (PyTorch).
Adversarial attacks on a CNN using the FGSM technique.
This study was conducted in collaboration with the University of Prishtina (Kosovo) and the University of Oslo (Norway). This implementation is part of the paper entitled "Attack Analysis of Face Recognition Authentication Systems Using Fast Gradient Sign Method", published in the International Journal of Applied Artificial Intelligence by Taylo…
A classical or convolutional neural network model with adversarial defense protection
FGSM (Fast Gradient Sign Method) is an adversarial attack technique that adds small, calculated perturbations to input data to fool CNNs. Proposed by Ian Goodfellow in 2014, it generates adversarial examples to mislead the model's predictions.
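The core of FGSM is a single signed-gradient step on the input. Below is a minimal PyTorch sketch of that step; `model`, `x`, `y`, and `epsilon` are illustrative placeholders rather than code from any repository listed here, and pixel values are assumed to lie in [0, 1].

```python
# Minimal FGSM sketch: x_adv = clamp(x + epsilon * sign(grad_x loss)).
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    # Compute the loss gradient with respect to the input.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Take one step in the direction of the gradient's sign.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    # Keep the perturbed image in the valid pixel range (assumed [0, 1]).
    return x_adv.clamp(0.0, 1.0).detach()
```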
A university project for the AI4Cybersecurity class.
This repository contains implementations of three adversarial example attacks (FGSM, noise, and a semantic attack) and a defensive distillation approach for defending against the FGSM attack.
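For reference, the heart of defensive distillation is training a second model on the temperature-softened outputs of an already-trained model. The sketch below is a hypothetical PyTorch training step under that assumption; `teacher`, `student`, `optimizer`, and the temperature `T` are placeholders, not this repository's code.

```python
# Hypothetical defensive-distillation step: the teacher's temperature-softened
# probabilities become soft targets for the distilled (student) model.
import torch
import torch.nn.functional as F

def distillation_step(student, teacher, x, optimizer, T=20.0):
    with torch.no_grad():
        soft_targets = F.softmax(teacher(x) / T, dim=1)   # soft labels at temperature T
    log_probs = F.log_softmax(student(x) / T, dim=1)      # student trained at the same T
    loss = F.kl_div(log_probs, soft_targets, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```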
Exploring the concept of "adversarial attacks" on deep learning models, focusing on image classification with PyTorch. Implements and demonstrates the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) attacks against a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN) trained on the MNIST dataset.
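For context, PGD can be viewed as an iterated FGSM with projection back onto an L-infinity ball around the original input. The sketch below assumes a PyTorch classifier `model`, inputs `x` in [0, 1] with labels `y`, and illustrative `epsilon`, `alpha`, and `steps` values; it is not taken from the repository above.

```python
# Minimal L-infinity PGD sketch: repeated signed-gradient steps, each followed
# by projection into the epsilon-ball around the clean input.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=0.3, alpha=0.01, steps=40):
    x = x.detach()
    # Random start inside the epsilon-ball.
    x_adv = (x + torch.empty_like(x).uniform_(-epsilon, epsilon)).clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Signed-gradient step, then project back into the epsilon-ball.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon)
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()
```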
Implementations for several white-box and black-box attacks.
Learning adversarial robustness in machine learning, in both theory and practice.
Adversarial Attacks on Image data
This repository contains the codebase for Jailbreaking Deep Models, which investigates the vulnerability of deep convolutional neural networks to adversarial attacks. The project systematically implements and analyzes Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), and localized patch-based attacks on the pretrained
Thesis notebooks for adversarial camouflage in autonomous vehicles (YOLOv8 + experiments)
Adversarial attacks on SRNet.