An ASR (Automatic Speech Recognition) adversarial attack repository.
Vanilla training and adversarial training in PyTorch
Evaluating CNN robustness against various adversarial attacks, including FGSM and PGD.
Individual study in the Computer Architecture and Systems Laboratory (CASYS) with Prof. Jaehyuk Huh, summer 2021
This work focuses on enhancing the robustness of target classifier models against adversarial attacks. To achieve this, a convolutional autoencoder-based approach is employed that counters adversarial perturbations added to the input images.
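A minimal sketch of such a convolutional autoencoder denoiser in PyTorch (the layer sizes and the 32x32 RGB input are illustrative assumptions, not details taken from the listed project):

```python
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    """Maps a (possibly adversarially perturbed) image back toward a clean
    reconstruction before it is passed to the target classifier."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),   # 32x32 -> 16x16
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 16x16 -> 8x8
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 8x8 -> 16x16
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 16x16 -> 32x32
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))
```

The autoencoder is typically trained to reconstruct clean images from perturbed ones, so at inference time the classifier sees the reconstruction rather than the attacked input.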
Adversarial Sample Generation
FGSM (Fast Gradient Sign Method) is an adversarial attack technique that adds small, calculated perturbations to input data to fool CNNs. Proposed by Ian Goodfellow et al. in 2014, it generates adversarial examples that mislead the model's predictions.
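A minimal sketch of the one-step FGSM update in PyTorch (the model, the epsilon value, and the [0, 1] pixel range are illustrative assumptions, not taken from any of the listed repositories):

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """One-step FGSM: perturb x in the direction of the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()    # shift each pixel by +/- epsilon
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixel values in a valid range
```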
Adversarial network attacks (PGD, pixel, FGSM) and noise on the MNIST image dataset using Python (PyTorch)
A classical or convolutional neural network model with adversarial defense protection
Adversarial attacks on CNNs using the FGSM technique.
A university project for the AI4Cybersecurity class.
This repository contains the implementation of three adversarial example attacks (FGSM, noise, and semantic) and a defensive distillation approach to defend against the FGSM attack.
This study was conducted in collaboration with the University of Prishtina (Kosovo) and the University of Oslo (Norway). This implementation is part of the paper entitled "Attack Analysis of Face Recognition Authentication Systems Using Fast Gradient Sign Method", published in the International Journal of Applied Artificial Intelligence by Taylo…
Adversarial attacks on a deep neural network trained on ImageNet
Implementations for several white-box and black-box attacks.
Learning adversarial robustness in machine learning, in both theory and practice.
This project demonstrates adversarial attacks on deep neural networks trained on the CIFAR-10 dataset.
Adversarial-Attacks-and-Defence
The Fast Gradient Sign Method (FGSM) combines a white-box approach with a misclassification goal: it tricks a neural network model into making wrong predictions. We use this technique to anonymize images.
Adversarial Attacks on Image data