Mentor : Dr. Soumya Dutta
Visualizing the Impact of Uncertainty and Adversarial Attacks on Deep Classifier Models
This project addresses the quality, confidence, and robustness of predictions made by deep classifier models based on convolutional neural networks. It takes a visual analytics approach, allowing users to understand how uncertainty estimation techniques and adversarial attacks affect the performance of these models. The project resulted in the Model Vizualizer Website, which shows the behavior of a classifier under different conditions such as uncertainty and adversarial attack. By exploring factors such as prediction confidence and accuracy, the tool visually compares the behavior of a model under adversarial attack with that of a benign (unattacked) model.
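As a rough illustration of the kind of comparison the website visualizes, the sketch below (not code from this repository; the function names and parameters are hypothetical) perturbs a batch of inputs with an FGSM attack and compares the classifier's softmax confidence on clean versus attacked inputs, assuming PyTorch, inputs scaled to [0, 1], and a trained model such as the MNIST CNN included here.

```python
# Minimal sketch, assuming PyTorch and a trained classifier; this is NOT the
# repository's actual implementation, only an illustration of the comparison idea.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.1):
    """Generate FGSM adversarial examples for input batch x with labels y."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction of the loss gradient's sign; clamp assumes inputs in [0, 1].
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

@torch.no_grad()
def prediction_confidence(model, x):
    """Return per-sample predicted class and softmax confidence."""
    probs = F.softmax(model(x), dim=1)
    conf, pred = probs.max(dim=1)
    return pred, conf

# Hypothetical usage, given a trained `model` and a test batch (x, y):
#   x_adv = fgsm_attack(model, x, y, epsilon=0.2)
#   clean_pred, clean_conf = prediction_confidence(model, x)
#   adv_pred, adv_conf = prediction_confidence(model, x_adv)
#   print(clean_conf.mean().item(), adv_conf.mean().item())
```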
This repository contains code and files for:
- CNN model used for the MNIST dataset
- AlexNet model used for the STL-10 dataset
- Source code for the front end of the Model Vizualizer Website
- UGP Presentation
- UGP Report