Neural Networks for Data Science Applications: Saliency Maps Interpretability
This project explores the interpretability of neural networks in data science applications, focusing on Saliency Maps as a tool for understanding model predictions.
Saliency Maps highlight which features or parts of an input contribute most to a neural network's output, typically by measuring how sensitive the predicted class score is to each input pixel. In this project, we use Saliency Maps to make our models' predictions easier to interpret.
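As a reference, below is a minimal sketch of how a vanilla-gradient saliency map can be computed. It assumes a TensorFlow/Keras image classifier (the framework and model names here are illustrative, not taken from the notebook): the gradient of the class score with respect to the input pixels is computed, and the maximum absolute gradient across colour channels is used as the per-pixel saliency.

```python
import numpy as np
import tensorflow as tf


def compute_saliency_map(model, image, class_index=None):
    """Vanilla-gradient saliency map for a single preprocessed image.

    model:       a tf.keras.Model that outputs class scores/logits.
    image:       array of shape (H, W, C), already preprocessed for the model.
    class_index: class to explain; defaults to the predicted class.
    Returns a (H, W) array normalized to [0, 1].
    """
    image = tf.convert_to_tensor(image[np.newaxis, ...], dtype=tf.float32)

    with tf.GradientTape() as tape:
        # Watch the input so gradients w.r.t. pixels are recorded.
        tape.watch(image)
        predictions = model(image, training=False)
        if class_index is None:
            class_index = int(tf.argmax(predictions[0]))
        score = predictions[0, class_index]

    # Gradient of the class score with respect to the input pixels.
    grads = tape.gradient(score, image)

    # Collapse the channel dimension with the maximum absolute gradient.
    saliency = tf.reduce_max(tf.abs(grads), axis=-1)[0]

    # Rescale to [0, 1] for visualization.
    saliency = (saliency - tf.reduce_min(saliency)) / (
        tf.reduce_max(saliency) - tf.reduce_min(saliency) + 1e-8
    )
    return saliency.numpy()
```

The resulting array can be displayed as a heatmap (for example with `matplotlib.pyplot.imshow`) next to the original image, which is how the figures in the notebook are presented.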
The repository contains the project's notebook, which was developed on the Google Colab platform. Below is an image extracted from the notebook, showing the original input image alongside its saliency maps.