This project classifies pet images with a pre-trained convolutional neural network and interprets the model's predictions using SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-Agnostic Explanations). It uses the Oxford-IIIT Pet Dataset for multi-class classification, demonstrating how to train a robust image classifier and explain its predictions with interpretable AI methods.
Image Classification:
- Utilizes a pre-trained convolutional neural network (e.g., ResNet, VGG, or another backbone) to classify pet images into multiple categories.
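The repository doesn't pin a specific backbone, so the following is a minimal PyTorch sketch assuming an ImageNet-pretrained ResNet-50 (torchvision >= 0.13) with a new classification head for the 37 pet breeds; the backbone choice and freezing strategy are assumptions, not the notebook's exact setup:

```python
# Minimal sketch (assumed setup): ImageNet-pretrained ResNet-50 with a
# new classification head for the 37 Oxford-IIIT Pet breeds.
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 37  # the Oxford-IIIT Pet Dataset has 37 breed categories

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for param in model.parameters():
    param.requires_grad = False            # freeze the pre-trained backbone
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # trainable head
```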
Model Training and Validation:
- Trains the classifier on the Oxford-IIIT Pet Dataset with appropriate data augmentation and a held-out validation split.
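As a hedged illustration (the notebook's exact transforms and split ratio may differ), torchvision ships a ready-made loader for the dataset:

```python
# Sketch of an augmentation + validation setup; the normalisation values
# are the standard ImageNet statistics, and the 90/10 split is an assumption.
import torch
from torchvision import datasets, transforms

train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224),     # random crop + rescale
    transforms.RandomHorizontalFlip(),     # simple label-preserving flip
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

full_train = datasets.OxfordIIITPet("data", split="trainval",
                                    transform=train_tf, download=True)

# Hold out 10% for validation (note: this simple split keeps the
# training-time augmentations on the validation subset).
n_val = len(full_train) // 10
train_set, val_set = torch.utils.data.random_split(
    full_train, [len(full_train) - n_val, n_val])
```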
Explainability:
- Implements SHAP and LIME to provide detailed insights into the model's predictions (illustrative sketches follow below):
- SHAP attributes each prediction to pixel-level importance, highlighting the image regions that drive the output.
- LIME explains individual predictions by locally perturbing the input image and fitting an interpretable surrogate model.
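A hedged sketch of the SHAP side, reusing the `model`, `train_set`, and `val_set` defined in the sketches above; the explainer type is not specified by the repository, so `GradientExplainer`, the classic list-of-arrays shap API, and the batch sizes are assumptions:

```python
# SHAP pixel-attribution sketch using GradientExplainer (classic shap API).
import numpy as np
import shap
import torch

background, _ = next(iter(torch.utils.data.DataLoader(train_set, batch_size=32)))
test_images, _ = next(iter(torch.utils.data.DataLoader(val_set, batch_size=4)))

model.eval()
explainer = shap.GradientExplainer(model, background)
# Explain only the 3 highest-scoring classes per image
shap_values, class_idx = explainer.shap_values(test_images, ranked_outputs=3)

# shap.image_plot expects NHWC numpy arrays, so transpose from NCHW
shap_nhwc = [np.transpose(sv, (0, 2, 3, 1)) for sv in shap_values]
images_nhwc = np.transpose(test_images.numpy(), (0, 2, 3, 1))
shap.image_plot(shap_nhwc, images_nhwc)
```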
Visualization:
- Provides clear visualizations of model predictions and the explanations generated by SHAP and LIME.
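A hedged sketch of how such a LIME visualization can be produced with `lime_image` and skimage's `mark_boundaries`; the `predict_fn` wrapper, the normalisation constants, and the `sample_pet.jpg` path are illustrative assumptions rather than the notebook's exact code:

```python
# LIME sketch: explain one prediction and outline the influential superpixels.
import numpy as np
import torch
import torch.nn.functional as F
import matplotlib.pyplot as plt
from PIL import Image
from lime import lime_image
from skimage.segmentation import mark_boundaries

mean = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
std = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)

def predict_fn(images):
    # LIME passes (N, H, W, 3) float arrays; convert to normalised NCHW
    batch = torch.tensor(images, dtype=torch.float32).permute(0, 3, 1, 2)
    batch = (batch - mean) / std
    with torch.no_grad():
        return F.softmax(model(batch), dim=1).numpy()

model.eval()
# "sample_pet.jpg" is a placeholder path for any RGB pet image
img = np.array(
    Image.open("sample_pet.jpg").convert("RGB").resize((224, 224))
) / 255.0

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(img, predict_fn,
                                         top_labels=3, num_samples=1000)
temp, mask = explanation.get_image_and_mask(explanation.top_labels[0],
                                            positive_only=True,
                                            num_features=5, hide_rest=False)
plt.imshow(mark_boundaries(temp, mask))
plt.axis("off")
plt.show()
```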
To run the project, ensure the following are installed:
- Python 3.x
- TensorFlow or PyTorch (depending on the model implementation)
- NumPy
- Matplotlib
- SHAP
- LIME
- scikit-learn
- OpenCV (optional, for image processing)
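A typical installation command for the PyTorch route (package names are the standard PyPI distributions; swap in tensorflow if using that backend):

pip install torch torchvision numpy matplotlib shap lime scikit-learn opencv-python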
Clone the repository:
git clone https://github.com/VivianeLe/Image_Classification_SHAP_Lime.git
cd Image_Classification_SHAP_Lime
Open the Jupyter Notebook:
jupyter notebook OxfordPet-Classification.ipynb
Run the cells sequentially to:
- Load and preprocess the Oxford-IIIT Pet Dataset.
- Train the classification model (a compact training-loop sketch follows this list).
- Visualize SHAP and LIME explanations for selected predictions.
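For orientation, a hedged sketch of what the training and validation cells boil down to, reusing `model`, `train_set`, and `val_set` from the sketches above; the Adam optimizer, learning rate, batch size, and epoch count are illustrative assumptions:

```python
# Compact fine-tuning loop: train the new head, then report validation accuracy.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)
val_loader = torch.utils.data.DataLoader(val_set, batch_size=32)

for epoch in range(5):
    model.train()
    for x, y in train_loader:
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()

    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in val_loader:
            x, y = x.to(device), y.to(device)
            correct += (model(x).argmax(1) == y).sum().item()
            total += y.size(0)
    print(f"epoch {epoch}: val accuracy {correct / total:.3f}")
```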
- Classification Accuracy: The model achieves high accuracy on the validation set, demonstrating that it classifies pet images reliably.
- Explainability Outputs:
- SHAP produces heatmaps highlighting important image regions influencing the model's decisions.
- LIME produces superpixel-based explanations for individual predictions by perturbing the input image.
- Oxford-IIIT Pet Dataset: https://www.robots.ox.ac.uk/~vgg/data/pets/
- SHAP Documentation: https://shap.readthedocs.io/
- LIME Documentation: https://github.com/marcotcr/lime
- Relevant research papers and blogs on model interpretability.
Contributions are welcome! Feel free to submit issues or pull requests for improvements or new features.
This project is licensed under the MIT License. See the LICENSE file for details.