Advanced AI Explainability for computer vision. Support for CNNs, Vision Transformers, Classification, Object detection, Segmentation, Image similarity and more.
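A minimal usage sketch, assuming this entry refers to the pytorch-grad-cam package; the model, target layer, and class index below are placeholders rather than recommendations:

```python
import torch
from torchvision.models import resnet50
from pytorch_grad_cam import GradCAM
from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget

model = resnet50(weights=None).eval()        # load pretrained weights in practice
target_layers = [model.layer4[-1]]           # last conv block is a common choice for ResNets

input_tensor = torch.randn(1, 3, 224, 224)   # stand-in for a preprocessed image batch
targets = [ClassifierOutputTarget(281)]      # attribute w.r.t. an arbitrary class index

cam = GradCAM(model=model, target_layers=target_layers)
grayscale_cam = cam(input_tensor=input_tensor, targets=targets)[0]  # HxW heatmap in [0, 1]
print(grayscale_cam.shape)
```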
Model interpretability and understanding for PyTorch
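This description matches the Captum library; assuming that, a minimal Integrated Gradients sketch with a toy model (the network and data are placeholders):

```python
import torch
from captum.attr import IntegratedGradients

# toy model and inputs; in practice use your trained network and real data
model = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU(), torch.nn.Linear(32, 3))
model.eval()
inputs = torch.randn(4, 10, requires_grad=True)

ig = IntegratedGradients(model)
# attribute each input feature to the logit of class 1
attributions, delta = ig.attribute(inputs, target=1, return_convergence_delta=True)
print(attributions.shape)  # same shape as the inputs: (4, 10)
```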
StellarGraph - Machine Learning on Graphs
Algorithms for explaining machine learning models
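This description matches Seldon's alibi library; assuming that, a short sketch of an anchor explanation for a tabular classifier (model and dataset are stand-ins):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from alibi.explainers import AnchorTabular

data = load_iris()
clf = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# anchor explanations: "while these feature conditions hold, the prediction stays the same"
explainer = AnchorTabular(predictor=clf.predict, feature_names=list(data.feature_names))
explainer.fit(data.data)

explanation = explainer.explain(data.data[0], threshold=0.95)
print(explanation.anchor)     # human-readable rule for this prediction
print(explanation.precision)  # fraction of perturbed samples on which the rule holds
```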
Class activation maps for your PyTorch models (CAM, Grad-CAM, Grad-CAM++, Smooth Grad-CAM++, Score-CAM, SS-CAM, IS-CAM, XGrad-CAM, Layer-CAM)
A JAX research toolkit for building, editing, and visualizing neural networks.
Stanford NLP Python library for Representation Finetuning (ReFT)
moDel Agnostic Language for Exploration and eXplanation
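Assuming this entry is the DALEX project (Python package dalex), a sketch of its global and local explanation workflow; the dataset and model are stand-ins:

```python
import dalex as dx
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

data = load_breast_cancer(as_frame=True)
X, y = data.data, data.target
model = GradientBoostingClassifier().fit(X, y)

# wrap any fitted model behind a uniform explainer interface
explainer = dx.Explainer(model, X, y, label="gbm")

# global view: permutation-based variable importance
print(explainer.model_parts().result.head())

# local view: break-down attribution for a single prediction
print(explainer.predict_parts(X.iloc[[0]]).result.head())
```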
XAI - An eXplainability toolbox for machine learning
Interpretability Methods for tf.keras models with Tensorflow 2.x
A collection of anomaly detection methods (i.i.d./point-based, graph, and time series), including active learning for anomaly detection/discovery, Bayesian rule-mining, and descriptions for diversity/explanation/interpretability. Analysis of incorporating label feedback with ensemble and tree-based detectors. Includes adversarial attacks with Graph Convol…
Public facing deeplift repo
Stanford NLP Python library for understanding and improving PyTorch models via interventions
👋 Xplique is a Neural Networks Explainability Toolbox
A collection of state-of-the-art algorithms for the training, serving and interpretation of Decision Forest models in Keras.
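Assuming this entry is TensorFlow Decision Forests, a minimal training-and-inspection sketch; the toy DataFrame and column names are made up:

```python
import pandas as pd
import tensorflow_decision_forests as tfdf

# toy dataset; in practice load your own pandas DataFrame
df = pd.DataFrame({
    "feature_a": [1.0, 2.0, 3.0, 4.0],
    "feature_b": [0.1, 0.4, 0.2, 0.9],
    "label": [0, 1, 0, 1],
})
train_ds = tfdf.keras.pd_dataframe_to_tf_dataset(df, label="label")

model = tfdf.keras.RandomForestModel()
model.fit(train_ds)

# built-in interpretability: variable importances and a textual model description
print(model.make_inspector().variable_importances())
model.summary()
```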
Locating and editing factual associations in GPT (NeurIPS 2022)
Making decision trees competitive with neural networks on CIFAR10, CIFAR100, TinyImagenet200, Imagenet
💭 Aspect-Based-Sentiment-Analysis: Transformer & Explainable ML (TensorFlow)
Shapley Interactions and Shapley Values for Machine Learning
Fast SHAP value computation for interpreting tree-based models
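This description fits accelerated Tree SHAP implementations, which are generally assumed here to mirror the interface of the shap package's TreeExplainer; the sketch below therefore uses plain shap, with a stand-in model and dataset:

```python
import shap
import xgboost
from sklearn.datasets import load_breast_cancer

data = load_breast_cancer()
model = xgboost.XGBClassifier(n_estimators=50).fit(data.data, data.target)

# TreeExplainer computes exact SHAP values for tree ensembles in polynomial time
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data)
print(shap_values.shape)  # (n_samples, n_features): per-feature contribution to each prediction
```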