Generate Diverse Counterfactual Explanations for any machine learning model.
moDel Agnostic Language for Exploration and eXplanation (JMLR 2018; JMLR 2021)
[CONTRIBUTORS WELCOME] Generalized Additive Models in Python
Shapley Interactions and Shapley Values for Machine Learning
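Several of the libraries listed here (e.g. the Shapley interaction and Shapley value tooling above) build on the same game-theoretic quantity. As a library-free sketch of that underlying idea, here is exact Shapley value computation for a tiny coalition game; the players, weights, and value function are illustrative assumptions, and real libraries approximate this exponential-cost sum rather than enumerating it:

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, players):
    """Exact Shapley values for a small player set (exponential cost)."""
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for k in range(n):
            for S in combinations(others, k):
                # Standard Shapley weight for a coalition of size k.
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[p] += weight * (value_fn(set(S) | {p}) - value_fn(set(S)))
    return phi

# Hypothetical additive game: a coalition's value is a weighted sum,
# so each player's Shapley value should equal its own weight.
weights = {"a": 1.0, "b": 2.0, "c": -0.5}
v = lambda S: sum(weights[p] for p in S)
phi = shapley_values(v, ["a", "b", "c"])
```

For additive games like this toy example the Shapley values recover the weights exactly; the interesting cases, which libraries such as those above handle, are non-additive models where feature interactions matter.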
A Python library for Interpretable Machine Learning in Text Classification using the SS3 model, with easy-to-use visualization tools for Explainable AI
PyTorch code for ETSformer: Exponential Smoothing Transformers for Time-series Forecasting
Concept Bottleneck Models, ICML 2020
An open-source library for the interpretability of time series classifiers
TalkToModel gives anyone the power of XAI through natural language conversations 💬!
Material related to my book Intuitive Machine Learning. Some of this material is also featured in my new book Synthetic Data and Generative AI.
The code of NeurIPS 2021 paper "Scalable Rule-Based Representation Learning for Interpretable Classification" and TPAMI paper "Learning Interpretable Rules for Scalable Data Representation and Classification"
ProtoTrees: Neural Prototype Trees for Interpretable Fine-grained Image Recognition, published at CVPR2021
Implementation of Layerwise Relevance Propagation for heatmapping "deep" layers
Implementation of the paper "Shapley Explanation Networks"
Model Agnostic Counterfactual Explanations
Information Bottlenecks for Attribution
PIP-Net: Patch-based Intuitive Prototypes Network for Interpretable Image Classification (CVPR 2023)
Diffusion attentive attribution maps for interpreting image-to-image attention in Stable Diffusion.
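Counterfactual explanations, which several repositories in this list generate (diverse counterfactuals, model-agnostic counterfactuals), answer "what minimal change to the input would flip the model's prediction?". As a hedged, library-free sketch of that idea (the toy classifier, step size, and search heuristic below are illustrative assumptions, not the API of any library listed here):

```python
# Illustrative sketch of counterfactual explanation search
# (not the API of any specific counterfactual library).

def predict(x):
    # Hypothetical black-box binary classifier: class 1 iff weighted sum > 0.
    weights = [0.8, -0.5, 0.3]
    return 1 if sum(w * xi for w, xi in zip(weights, x)) > 0 else 0

def find_counterfactual(x, step=0.25, max_iters=100):
    """Greedily perturb one feature at a time until the prediction flips."""
    target = 1 - predict(x)          # the class we want the model to output
    cf = list(x)
    for _ in range(max_iters):
        for i in range(len(cf)):
            for delta in (step, -step):
                cand = cf[:i] + [cf[i] + delta] + cf[i + 1:]
                if predict(cand) == target:
                    return cand      # single-step change that flips the model
        cf[0] += step                # no single step flips: walk and retry
    return None                      # no counterfactual found within budget

x = [-0.2, 0.4, 0.1]                 # toy input the model labels class 0
cf = find_counterfactual(x)          # a nearby input the model labels class 1
```

Real libraries treat this as an optimization over distance, sparsity, and plausibility constraints, and can return multiple diverse counterfactuals rather than the first flip found.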