XWhy: eXplain Why with a SMILE -- Statistical Model-agnostic Interpretability with Local Explanations
Machine learning is currently undergoing an explosion in capability, popularity, and sophistication. However, one of the major barriers to widespread acceptance of machine learning (ML) is trustworthiness: most ML models operate as black boxes, their inner workings opaque, and it can be difficult to trust their conclusions without understanding how those conclusions are reached. Explainability is therefore a key aspect of improving trustworthiness: the ability to better understand, interpret, and anticipate the behaviour of ML models. To this end, we propose SMILE, a new method that builds on previous approaches by making use of statistical distance measures to improve explainability while remaining applicable to a wide range of input data domains.
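The core idea can be sketched in a few lines: perturb the instance being explained, weight each perturbation by a statistical distance to the original instance (here the Wasserstein distance, as one example of such a measure), and fit a weighted linear surrogate whose coefficients serve as local feature importances. Note that the function and parameter names below are illustrative only and are not the xwhy package API:

```python
import numpy as np
from scipy.stats import wasserstein_distance

def smile_style_explanation(f, x, n_samples=500, scale=0.5, rng=None):
    """Conceptual sketch of a SMILE-style local explanation.

    Weights perturbed samples by a statistical distance (Wasserstein)
    to `x` and fits a weighted linear surrogate to the black-box `f`.
    Illustrative only; not the xwhy package API.
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(x, dtype=float)
    # Perturb the instance with Gaussian noise.
    Z = x + rng.normal(scale=scale, size=(n_samples, x.size))
    y = np.array([f(z) for z in Z])  # query the black-box model
    # Statistical-distance-based weights: closer samples weigh more.
    d = np.array([wasserstein_distance(z, x) for z in Z])
    w = np.exp(-((d / (d.std() + 1e-12)) ** 2))
    # Weighted least squares for the local linear surrogate.
    A = np.hstack([Z, np.ones((n_samples, 1))])
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * sw, y[:, None] * sw, rcond=None)
    return coef[:-1, 0]  # per-feature local importance

# Example: explain a simple nonlinear model around a point.
f = lambda z: 3 * z[0] - 2 * z[1] + 0.1 * z[0] * z[1]
importance = smile_style_explanation(f, np.array([1.0, -1.0]), rng=0)
```

For this toy model the surrogate coefficients approximate the local gradient: a positive importance for the first feature and a negative one for the second.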
The SMILE approach has been extended for Point Cloud Explainability. Please check out the examples here.
pip install xwhy
[1] Aslansefat, K., Hashemian, M., Walker, M., Akram, M. N., Sorokos, I., & Papadopoulos, Y. (2023). Explaining black boxes with a SMILE: Statistical Model-agnostic Interpretability with Local Explanations. IEEE Software. DOI: 10.1109/MS.2023.3321282, Arxiv, WorkTribe.
If you use X-Why in your research, we would appreciate a citation to our paper as follows:
@article{aslansefat2023explaining,
title={Explaining black boxes with a SMILE: Statistical Model-agnostic Interpretability with Local Explanations},
author={Aslansefat, Koorosh and Hashemian, Mojgan and Walker, Martin and Akram, Mohammed Naveed and Sorokos, Ioannis and Papadopoulos, Yiannis},
journal={IEEE Software},
year={2023},
publisher={IEEE}
}
This project is supported by the Secure and Safe Multi-Robot Systems (SESAME) H2020 Project under Grant Agreement 101017258.
This work is also supported by a Post-Doctoral Enrichment Award from the Alan Turing Institute.
If you are interested in contributing to this project, please check the contribution guidelines.