Replication package for the KNOSYS paper titled "An Objective Metric for Explainable AI: How and Why to Estimate the Degree of Explainability".
Open and extensible benchmark for XAI methods
Semantic Meaningfulness: Evaluating counterfactual approaches for real-world plausibility
Repository for the ReVel framework to measure local-linear explanations for black-box models
ConsisXAI implements a technique for evaluating global machine learning explainability (XAI) methods based on feature subset consistency.
Code for evaluating saliency maps with classification metrics.
Research on AutoML and Explainability.
This repository contains the code for the paper "Balancing Privacy and Explainability in Federated Learning".
Scripts and trained models from our paper: M. Ntrougkas, V. Mezaris, I. Patras, "P-TAME: Explain Any Image Classifier with Trained Perturbations", IEEE Open Journal of Signal Processing, 2025. DOI:10.1109/OJSP.2025.3568756.
A framework for evaluating explainable AI (XAI) methods in drug discovery using multiple machine learning architectures. This repository implements three distinct model architectures (CNN, Random Forest, and RGCN) and provides a hierarchical four-tier evaluation framework for assessing the quality and reliability of their explanations.