This is the official repository for the paper "Xplainer: From X-Ray Observations to Explainable Zero-Shot Diagnosis", which was accepted for publication at MICCAI 2023.

We propose a new approach to explainability for zero-shot diagnosis prediction in the clinical domain. Instead of directly predicting a diagnosis, we prompt the model to classify the existence of descriptive observations that a radiologist would look for on an X-ray scan, and we use the resulting descriptor probabilities to estimate the likelihood of a diagnosis, making our model explainable by design. For this, we leverage BioViL, a pretrained CLIP model for X-rays, and apply contrastive observation-based prompting. We evaluate Xplainer on two chest X-ray datasets, CheXpert and ChestX-ray14, and demonstrate its effectiveness in improving both the performance and the explainability of zero-shot diagnosis.

**Authors:** Chantal Pellegrini, Matthias Keicher, Ege Özsoy, Petra Jiraskova, Rickmer Braren, Nassir Navab
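For a flavor of how this works in code, here is a minimal, self-contained sketch of contrastive observation-based prompting. It is an illustration, not the repository's implementation: the encoders are random stand-ins for BioViL's image and text encoders, the prompt wording is hypothetical, and averaging the descriptor probabilities into a diagnosis score is a simplification.

```python
from typing import Dict, List, Tuple

import numpy as np

rng = np.random.default_rng(0)


def encode_image(image) -> np.ndarray:
    """Random stand-in for BioViL's image encoder (unit-norm embedding)."""
    v = rng.normal(size=128)
    return v / np.linalg.norm(v)


def encode_text(prompt: str) -> np.ndarray:
    """Random stand-in for BioViL's text encoder (unit-norm embedding)."""
    v = rng.normal(size=128)
    return v / np.linalg.norm(v)


def descriptor_probability(image_emb: np.ndarray, descriptor: str) -> float:
    """Contrastive prompting: softmax over the image's similarity to a
    positive and a negative prompt for the same observation."""
    pos = encode_text(f"There are signs of {descriptor}.")     # hypothetical wording
    neg = encode_text(f"There are no signs of {descriptor}.")  # hypothetical wording
    sims = np.array([image_emb @ pos, image_emb @ neg])
    probs = np.exp(sims) / np.exp(sims).sum()  # 2-way softmax
    return float(probs[0])


def diagnosis_probability(image, descriptors: List[str]) -> Tuple[float, Dict[str, float]]:
    """Aggregate descriptor probabilities into a diagnosis likelihood.
    Exposing the per-descriptor probabilities makes the prediction
    explainable by design; the mean is used here only for illustration."""
    image_emb = encode_image(image)
    per_descriptor = {d: descriptor_probability(image_emb, d) for d in descriptors}
    return float(np.mean(list(per_descriptor.values()))), per_descriptor


score, explanation = diagnosis_probability(
    image=None,  # a real pipeline would load an X-ray image here
    descriptors=["enlarged cardiac silhouette", "increased cardiothoracic ratio"],
)
print(score, explanation)
```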
## Citation

```bibtex
@article{pellegrini2023xplainer,
  title={Xplainer: From X-Ray Observations to Explainable Zero-Shot Diagnosis},
  author={Pellegrini, Chantal and Keicher, Matthias and {\"O}zsoy, Ege and Jiraskova, Petra and Braren, Rickmer and Navab, Nassir},
  journal={arXiv preprint arXiv:2303.13391},
  year={2023}
}
```
## Setup

### Clone this repository

```bash
git clone https://github.com/ChantalMP/Xplainer
```
### Install requirements

Use Python 3.7 and install the requirements into a fresh environment:

```bash
conda create -n xplainer_env python=3.7
conda activate xplainer_env
pip install hi-ml-multimodal==0.1.2
pip install -r requirements.txt
```
## Download data

### CheXpert

- download the data from https://stanfordaimi.azurewebsites.net/datasets/23c56a0d-15de-405b-87c8-99c30138950c
- copy 'test_labels.csv' and the image folder 'test' into 'data/chexpert'

### ChestXRay14

- download the data from https://nihcc.app.box.com/v/ChestXray-NIHCC/folder/36938765345 (alternatively, the script 'preprocess_chestxray14' can be used to download it)
- copy 'images', 'test_list.txt' and 'Data_Entry_2017_v2020.csv' into 'data/chestxray14'
- run the preprocessing script:

```bash
python -m preprocess_chestxray14
```

After these steps, the 'data' directory should look as sketched below.
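A minimal sketch of the expected layout (file and folder names are taken from the steps above; the tree is otherwise illustrative):

```
data/
├── chexpert/
│   ├── test_labels.csv
│   └── test/                       # CheXpert test images
└── chestxray14/
    ├── Data_Entry_2017_v2020.csv
    ├── test_list.txt
    └── images/                     # ChestX-ray14 images
```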
## Run inference

```bash
python -m inference --dataset chexpert
```

or

```bash
python -m inference --dataset chestxray14
```
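The inference script reports results on the chosen dataset. If you want to score saved predictions yourself, the snippet below is a generic sketch of per-diagnosis AUROC computation with scikit-learn; the arrays and class names are made-up placeholders, not the script's actual output format.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Placeholder data: one row per image, one column per diagnosis.
y_true = np.array([[1, 0], [0, 1], [1, 1], [0, 0]])                  # ground-truth labels
y_prob = np.array([[0.8, 0.3], [0.2, 0.7], [0.6, 0.9], [0.1, 0.4]])  # predicted probabilities

for i, name in enumerate(["Cardiomegaly", "Pleural Effusion"]):      # example class names
    print(f"{name}: AUROC = {roc_auc_score(y_true[:, i], y_prob[:, i]):.3f}")
```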
## Run demo

```bash
python -m demo
```
## Intended use

This model is intended to be used solely for (i) future research on vision-language processing and (ii) reproducibility of the experimental results reported in the reference paper.