This is the codebase for the CVPR 2023 paper Adversarial Counterfactual Visual Explanations.
Install our environment through Anaconda:

```shell
conda env create -f environment.yaml
conda activate ace
```
To use ACE, you must download the pretrained DDPM models and extract them to a folder of your choice, `/path/to/models`. We provide links and instructions to download all models below.
Download links:
- CelebA models
- CelebA HQ models
- BDDOIA/100k models
- ImageNet:
  - Diffusion model: download the 256x256 diffusion (not class conditional) DDPM model through the openai/guided-diffusion repo.
  - Classifier: we used the pretrained models provided by PyTorch.
- Evaluation models
To generate counterfactual explanations, use the `main.py` script. We added a comment to every flag so you know what each one does. In addition, the `script` folder contains several scripts to generate all counterfactual explanations using our proposed method, ACE.
We follow the same folder ordering as DiME. Please see all details in DiME's repository. Similarly, we took advantage of their multi-chunk processing -- more info in DiME's repo.
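For intuition, the multi-chunk idea can be sketched as follows. This is a toy illustration of splitting a dataset across independent jobs; the function name and interface are our own, not DiME's actual code:

```python
# Toy sketch of multi-chunk processing: split dataset indices into
# contiguous chunks so several jobs can each process one chunk in parallel.
def chunk_indices(num_samples, num_chunks, chunk_id):
    # chunk_id is in [0, num_chunks); chunk sizes differ by at most one.
    per = num_samples // num_chunks
    extra = num_samples % num_chunks
    start = chunk_id * per + min(chunk_id, extra)
    size = per + (1 if chunk_id < extra else 0)
    return list(range(start, start + size))

print(chunk_indices(10, 3, 1))  # -> [4, 5, 6]
```

Each job would then run the counterfactual generation only on its own index range, and the chunks together cover the whole dataset exactly once.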
To reduce the GPU memory burden, we implemented a checkpoint strategy that enables counterfactual production on a reduced GPU setup. `--attack_joint_checkpoint True` enables this mode. Please check this repo for a nice explanation and visualization of gradient checkpointing. The flag `--attack_checkpoint_backward_steps n` runs `n` DDPM iterations before computing the backward gradients. We strongly recommend a higher `--attack_checkpoint_backward_steps` value with a batch size of 1 over `--attack_checkpoint_backward_steps 1` with a larger batch size!
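To illustrate why checkpointing trades compute for memory, here is a minimal, self-contained sketch of the idea on a toy scalar chain. Nothing here is the actual DDPM code; all names are illustrative:

```python
# Sketch of activation checkpointing: instead of storing every intermediate
# activation of a long chain, store one every `n` steps and recompute the
# missing ones from the nearest checkpoint during the backward pass.

def f(x):            # one "iteration" of the chain (toy stand-in)
    return 2.0 * x + 1.0

def df(x):           # its derivative with respect to the input
    return 2.0

def grad_with_checkpoints(x0, steps, n):
    # Forward: keep only every n-th activation (the checkpoints).
    ckpts = {0: x0}
    x = x0
    for t in range(steps):
        x = f(x)
        if (t + 1) % n == 0:
            ckpts[t + 1] = x
    # Backward: recompute each segment from its nearest stored checkpoint.
    grad = 1.0                      # d(output)/d(output)
    for t in reversed(range(steps)):
        start = (t // n) * n        # nearest checkpoint at or before step t
        x = ckpts[start]
        for _ in range(t - start):  # recompute forward to step t's input
            x = f(x)
        grad *= df(x)               # chain rule through step t
    return grad

# With f(x) = 2x + 1 applied `steps` times, d(out)/d(x0) = 2**steps.
print(grad_with_checkpoints(x0=0.5, steps=10, n=4))  # -> 1024.0
```

Larger `n` means fewer stored activations (less memory) at the price of more recomputation, which is why a high `--attack_checkpoint_backward_steps` with batch size 1 fits on smaller GPUs.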
When all counterfactual explanations have been processed, we store both the counterfactual and the pre-explanation. You can easily re-process the pre-explanations using the `postprocessing.py` script.
We provide a generic code base to evaluate counterfactual explanation methods. All evaluation script filenames begin with `compute`. Please check the arguments of each individual script.
Notes:
- All evaluations are based on the file organization created with our file system.
- `compute_FID` and `compute_sFID` are bash scripts. The first argument is the `output_path` as in the main script; the second is the experiment name. We added a third argument, a temporary folder where everything is computed, which is useful when testing multiple models at the same time.
If you found our code useful, please cite our work:
```bibtex
@inproceedings{Jeanneret_2023_CVPR,
    author    = {Jeanneret, Guillaume and Simon, Lo\"ic and Jurie, Fr\'ed\'eric},
    title     = {Adversarial Counterfactual Visual Explanations},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2023}
}
```
We based our repository on our previous work, Diffusion Models for Counterfactual Explanations (DiME).