We have designed and developed an interactive system that allows users to experiment with deep learning image classifiers and explore their robustness and sensitivity. Selected areas of an image can be removed in real time with classical computer vision inpainting algorithms, so users can ask a variety of "what if" questions by experimentally modifying images and seeing how the deep learning model reacts. The system also computes class activation maps for any selected class, which highlight the semantic regions of an image that are most important to the model's classification. The system runs entirely in the browser using TensorFlow.js, React, and SqueezeNet. An advanced inpainting option is also available via a server running the PatchMatch algorithm from the GIMP Resynthesizer plugin.
*The baseball player is correctly classified even when the ball, glove, and base are removed.*

*The dock is incorrectly classified when the masts of a sailboat are removed.*
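As a rough illustration of the in-browser classification path described above, the sketch below loads a converted SqueezeNet-style model with TensorFlow.js and classifies an image element. The model URL, input size, and preprocessing are placeholders for illustration and may not match the code shipped in this repository.

```ts
import * as tf from '@tensorflow/tfjs';

// Placeholder URL: the model bundled with the app may live elsewhere.
const MODEL_URL = 'https://example.com/squeezenet/model.json';

async function classify(img: HTMLImageElement): Promise<void> {
  const model = await tf.loadLayersModel(MODEL_URL);

  // Convert the image element to a tensor, resize it to the 227x227 input
  // SqueezeNet expects, scale to [0, 1], and add a batch dimension.
  // (The app's real preprocessing, e.g. mean subtraction, may differ.)
  const input = tf.tidy(() =>
    tf.image
      .resizeBilinear(tf.browser.fromPixels(img), [227, 227])
      .toFloat()
      .div(255)
      .expandDims(0)
  );

  // Forward pass; read back class probabilities and report the top class.
  const probs = model.predict(input) as tf.Tensor;
  const values = Array.from(await probs.data());
  const best = values.indexOf(Math.max(...values));
  console.log(`top class index: ${best}, p = ${values[best].toFixed(3)}`);

  tf.dispose([input, probs]);
}
```

The class activation maps mentioned above are typically obtained by weighting the final convolutional feature maps by the classifier weights of the selected class and upsampling the result onto the image.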
Download or clone this repository:

```bash
git clone https://github.com/poloclub/interactive-classification.git
```

Within the cloned repo, install the required packages with yarn:

```bash
yarn
```

To run, type:

```bash
yarn start
```
The following steps are needed to set up PatchMatch inpainting:
- Clone the Resynthesizer repository and follow its instructions for building the project.
- Find the `libresynthesizer.a` library in the `lib` folder and copy it to the `inpaint` folder in this repository.
- Run `gcc resynth.c -L. -lresynthesizer -lm -lglib-2.0 -o prog` (you may have to install glib2.0 first) to generate the `prog` executable.
- You can now run `python3 inpaint_server.py`, and PatchMatch will be used as the inpainting algorithm. A sketch of how the front end might call this server follows below.
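Once the server is running, the front end needs to send the selected image and mask to it and display the inpainted result. The sketch below, written as a browser-side request, is only an illustration of that round trip: the port, route, and payload format are assumptions, not the actual interface of `inpaint_server.py`.

```ts
// Hypothetical request shape: the actual route, port, and payload used by
// inpaint_server.py may differ.
async function requestInpaint(imageDataUrl: string, maskDataUrl: string): Promise<string> {
  const response = await fetch('http://localhost:5000/inpaint', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ image: imageDataUrl, mask: maskDataUrl }),
  });
  if (!response.ok) {
    throw new Error(`Inpainting server returned ${response.status}`);
  }
  // Assume the server replies with the inpainted image encoded as a data URL.
  const { inpainted } = await response.json();
  return inpainted;
}
```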
MIT License. See `LICENSE.md`.
For questions or support, please open an issue.