
Computing a human-like reaction time metric from stable recurrent vision models

Lore Goetschalckx*, Lakshmi N. Govindarajan*, Alekh K. Ashok, Aarit Ahuja, David L. Sheinberg, & Thomas Serre


About • Datasets • Computing ξ • SU maps • Training • Citation • License


We quantify the internal recurrent dynamics of a large-scale vision model by devising a metric, ξ, based on subjective logic theory. We use this metric to study temporal human-model alignment on different (and challenging) visual cognition tasks (e.g., the incremental grouping task). More in our paper.

About

This repo contains the PyTorch implementation for our framework to train, analyze, and interpret the dynamics of convolutional recurrent neural networks (cRNNs). We include Jupyter notebooks to demonstrate how to compute ξ, our stimulus-computable, task- and model-agnostic metric that can be compared directly against human RT data. Code to train your own models is included as well.

Paper • Project page

Datasets

Please refer to the data folder for more information on how to download and use our full datasets for the incremental grouping task ("Coco Dots") and the maze task. Note, however, that the notebooks run without any additional downloads: the demonstrations use a mini version of the Coco Dots dataset that is included in this repo.

Computing ξ

As explained in the paper, we model uncertainty (ϵ) explicitly and track its evolution over time in the cRNN. We formulate our RT metric, ξ (denoted ξ_cRNN in the paper), as the area under this uncertainty curve. You can find a demo of how to generate these curves and extract ξ in:

uncertainty_curves.ipynb
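
To give a feel for what the notebook computes, here is a minimal sketch, assuming an evidential (subjective-logic) readout in which per-timestep evidence parameterizes a Dirichlet distribution whose uncertainty mass is K/S. The helper names (`evidential_uncertainty`, `compute_xi`) are illustrative, not the repo's API:

```python
import torch
import torch.nn.functional as F

def evidential_uncertainty(logits, num_classes):
    # Map readout logits to non-negative Dirichlet evidence (softplus)
    # and compute the subjective-logic uncertainty mass u = K / S.
    evidence = F.softplus(logits)
    alpha = evidence + 1.0            # Dirichlet concentration parameters
    S = alpha.sum(dim=-1)             # Dirichlet strength
    return num_classes / S            # uncertainty mass in (0, 1]

def compute_xi(logits_per_timestep, num_classes=2):
    # logits_per_timestep: (T, num_classes) cRNN readouts, one per timestep.
    # xi is the area under the resulting uncertainty curve, approximated
    # here with the trapezoidal rule over unit-spaced timesteps.
    eps_curve = torch.stack([
        evidential_uncertainty(l, num_classes) for l in logits_per_timestep
    ])
    return torch.trapz(eps_curve, dx=1.0)
```

Intuitively, a stimulus the model resolves quickly produces an uncertainty curve that drops early, and hence a small ξ; a harder stimulus keeps uncertainty high for longer and yields a larger ξ.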

Spatial uncertainty maps

Specifically for the incremental grouping task, we introduced spatial uncertainty maps to probe a cRNN's visual strategies. For a given outline stimulus, one dot (indicated in white) is kept fixed while the position of the other is varied. Each position in the map has a value corresponding to the ξ for the respective dot configuration (fixed dot + other dot). You can find a demo of how to produce such maps in:

spatial_uncertainty_maps.ipynb
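
As a rough illustration of the procedure, the sketch below sweeps the second dot over a grid of candidate positions and records ξ for each placement. `render_dots` is a hypothetical stimulus generator and `compute_xi` refers to the sketch above; the notebook's actual code may differ:

```python
import numpy as np

def spatial_uncertainty_map(model, outline, fixed_dot, positions, stride=8):
    # Sweep the movable dot over candidate (y, x) positions, regenerate
    # the stimulus for each placement, and store xi in a downsampled map.
    H, W = outline.shape[-2:]
    su_map = np.full((H // stride, W // stride), np.nan)
    for (y, x) in positions:
        stimulus = render_dots(outline, fixed_dot, (y, x))  # hypothetical helper
        logits_per_t = model(stimulus)                      # (T, num_classes)
        su_map[y // stride, x // stride] = compute_xi(logits_per_t).item()
    return su_map
```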

Training

Below is an example of how to use train.py for the incremental grouping task.

python train.py cocodots_train cocodots_val --batch_size 32 --data_root ./data/coco --name mymodel --model hgru --kernel_size 5 --timesteps 40 --loss_fn EDL --annealing_step 16 --algo rbp --penalty --penalty_gamma 100

Note that this assumes you have downloaded the MS COCO training and validation images into ./data/coco and our Coco Dots annotation files into ./data. More details in the data folder. If you'd like to try out the code without downloading more data, you can play around with cocodots_val_mini, which is already included in this repo for demo purposes.
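
For reference, the --loss_fn EDL and --annealing_step flags point to an evidential deep learning objective. Below is a hedged sketch in the style of Sensoy et al. (2018), with a KL regularizer toward a uniform Dirichlet that is annealed over training; the repo's exact implementation may differ:

```python
import torch
import torch.nn.functional as F

def kl_to_uniform(alpha):
    # KL( Dir(alpha) || Dir(1, ..., 1) ): penalizes evidence assigned
    # beyond what a uniform (maximally uncertain) Dirichlet would hold.
    K = alpha.shape[-1]
    S = alpha.sum(dim=-1, keepdim=True)
    return (torch.lgamma(S.squeeze(-1))
            - torch.lgamma(torch.tensor(float(K)))
            - torch.lgamma(alpha).sum(dim=-1)
            + ((alpha - 1.0) * (torch.digamma(alpha) - torch.digamma(S))).sum(dim=-1))

def edl_loss(logits, targets, step, annealing_step=16, num_classes=2):
    # Evidential MSE loss: expected squared error under the Dirichlet
    # plus its variance term, with an annealed KL regularizer.
    alpha = F.softplus(logits) + 1.0
    S = alpha.sum(dim=-1, keepdim=True)
    p = alpha / S
    y = F.one_hot(targets, num_classes).float()
    err = ((y - p) ** 2).sum(dim=-1)
    var = (p * (1.0 - p) / (S + 1.0)).sum(dim=-1)
    # Remove evidence for the true class before the KL term, so only
    # misleading evidence is penalized.
    alpha_tilde = y + (1.0 - y) * alpha
    anneal = min(1.0, step / annealing_step)
    return (err + var + anneal * kl_to_uniform(alpha_tilde)).mean()
```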

Citation

If you'd like to use any of the code, figures, weights, or data in this repo for your own work, please cite our paper:

@misc{goetschalckx2023computing,
      title={Computing a human-like reaction time metric from stable recurrent vision models}, 
      author={Lore Goetschalckx and Lakshmi Narasimhan Govindarajan and Alekh Karkada Ashok and Aarit Ahuja and David L. Sheinberg and Thomas Serre},
      year={2023},
      eprint={2306.11582},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

License

MIT

