A Tool for Predicting and Visualising Local Brain Activity


A tool for predicting and visualizing fMRI brain activity during bidirectional conversations, either human-human or human-machine. The current version of the "BrainPredict" interface executable is compiled for Linux; however, the source code is provided as a Qt project in C++17 so it can be built on other systems. The prediction module is implemented in Python 3 and can be executed from the terminal or through the interface. The only conversation languages supported by the current version are French and English. This project was carried out in the context of the PhysSocial project: https://amidex.hypotheses.org/2051/

Requirements

  • Python>=3.6
  • OpenFace (https://github.com/TadasBaltrusaitis/OpenFace) is required to compute facial features from videos.
  • SPPAS (http://www.sppas.org/) is required for automatic annotation and segmentation of speech (a copy is included in the source code of the prediction module).
  • The FFmpeg multimedia framework:
      sudo apt-get install ffmpeg
  • Python packages:
      pip install -r requirements.txt
  • spaCy models (a quick load check is sketched after this list):
        python -m spacy download fr_core_news_sm
        python -m spacy download en_core_web_sm
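
As an optional sanity check (a minimal sketch, not part of the tool itself), the following confirms that both spaCy models are installed and loadable:

# Optional check: both spaCy models should load without errors.
import spacy

for model in ("fr_core_news_sm", "en_core_web_sm"):
    nlp = spacy.load(model)          # raises OSError if the model is missing
    doc = nlp("Bonjour, hello.")
    print(f"{model}: OK ({len(doc)} tokens)")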

Demo: using the Qt interface

The working directory must be specified and must contain an Inputs folder with speech, eyetracking, and video subfolders, as sketched below.
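
A plausible layout (folder names from the description above; file names would depend on your recordings):

  Inputs/
  ├── speech/        # audio recordings of the participant and the interlocutor
  ├── eyetracking/   # eye-tracking data of the participant
  └── video/         # video of the interlocutor

First, install the following requirements: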

  sudo apt-get install qtmultimedia5-dev libqt5multimediawidgets5 \
      libqt5multimedia5-plugins libqt5multimedia5

Then run the executable after setting the execution rights:

  chmod +x BrainPredict
  ./BrainPredict

Demo: using the command line

  • To run a demo, you need a video file of the interlocutor, audio recordings of both the participant and the interlocutor, and an eye-tracking file of the participant.

  • An example is provided in the folder "Demo". To run it:

# Generate time series
python src/generate_time_series.py -rg 3 4 7 8 9 10 21 -in demo -ofp 'openface path'

# Make predictions
python src/predict.py -rg 3 4 7 8 9 10 21 -in demo -t r

# Generate a time-series video from the obtained predictions
python src/animation.py -in demo

# Use visbrain to visualize the predictions in the brain
python src/visualization.py -in demo
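
If you prefer to drive the whole demo pipeline from a single script, the minimal Python sketch below simply chains the four commands above with subprocess; the region IDs and input directory are taken from the demo, and the OpenFace path is a placeholder to adjust for your setup.

# Sketch: run the four demo steps in order, stopping at the first failure.
import subprocess

REGIONS = ["3", "4", "7", "8", "9", "10", "21"]
INPUT_DIR = "demo"
OPENFACE_PATH = "/path/to/OpenFace"  # placeholder: point to your OpenFace install

steps = [
    ["python", "src/generate_time_series.py",
     "-rg", *REGIONS, "-in", INPUT_DIR, "-ofp", OPENFACE_PATH],
    ["python", "src/predict.py", "-rg", *REGIONS, "-in", INPUT_DIR, "-t", "r"],
    ["python", "src/animation.py", "-in", INPUT_DIR],
    ["python", "src/visualization.py", "-in", INPUT_DIR],
]

for cmd in steps:
    print("Running:", " ".join(cmd))
    subprocess.run(cmd, check=True)  # raises CalledProcessError on failure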

  • The required arguments:

    --regions REGIONS [REGIONS ...], -rg REGIONS [REGIONS ...]
                            IDs of the brain areas to predict (see brain_areas.tsv)
    --type TYPE, -t TYPE    conversation type (human or robot)
    --pred_module_path PRED_MODULE_PATH, -pmp PRED_MODULE_PATH
                            path of the prediction module
    --openface_path OPENFACE_PATH, -ofp OPENFACE_PATH
                            path of OpenFace
    --input_dir INPUT_DIR, -in INPUT_DIR
                            path of the input directory
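
The short and long flags are interchangeable; for instance, the prediction step above can also be written as follows (here -t r presumably selects the robot, i.e. human-machine, conversation type):

  python src/predict.py --regions 3 4 7 8 9 10 21 --input_dir demo --type r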

Citation

@inproceedings{hmamouche2020brainpredict,
  title={Brainpredict: a tool for predicting and visualising local brain activity},
  author={Hmamouche, Youssef and Prevot, Laurent and Ochs, Magalie and Chaminade, Thierry},
  booktitle={Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
  pages={11--16},
  year={2020}
}
