[TMLR2024] LEA: Learning Latent Embedding Alignment Model for fMRI Decoding and Encoding

Xuelin Qian, Yikai Wang, Xinwei Sun, Yanwei Fu, Xiangyang Xue, Jianfeng Feng

Overview

We introduce LEA, a unified framework that addresses both fMRI decoding and encoding. We train two latent spaces to represent and reconstruct fMRI signals and visual images, respectively. By aligning these two latent spaces, we can seamlessly translate between fMRI signals and visual stimuli: LEA recovers visual stimuli from fMRI signals and predicts brain activity from images. LEA outperforms existing methods on multiple fMRI decoding and encoding benchmarks.

Environment Setup

Create and activate a conda environment named lea from our requirements.yaml:

conda env create -f requirements.yaml
conda activate lea

Data Preparation

Please download the GOD and BOLD5000 datasets from MinD-Vis and organize them in the /dataset folder as shown below.

/dataset
┣ 📂 Kamitani
┃   ┣ 📂 npz
┃   ┃   ┗ 📜 sbj_1.npz
┃   ┃   ┗ 📜 sbj_2.npz
┃   ┃   ┗ 📜 sbj_3.npz
┃   ┃   ┗ 📜 sbj_4.npz
┃   ┃   ┗ 📜 sbj_5.npz
┃   ┃   ┗ 📜 images_256.npz
┃   ┃   ┗ 📜 imagenet_class_index.json
┃   ┃   ┗ 📜 imagenet_training_label.csv
┃   ┃   ┗ 📜 imagenet_testing_label.csv

┣ 📂 BOLD5000
┃   ┣ 📂 BOLD5000_GLMsingle_ROI_betas
┃   ┃   ┣ 📂 py
┃   ┣ 📂 BOLD5000_Stimuli
┃   ┃   ┣ 📂 Image_Labels
┃   ┃   ┣ 📂 Scene_Stimuli
┃   ┃   ┣ 📂 Stimuli_Presentation_Lists
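
As a quick sanity check after downloading, you can peek inside one of the GOD subject files. The snippet below is only a minimal sketch: it lists whatever arrays the file happens to contain, since the exact key names inside the .npz files come from the MinD-Vis release and are not documented here.

# Sanity-check sketch: list the arrays stored in one GOD subject file.
# (No assumptions about key names; we just print whatever is in the file.)
import numpy as np

data = np.load('dataset/Kamitani/npz/sbj_1.npz')
for key in data.files:
    print(key, data[key].shape, data[key].dtype)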

Checkpoints Download

The pre-trained weights on the Human Connectome Project (HCP) dataset can be downloaded from the MinD-Vis repository. After downloading, put them into the /pretrains folder.

The checkpoints for fMRI reconstruction and image reconstruction can be downloaded from Google Drive (coming soon); place them in the /checkpoints folder.

All checkpoints should be organized as follows:

/checkpoints
┣ 📂 BOLD_sbj1
┃   ┗ 📜 checkpoint.pth
┣ 📂 BOLD_sbj2
┃   ┗ 📜 checkpoint.pth
┣ ...
┣ 📂 GOD_sbj1
┃   ┗ 📜 checkpoint.pth
┣ 📂 GOD_sbj2
┃   ┗ 📜 checkpoint.pth
┣ ...
┣ 📂 ImageDecoder_MaskGIT
┃   ┗ 📜 checkpoint_GOD.pth
┃   ┗ 📜 checkpoint_BOLD.pth

/pretrains
┣ 📂 BOLD
┃   ┗ 📜 fmri_encoder.pth
┣ 📂 GOD
┃   ┗ 📜 fmri_encoder.pth
┣ 📜 MaskGIT_ImageNet256_checkpoint.pth
┣ 📜 MaskGIT_Trans_ImageNet256_checkpoint.pth
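
Before running inference, it can help to confirm that the files above are in place. The following is a small sketch based only on the layout shown in this README; list the subject folders you actually downloaded.

# Sketch: check that the checkpoints and pre-trained weights described above exist.
# Adjust the subject folders to the ones you plan to evaluate.
from pathlib import Path

expected = [
    'checkpoints/GOD_sbj1/checkpoint.pth',
    'checkpoints/ImageDecoder_MaskGIT/checkpoint_GOD.pth',
    'pretrains/GOD/fmri_encoder.pth',
    'pretrains/MaskGIT_ImageNet256_checkpoint.pth',
    'pretrains/MaskGIT_Trans_ImageNet256_checkpoint.pth',
]
for path in expected:
    print(('OK      ' if Path(path).is_file() else 'MISSING ') + path)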

Inference

1. GOD Dataset

Run python LEA_GOD.py to reconstruct images from fMRI signals and predict fMRI signals from visual stimuli.

To evaluate different subjects, some hyper-parameters and paths in two files need to be modified manually. In /LEA_GOD.py:

367 |    ckpt_encoder = 'PATH_to_fmri_encoder_ckpt' 
368 |    ckpt_decoder = 'PATH_to_image_decoder_ckpt'
369 |    args.cfg_file = [
370 |        'PATH_to_fmri_encoder_cfg',
371 |        'PATH_to_image_decoder_cfg'
372 |    ]

and in /configs/fMRI_TransAE_GOD.yaml:

19 |  num_voxel: 4466 # sbj_1: 4466, sbj_2: 4404, sbj_3: 4643, sbj_4: 4133, sbj_5: 4370
20 |  roi_patch: [1004, 801, 476, 588, 431, 249, 157, 760] # sbj1: [1004, 801, 476, 588, 431, 249, 157, 760]
21 |     # sbj2: [757, 727, 603, 416, 372, 124, 576, 829]
22 |     # sbj3: [872, 826, 605, 630, 770, 262, 306, 372]
23 |     # sbj4: [719, 652, 676, 597, 494, 236, 308, 451]
24 |     # sbj5: [659, 676, 649, 729, 661, 301, 246, 449]
25 |
26 |  sub: ['sbj_1'] # ['sbj_1', 'sbj_2', 'sbj_3', 'sbj_4', 'sbj_5']
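
If you evaluate several subjects, these per-subject edits can also be scripted. The sketch below copies the numbers from the comments above; note that the flat key access (cfg['num_voxel'], ...) is an assumption about how fMRI_TransAE_GOD.yaml is nested, so adapt it to the actual structure of the file.

# Sketch: patch configs/fMRI_TransAE_GOD.yaml for a given GOD subject.
# The per-subject numbers are copied from the comments above; the flat key
# access below is an assumption about the YAML layout, adjust if nested.
import yaml

SUBJECTS = {
    'sbj_1': (4466, [1004, 801, 476, 588, 431, 249, 157, 760]),
    'sbj_2': (4404, [757, 727, 603, 416, 372, 124, 576, 829]),
    'sbj_3': (4643, [872, 826, 605, 630, 770, 262, 306, 372]),
    'sbj_4': (4133, [719, 652, 676, 597, 494, 236, 308, 451]),
    'sbj_5': (4370, [659, 676, 649, 729, 661, 301, 246, 449]),
}

def set_subject(subject, cfg_path='configs/fMRI_TransAE_GOD.yaml'):
    num_voxel, roi_patch = SUBJECTS[subject]
    with open(cfg_path) as f:
        cfg = yaml.safe_load(f)
    cfg['num_voxel'] = num_voxel
    cfg['roi_patch'] = roi_patch
    cfg['sub'] = [subject]
    with open(cfg_path, 'w') as f:
        yaml.safe_dump(cfg, f, default_flow_style=None)

# Example: set_subject('sbj_2'), then run python LEA_GOD.py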

The outputs will be stored in the /results folder under the directory where the fMRI reconstruction checkpoint is located.

2. BOLD5000 Dataset

Run python LEA_BOLD.py for inference; the procedure mirrors that for the GOD dataset.

Visualizations

Qualitative results of image-fMRI-image reconstruction on subject CSI-1 of the BOLD5000 dataset.

Acknowledgments

Our fMRI autoencoder implementation is based on MAE and MinD-Vis, and our image autoencoder implementation is based on MaskGIT. We extend our gratitude to the authors for their excellent work and for publicly sharing their code!

Citation

@article{qian2024lea,
  title={LEA: Learning Latent Embedding Alignment Model for fMRI Decoding and Encoding},
  author={Qian, Xuelin and Wang, Yikai and Sun, Xinwei and Fu, Yanwei and Xue, Xiangyang and Feng, Jianfeng},
  journal={Transactions on Machine Learning Research},
  year={2024},
  url={https://openreview.net/forum?id=SUMtDJqicd}
}

@article{qian2023joint,
  title={Joint fMRI Decoding and Encoding with Latent Embedding Alignment},
  author={Qian, Xuelin and Wang, Yikai and Fu, Yanwei and Sun, Xinwei and Xue, Xiangyang and Feng, Jianfeng},
  journal={arXiv preprint arXiv:2303.14730},
  year={2023}
}

Contact

Any questions or discussions are welcome!

Xuelin Qian (xuelinq92@gmail.com)
