Code for our ICML 2025 paper: "MindAligner: Explicit Brain Functional Alignment for Cross-Subject Brain Visual Decoding with Limited Data"
✨ Feel free to give us a star! ✨
Use our `setup.sh` to prepare the environment:

```bash
bash setup.sh
```
In our experiments, we use NSD for both training and evaluation.
- Agree to the Natural Scenes Dataset's Terms and Conditions and fill out the NSD Data Access form.
- Download the processed dataset from here and unzip it into the `./dataset` folder.
- Run the following commands to automatically obtain the similar image pairs used by MindAligner:

  ```bash
  cd sim_dataset
  python process_dataset.py
  ```

  The preprocessed features will be generated automatically in the `./sim_dataset/v2subj1257` folder.
- Download the pretrained decoding model from here into the `./decoding_model` directory. We only use `final_multisubject_subj0{args.n_subj}/last.pth`, so you only need to download the relevant checkpoints. Here, `subj_id = {1, 2, 5, 7}`.
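To sanity-check your download, the checkpoint naming pattern quoted above can be expanded per subject. A minimal sketch (the helper name and base directory default are illustrative, not part of the repo):

```python
import os

def decoding_ckpt_path(n_subj: int, base_dir: str = "./decoding_model") -> str:
    """Build the expected path of the one checkpoint needed for subject n_subj,
    following the final_multisubject_subj0{n}/last.pth pattern from the README."""
    return os.path.join(base_dir, f"final_multisubject_subj0{n_subj}/last.pth")

# The four NSD subjects used in the paper.
for subj_id in (1, 2, 5, 7):
    print(decoding_ckpt_path(subj_id))
```

Checking that these four files exist before training avoids downloading the full set of checkpoints.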
All our models are trained on a single NVIDIA V100 GPU.

```bash
python train.py --n_subj 1 --k_subj 2 --num_sessions 1
```
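The command above trains one source→target subject pair. To cover every ordered pair of the four subjects, the commands can be enumerated as below (running all twelve pairs, rather than just one, is an assumption; the flag names come from the example command):

```python
from itertools import permutations

# One training command per ordered (source, target) pair of the four NSD subjects.
commands = [
    f"python train.py --n_subj {n} --k_subj {k} --num_sessions 1"
    for n, k in permutations((1, 2, 5, 7), 2)
]
for cmd in commands:
    print(cmd)
```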
To test with our pretrained models, please download the weights from here (Hugging Face) into the `./ckpts` folder.
To test MindAligner and generate results:

```bash
python recon.py --n_subj 1 --k_subj 2
```

All files used for evaluation will be stored in `.evals/1->2`. The generated images can be found in `.evals/out_plot`.
To obtain the final enhanced results:

```bash
python enhance.py --n_subj 1 --k_subj 2
```

All files used for evaluation will be stored in `.evals/1->2`.
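The evaluation directory name encodes the source→target subject pair, as in `.evals/1->2` above. A small illustrative helper (not part of the repo) for locating the outputs of any pair:

```python
def eval_dir(n_subj: int, k_subj: int) -> str:
    """Build the evaluation output directory for a source->target subject pair,
    following the ".evals/1->2" naming shown in the README."""
    return f".evals/{n_subj}->{k_subj}"

print(eval_dir(1, 2))
```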
A huge thank you to Zhouheng Yao for her outstanding work on the code! 🎉✨
If you find our work useful, please consider citing:

```bibtex
@article{dai2025mindaligner,
  title={MindAligner: Explicit Brain Functional Alignment for Cross-Subject Visual Decoding from Limited fMRI Data},
  author={Dai, Yuqin and Yao, Zhouheng and Song, Chunfeng and Zheng, Qihao and Mai, Weijian and Peng, Kunyu and Lu, Shuai and Ouyang, Wanli and Yang, Jian and Wu, Jiamin},
  journal={arXiv preprint arXiv:2502.05034},
  year={2025}
}
```

