R2Gen is based on the implementation of Generating Radiology Reports via Memory-driven Transformer (EMNLP 2020).
The knowledge graph (KG) implementation comes from RGMG and VSEGCN, but a different training method is adopted.
The work here incorporates the two implementations.
Run bash run_iu_xray.sh to train a model on the IU X-Ray data.
Pretrained GCN classifier for IU X-Ray. OLD (superseded by the updated download below):
link = '1-q0e7oDDIn419KlMTmGTOZWoMqJTUbpV'
downloaded = drive.CreateFile({'id':link})
downloaded.GetContentFile('gcnclassifier_v2_ones3_t401v2t3_lr1e-6_e80.pth')
OR
link = '10J5VwEmyOM9-I_YHyzpJaALRN36o1No4'
downloaded = drive.CreateFile({'id':link})
downloaded.GetContentFile('gcnclassifier_v2_ones3_t0v1t2_lr1e-6_e80.pth')
Updated pretrained GCN download (use this one instead):
link = '1Cd_J2-tFVvRE6dMBfyJsYKW_1HPWtlHx'
downloaded = drive.CreateFile({'id':link})
downloaded.GetContentFile('iuxray_gcnclassifier_v1_ones3_t0v1t2_lr1e-6_23050521_e180.pth')
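The download snippets above assume an authenticated PyDrive client named drive already exists; a minimal setup sketch, assuming they are run in Google Colab with PyDrive installed, is:

# Minimal PyDrive setup sketch (assumes Google Colab); creates the drive object used above.
from google.colab import auth
from oauth2client.client import GoogleCredentials
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive

auth.authenticate_user()  # interactive Google sign-in inside Colab
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)  # now drive.CreateFile({'id': ...}) works as in the snippets above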
Corresponding changes to 'run_iu_xray.sh':
--pretrained models/iuxray_gcnclassifier_v1_ones3_t0v1t2_lr1e-6_23050521_e180.pth
--kg_option 'vsegcn'
Run bash run_mimic_cxr.sh to train a model on the MIMIC-CXR data.
If two MIMIC-CXR images are used as input, change the following in run_mimic_cxr.sh:
--d_vf 2048
--dataset_name 'mimic_cxr_2images'
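For intuition on why --d_vf stays 2048 with two images: a sketch, assuming the same two-view handling as R2Gen uses for IU X-Ray, where each image yields 2048-dimensional patch features and the two patch sequences are concatenated along the patch axis rather than the feature axis.

import torch

# Hypothetical patch features for two views: (batch, patches, feature_dim)
att_feats_0 = torch.randn(16, 49, 2048)
att_feats_1 = torch.randn(16, 49, 2048)
# Concatenating along the patch axis keeps the feature width at 2048,
# which is what --d_vf refers to.
att_feats = torch.cat((att_feats_0, att_feats_1), dim=1)  # shape (16, 98, 2048)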
Pretrained GCN classifier for MIMIC-CXR. OLD (not used anymore):
link = '1-b6zxemYj6yoTG6rxjMW11lyZiuE0kTV'
downloaded = drive.CreateFile({'id':link})
downloaded.GetContentFile('mimic_gcnclassifier_v1_ones3_t0v1t2_lr1e-6_e10.pth')
NEW (please use this one):
link = '1_5DhLPDq7bSOgLWLPO7BM-gUySqpiVCK'
downloaded = drive.CreateFile({'id':link})
downloaded.GetContentFile('mimic_gcnclassifier_v1_ones3_t0v1t2_lr1e-6_24052021_e10.pth')
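A quick sanity check after downloading, assuming the .pth file is an ordinary PyTorch checkpoint (the contents and key names are not guaranteed; this only verifies the file deserialises):

import torch

# Load the downloaded GCN classifier checkpoint on CPU and peek at its structure.
ckpt = torch.load('mimic_gcnclassifier_v1_ones3_t0v1t2_lr1e-6_24052021_e10.pth', map_location='cpu')
print(type(ckpt))
if isinstance(ckpt, dict):
    print(list(ckpt.keys())[:10])  # e.g. a state_dict or a wrapper dict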
Pretrained language model options (--pretrained_LM). Default, just nn.Embedding:
--pretrained_LM 'glove-mimic'
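For reference, the difference between a plain embedding table and one initialised from pretrained word vectors, as a rough sketch (vocabulary size, model width and the pretrained matrix below are placeholders, not values from this repo):

import torch
import torch.nn as nn

vocab_size, d_model = 5000, 512  # placeholder sizes

# Default: a randomly initialised embedding table learned from scratch.
tok_emb = nn.Embedding(vocab_size, d_model)

# With pretrained vectors (e.g. GloVe-style), the table is built from a
# (vocab_size, d_model) matrix instead; here a random stand-in matrix.
pretrained_vectors = torch.randn(vocab_size, d_model)
tok_emb_pre = nn.Embedding.from_pretrained(pretrained_vectors, freeze=False)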
If you use BioBERT as the pretrained language model: pip install pytorch-pretrained-bert
The following needs to change in run_iu_xray.sh:
--d_model 768
--rm_d_model 768
--pretrained_LM 'biobert'
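The 768 matches BERT-base's hidden size, which BioBERT shares. A sketch of loading BioBERT with pytorch-pretrained-bert, assuming a local directory of converted BioBERT weights (the path is hypothetical, and this is not necessarily how the repo wires the model in):

import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel

BIOBERT_DIR = './biobert_v1.1_pubmed'  # hypothetical path to converted BioBERT weights

tokenizer = BertTokenizer.from_pretrained(BIOBERT_DIR)
model = BertModel.from_pretrained(BIOBERT_DIR)
model.eval()

tokens = tokenizer.tokenize('no acute cardiopulmonary abnormality')
ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])
with torch.no_grad():
    encoded_layers, pooled = model(ids)
print(encoded_layers[-1].shape)  # (1, seq_len, 768) -> hence --d_model 768 / --rm_d_model 768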
If you use BioALBERT as the pretrained language model:
pip install pytorch-pretrained-bert
pip install transformers
pip install sentencepiece
The following needs to change in run_iu_xray.sh:
--d_model 128
--rm_d_model 128
--pretrained_LM 'bioalbert'
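The 128 plausibly corresponds to ALBERT's factorised embedding size (ALBERT-base uses a 128-dimensional embedding in front of a 768-dimensional encoder). A sketch with the transformers library, using the generic albert-base-v2 checkpoint as a stand-in because the exact Bio-ALBERT weights used here are not specified:

from transformers import AlbertTokenizer, AlbertModel

# albert-base-v2 is only a stand-in for whichever Bio-ALBERT weights the repo expects.
tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')  # tokenizer requires sentencepiece
model = AlbertModel.from_pretrained('albert-base-v2')

print(model.config.embedding_size)  # 128 -> matches --d_model 128 / --rm_d_model 128
print(model.config.hidden_size)     # 768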
If you use or extend our work, please cite our paper at EMNLP-2020.
@inproceedings{chen-emnlp-2020-r2gen,
    title = "Generating Radiology Reports via Memory-driven Transformer",
    author = "Chen, Zhihong and
      Song, Yan and
      Chang, Tsung-Hui and
      Wan, Xiang",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2020",
}
torch==1.5.1
torchvision==0.6.1
opencv-python==4.4.0.42
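A quick check that the pinned versions are the ones actually imported (note that opencv-python is imported as cv2):

import torch, torchvision, cv2
print(torch.__version__)        # expect 1.5.1
print(torchvision.__version__)  # expect 0.6.1
print(cv2.__version__)          # expect 4.4.0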
You can download the models we trained for each dataset from here.
We use two datasets (IU X-Ray and MIMIC-CXR) in our paper.
For IU X-Ray, you can download the dataset from here and then put the files in data/iu_xray.
For MIMIC-CXR, you can download the dataset from here and then put the files in data/mimic_cxr.