This code demonstrates using `CFT_BP_ASL` and `CFT_GA` to fine-tune models trained with Assume Negative (AN) (i.e., Negative mode) on the MS-COCO dataset.
- Download the MS-COCO 2014 dataset.
- Put the training image folder (`train2014`), validation image folder (`val2014`), and annotation folder (`annotations`) under the same folder (e.g., `COCO`) like this:

  ```
  COCO
  |----train2014
  |----val2014
  |----annotations
  ```
- Run `pip install -r requirements.py` to install the necessary packages.
- Configure the dataset location in `config.py`.
- Configure the known label proportion in `config.py`.
- Run `python train.py`. This will train a classification model with AN and save the trained model to `output/train/best.pth`.
- Run `CFT_prepare.py`. This will generate (feature vector z, label y) pairs for the dataset and store them in `output/CFT/cache` in preparation for CFT. This caching can dramatically speed up CFT.
- Run `CFT_optimize.py`. This will use `CFT_BP_ASL` and `CFT_GA`, respectively, to fine-tune the trained model. The parameters of the fine-tuned classification layer will be saved in `output/CFT`.
- Run `validate.py` to see the classification performance of the trained model, the model after `CFT_BP_ASL`, and the model after `CFT_GA`. The results are saved in `output/valid/logs`.
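The known label proportion configured above controls how much of the full annotation is visible during training. As a minimal sketch of what such a setting could mean, the snippet below independently keeps each ground-truth label with probability equal to the proportion; the `None` = unobserved coding and the `drop_labels` helper are illustrative assumptions, not the repo's actual implementation.

```python
import random

def drop_labels(labels, proportion, seed=0):
    """Keep each label with the given probability; hide the rest as None."""
    rng = random.Random(seed)
    return [[y if rng.random() < proportion else None for y in row]
            for row in labels]

full = [[1, 0, 1], [0, 1, 0]]
partial = drop_labels(full, 0.5)  # roughly half the labels remain observed
```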
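Assume Negative (AN), used by `train.py` above, treats every unobserved label as negative so that a standard multi-label loss can be applied to the completed targets. A minimal sketch, assuming `None` marks an unobserved entry (the repo's actual encoding may differ):

```python
def assume_negative(partial_labels):
    """Replace unobserved entries (None) with negative (0)."""
    return [[0 if y is None else y for y in row] for row in partial_labels]

partial = [[1, None, 0], [None, 1, None]]
assume_negative(partial)  # [[1, 0, 0], [0, 1, 0]]
```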
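The caching done by `CFT_prepare.py` pays off because the backbone forward pass is by far the most expensive part of fine-tuning; once (z, y) pairs are on disk, CFT only has to touch the small classification layer. A sketch of the idea, using a pickle-based cache as an assumed storage format (the repo's actual format is not specified here):

```python
import pickle

def build_cache(samples, extract_features, cache_path):
    """Run the frozen backbone once per (x, y) sample and store (z, y) pairs."""
    pairs = [(extract_features(x), y) for x, y in samples]
    with open(cache_path, "wb") as f:
        pickle.dump(pairs, f)

def load_cache(cache_path):
    """Reload the cached (feature vector z, label y) pairs."""
    with open(cache_path, "rb") as f:
        return pickle.load(f)
```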
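Category-wise fine-tuning then optimizes each class's slice of the classification layer independently on the cached pairs. The repo does this with back-propagation under ASL (`CFT_BP_ASL`) and with a genetic algorithm (`CFT_GA`); the sketch below substitutes plain logistic-loss gradient descent for a single category `c`, purely to show the per-category structure, and is not the repo's optimizer.

```python
import math

def finetune_category(pairs, c, w, b, lr=0.1, steps=100):
    """Gradient descent on logistic loss for one category's weights only.

    pairs: cached (z, y) pairs, z a feature vector, y a label vector.
    c: index of the category being fine-tuned.
    """
    for _ in range(steps):
        gw = [0.0] * len(w)
        gb = 0.0
        for z, y in pairs:
            logit = sum(wi * zi for wi, zi in zip(w, z)) + b
            p = 1.0 / (1.0 + math.exp(-logit))
            err = p - y[c]  # derivative of logistic loss w.r.t. the logit
            for i, zi in enumerate(z):
                gw[i] += err * zi
            gb += err
        w = [wi - lr * gi / len(pairs) for wi, gi in zip(w, gw)]
        b -= lr * gb / len(pairs)
    return w, b

# Toy usage: one feature, one category, separable data.
pairs = [([1.0], [1]), ([-1.0], [0])]
w, b = finetune_category(pairs, 0, [0.0], 0.0)
```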
See this repo: maxium0526/cft-chexpert.
Part of the code in this repository is from: