This is the source code for KG-IQA: Knowledge-Guided Blind Image Quality Assessment with Few Training Samples.
PyTorch: 1.8.1
timm: 0.3.2
CUDA: 10.2
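A quick way to confirm your environment matches these versions:

```python
# Print the installed versions; expect PyTorch 1.8.1, timm 0.3.2, CUDA 10.2.
import torch
import timm

print(torch.__version__)   # expect 1.8.1
print(timm.__version__)    # expect 0.3.2
print(torch.version.cuda)  # expect 10.2
```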
For fast data loading, the images and labels of each dataset are saved as '.mat' files. You only need to run 'data_preparation_example.py' once for each dataset.
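As a rough illustration of what this preparation step amounts to (the function, file, and key names below are assumptions, not the repository's actual ones):

```python
# Hypothetical sketch: pack image file names and MOS labels into a single
# .mat file so training/testing can load them quickly.
import os
import numpy as np
import scipy.io as sio

def prepare_dataset(image_dir, labels, out_file='koniq_data.mat'):
    """labels: dict mapping image file name -> MOS score."""
    names, scores = [], []
    for fname, mos in labels.items():
        if os.path.isfile(os.path.join(image_dir, fname)):
            names.append(fname)
            scores.append(mos)
    sio.savemat(out_file, {
        'im_names': np.array(names, dtype=object),  # image file names
        'mos': np.array(scores, dtype=np.float32),  # subjective quality scores
    })
```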
Models pre-trained on KonIQ-10k with 5%, 10%, 25%, and 80% of the samples are released. The dataset is randomly split several times during training, and each released model is obtained from the first split (numpy.random.seed(1)). The model file 'my_vision_transformer.py' is modified from the open-source code of DeiT and timm.
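As a hedged sketch of what a fixed-seed random split typically looks like (the function name and exact split logic are illustrative, not taken from the repository):

```python
# Hypothetical sketch of a fixed-seed random split; the repository's actual
# split procedure may differ. KonIQ-10k contains 10,073 images.
import numpy as np

def make_split(num_images=10073, train_ratio=0.10, seed=1):
    np.random.seed(seed)  # seed reported for the released models
    idx = np.random.permutation(num_images)
    n_train = int(round(num_images * train_ratio))
    return idx[:n_train], idx[n_train:]  # train indices, test indices

train_idx, test_idx = make_split(train_ratio=0.10)  # e.g. the 10% split
```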
The pre-trained models can be downloaded from: Pre-trained models. Please download these files, put them in the same folder as the code, and then run 'test_example_koniq_npercent.py' to perform intra-/cross-dataset tests for the models trained on n% of the samples.
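Intra-/cross-dataset tests for blind IQA are conventionally reported as SROCC and PLCC between predicted and ground-truth scores. A minimal sketch of these metrics (variable names are illustrative, not the test script's actual ones):

```python
# Standard IQA evaluation metrics: Spearman rank-order correlation (SROCC)
# and Pearson linear correlation (PLCC) between predictions and MOS labels.
from scipy import stats

def evaluate(pred, mos):
    srocc = stats.spearmanr(pred, mos)[0]  # monotonicity of predictions
    plcc = stats.pearsonr(pred, mos)[0]    # linear agreement with MOS
    return srocc, plcc
```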
The training code is available in the 'training' folder.
If you find this work useful, please cite:

@article{song2023kgiqa,
  author={Song, Tianshu and Li, Leida and Wu, Jinjian and Yang, Yuzhe and Li, Yaqian and Guo, Yandong and Shi, Guangming},
  journal={IEEE Transactions on Multimedia},
  title={Knowledge-Guided Blind Image Quality Assessment With Few Training Samples},
  year={2023},
  volume={25},
  pages={8145-8156},
  doi={10.1109/TMM.2022.3233244}
}
This repository is released under the Apache 2.0 license.