Official implementation of 'CLCR: Model Adaptation via Credible Local Context Representation'.

Code (PyTorch) for 'CLCR: Model Adaptation via Credible Local Context Representation' on Office-31, Office-Home, and VisDA-C. This paper has been accepted by CAAI Transactions on Intelligence Technology (CTIT).
First download the Office-31, Office-Home, and VisDA-C datasets, then update the image paths in each '.txt' file under the folder './data_clcr/' to match your local dataset location.
The experiments are conducted on one GPU (NVIDIA RTX TITAN).
- python == 3.7.3
- pytorch == 1.6.0
- torchvision == 0.7.0
- Train the source model. The settings for the different scenarios are given in ./run_source.sh.
- Then adapt the source model to the target domain using only the unlabeled target data. The settings for the different methods and scenarios are given in ./run_targetr.sh.
The results of CLCR are provided under the folder './results/'.
This repository builds on DeepCluster (ECCV 2018)'s work.