
tntek/CLCR


Official implementation of 'CLCR: Model Adaptation via Credible Local Context Representation'.

Code (PyTorch) for 'CLCR: Model Adaptation via Credible Local Context Representation' on Office-31, Office-Home, and VisDA-C. The paper has been accepted by CAAI Transactions on Intelligence Technology (CTIT).

Framework

Datasets and Prerequisites

Download the Office-31, Office-Home, and VisDA-C datasets, then modify the image paths in each '.txt' file under the folder './data_clcr/' so that they point to your local copies.
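One way to update the image paths in bulk is a small script. This is a sketch, not part of the CLCR codebase: the function name `rewrite_image_root` and the assumption that each line looks like `<image_path> <label>` are illustrative, so check a few lines of your '.txt' files first.

```python
from pathlib import Path

def rewrite_image_root(txt_path, old_root, new_root):
    """Replace the leading image-directory prefix in a dataset list file.

    Assumes each non-empty line has the form '<path/to/image.jpg> <label>'.
    """
    lines = []
    for line in Path(txt_path).read_text().splitlines():
        if not line.strip():
            continue
        # Split off the trailing label; the rest is the image path.
        path, label = line.rsplit(" ", 1)
        if path.startswith(old_root):
            path = new_root + path[len(old_root):]
        lines.append(f"{path} {label}")
    Path(txt_path).write_text("\n".join(lines) + "\n")
```

Run it once per list file, e.g. `rewrite_image_root("data_clcr/office/amazon_list.txt", "/old/root", "/your/data/root")`.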

The experiments are conducted on a single GPU (NVIDIA TITAN RTX).

  • python == 3.7.3
  • pytorch == 1.6.0
  • torchvision == 0.7.0
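A possible way to set up this environment (assuming conda; the environment name `clcr` is illustrative):

```shell
conda create -n clcr python=3.7.3
conda activate clcr
pip install torch==1.6.0 torchvision==0.7.0
```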

Training and evaluation

  1. Train the source model. All settings for the different scenarios are given in ./run_source.sh.

  2. Adapt the source model to the target domain, using only the unlabeled target data. All settings for the different methods and scenarios are given in ./run_targetr.sh.
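The two steps above can be launched as follows (a sketch; edit the paths and settings inside the scripts for your setup first):

```shell
# Step 1: train the source model (settings in run_source.sh)
bash ./run_source.sh

# Step 2: adapt the source model to the target domain (settings in run_targetr.sh)
bash ./run_targetr.sh
```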

Results

The results of CLCR are provided under the folder './results/'.

Acknowledgement

This work builds on DeepCluster (ECCV 2018) and SHOT (ICML 2020, also source-free).

Contact
