Code for "Domain Adaptive Video Semantic Segmentation via Cross-Domain Moving Object Mixing" (WACV 2023)
[paper] [demo]
- Conda environment
conda create -n CMOM python=3.6
conda activate CMOM
conda install -c menpo opencv
pip install kornia
pip install importlib-metadata
- Clone ADVENT
git clone https://github.com/valeoai/ADVENT.git
pip install -e ./ADVENT
- Clone the repo
git clone https://github.com/kyusik-cho/CMOM.git
pip install -e ./CMOM
Download the Cityscapes-Seq, VIPER, and SYNTHIA-Seq datasets, and ensure the file structure is as follows.
- Cityscapes-Seq
<data_dir>/Cityscapes/
<data_dir>/Cityscapes/leftImg8bit_sequence
<data_dir>/Cityscapes/gtFine
- VIPER
<data_dir>/Viper/
<data_dir>/Viper/train/img
<data_dir>/Viper/train/cls
- SYNTHIA-Seq
<data_dir>/SynthiaSeq/
<data_dir>/SynthiaSeq/SEQS-04-DAWN
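Before training, it can save time to verify that the datasets are laid out as expected. A minimal sketch (the `DATA_DIR` variable is a placeholder for your own `<data_dir>`; the checked paths are taken from the structure above):

```shell
# Sanity-check the expected dataset layout.
# DATA_DIR is a placeholder; point it at your own <data_dir>.
DATA_DIR=${DATA_DIR:-/path/to/data}
missing=0
for d in \
  "$DATA_DIR/Cityscapes/leftImg8bit_sequence" \
  "$DATA_DIR/Cityscapes/gtFine" \
  "$DATA_DIR/Viper/train/img" \
  "$DATA_DIR/Viper/train/cls" \
  "$DATA_DIR/SynthiaSeq/SEQS-04-DAWN"; do
  # Report any directory that is not present.
  [ -d "$d" ] || { echo "missing: $d"; missing=$((missing + 1)); }
done
echo "$missing expected directories missing"
```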
We follow DA-VSN to obtain optical flow. Please follow their procedure to generate the estimated optical flow.
Download the pseudo labels here and put them under <root_dir>/cmom. Alternatively, run make_pseudolabel.py with a DA-VSN pretrained model to generate them.
Download the pre-trained models and put them under <root_dir>/pretrained_models.
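The two target directories can be created up front so the downloaded files have somewhere to go. A minimal sketch (the `ROOT_DIR` variable is a placeholder for your own `<root_dir>`):

```shell
# Create the directories expected for the pseudo labels and
# pre-trained models. ROOT_DIR is a placeholder for <root_dir>.
ROOT_DIR=${ROOT_DIR:-./CMOM}
mkdir -p "$ROOT_DIR/cmom" "$ROOT_DIR/pretrained_models"
```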
When training a model, you can start from either the DA-VSN pretrained model or the DeepLab ImageNet pretrained model.
- Test with the pre-trained models
python test.py --cfg configs/cmom_viper2city_pretrained.yml
python test.py --cfg configs/cmom_syn2city_pretrained.yml
- Train
python train.py --cfg configs/cmom_viper2city.yml --tensorboard
python train.py --cfg configs/cmom_syn2city.yml --tensorboard
- Test
python test.py --cfg configs/cmom_viper2city.yml
python test.py --cfg configs/cmom_syn2city.yml
This code is based on the following open-source projects: ADVENT and DA-VSN.