
Temporal Memory Attention for Video Semantic Segmentation [paper]


Introduction

We propose a Temporal Memory Attention Network (TMANet) that adaptively integrates long-range temporal relations over a video sequence via the self-attention mechanism, without expensive optical-flow prediction. Our method achieves new state-of-the-art performance on two challenging video semantic segmentation datasets: 80.3% mIoU on Cityscapes and 76.5% mIoU on CamVid with ResNet-50. (Accepted at ICIP 2021.)
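The core idea can be illustrated with a minimal NumPy sketch of memory-based attention (an illustration only, not the paper's exact implementation): the current frame supplies the query, while features stacked from T past frames (the "memory") supply keys and values, so every spatial position of the current frame attends over all space-time positions of the memory.

```python
# Minimal sketch of memory-based attention: query from the current frame,
# keys/values from a memory of past-frame features. The linear projections
# below stand in for the 1x1-conv projections a real network would learn.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def memory_attention(query, memory, d_k=64):
    """query: (N, C) current-frame features, N = H*W spatial positions.
    memory: (T*N, C) features stacked from T past frames."""
    rng = np.random.default_rng(0)
    C = query.shape[1]
    w_q = rng.standard_normal((C, d_k)) / np.sqrt(C)
    w_k = rng.standard_normal((C, d_k)) / np.sqrt(C)
    w_v = rng.standard_normal((C, C)) / np.sqrt(C)
    q, k, v = query @ w_q, memory @ w_k, memory @ w_v
    attn = softmax(q @ k.T / np.sqrt(d_k))  # (N, T*N) space-time weights
    return attn @ v                         # (N, C) temporally aggregated

cur = np.random.default_rng(1).standard_normal((16, 32))  # 16 positions, C=32
mem = np.random.default_rng(2).standard_normal((48, 32))  # 3 past frames
out = memory_attention(cur, mem)
print(out.shape)  # (16, 32)
```

The output has the same shape as the current-frame features, so it can be fused back into the segmentation head; the memory grows only linearly with the number of stored frames.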

If this codebase is helpful for you, please consider giving it a star ⭐ 😊.


Updates

2021/1: TMANet training and evaluation code released.

2021/6: Updated README.md:

  • added some CamVid dataset download links;
  • updated the camvid_video_process.py script.

Usage

  • Install mmseg

    • Please refer to mmsegmentation for the installation guide.
    • This repository is based on mmseg-0.7.0 and PyTorch 1.6.0.
  • Clone the repository

    git clone https://github.com/wanghao9610/TMANet.git
    cd TMANet
    pip install -e .
  • Prepare the datasets

    • Download the Cityscapes dataset and the CamVid dataset.

    • For the CamVid dataset, we need to extract frames from the downloaded videos via the following steps:

      • Download the raw videos from here (a Google Drive link is provided).
      • Put the downloaded raw videos (e.g. 0016E5.MXF, 0006R0.MXF, 0005VD.MXF, 01TP_extract.avi) into ./data/camvid/raw .
      • Download the extracted images and labels from here and the split.txt file from here, then untar the tar.gz file to ./data/camvid . This produces two subdirectories: ./data/camvid/images (the annotated frames) and ./data/camvid/labels (the semantic segmentation ground truth). Reference the following shell commands:
        cd TMANet
        cd ./data/camvid
        wget https://drive.google.com/file/d/1FcVdteDSx0iJfQYX2bxov0w_j-6J7plz/view?usp=sharing
        # or first download on your PC then upload to your server.
        tar -xf camvid.tar.gz 
      • Generate the image_sequence directory by extracting frames from the raw videos. Reference the following shell commands:
        cd TMANet
        python tools/convert_datasets/camvid_video_process.py
    • For the Cityscapes dataset, we need to request the download link for 'leftImg8bit_sequence_trainvaltest.zip' from the official Cityscapes website.

    • The converted/downloaded datasets should be stored under ./data/camvid and ./data/cityscapes .

      The file structure of the video semantic segmentation datasets is as follows.

      ├── data
      │   ├── cityscapes
      │   │   ├── gtFine
      │   │   │   ├── train
      │   │   │   │   ├── xxx{seg_map_suffix}
      │   │   │   │   ├── yyy{seg_map_suffix}
      │   │   │   │   ├── zzz{seg_map_suffix}
      │   │   │   ├── val
      │   │   ├── leftImg8bit
      │   │   │   ├── train
      │   │   │   │   ├── xxx{img_suffix}
      │   │   │   │   ├── yyy{img_suffix}
      │   │   │   │   ├── zzz{img_suffix}
      │   │   │   ├── val
      │   │   ├── leftImg8bit_sequence
      │   │   │   ├── train
      │   │   │   │   ├── xxx{sequence_suffix}
      │   │   │   │   ├── yyy{sequence_suffix}
      │   │   │   │   ├── zzz{sequence_suffix}
      │   │   │   ├── val
      ├── data
      │   ├── camvid
      │   │   ├── images
      │   │   │   ├── xxx{img_suffix}
      │   │   │   ├── yyy{img_suffix}
      │   │   │   ├── zzz{img_suffix}
      │   │   ├── annotations
      │   │   │   ├── train.txt
      │   │   │   ├── val.txt
      │   │   │   ├── test.txt
      │   │   ├── labels
      │   │   │   ├── xxx{seg_map_suffix}
      │   │   │   ├── yyy{seg_map_suffix}
      │   │   │   ├── zzz{seg_map_suffix}
      │   │   ├── image_sequence
      │   │   │   ├── xxx{sequence_suffix}
      │   │   │   ├── yyy{sequence_suffix}
      │   │   │   ├── zzz{sequence_suffix}
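      The CamVid image_sequence directory above is filled by the frame-extraction step; the real logic lives in tools/convert_datasets/camvid_video_process.py. As a rough, hypothetical sketch of the bookkeeping involved (the frame-naming pattern below is an illustrative assumption, not the script's actual convention):

```python
# Hypothetical sketch of the frame-extraction bookkeeping; the naming
# pattern is an assumption for illustration, not what
# tools/convert_datasets/camvid_video_process.py actually produces.
from pathlib import Path

RAW_DIR = Path("./data/camvid/raw")
SEQ_DIR = Path("./data/camvid/image_sequence")

def frame_path(video_name: str, frame_idx: int) -> Path:
    """Map (video, frame index) to an output image path, e.g.
    0016E5.MXF frame 42 -> image_sequence/0016E5/0016E5_f00042.png"""
    stem = Path(video_name).stem
    return SEQ_DIR / stem / f"{stem}_f{frame_idx:05d}.png"

def plan_extraction(video_names, num_frames):
    """Return the output paths a decoder (e.g. OpenCV's VideoCapture)
    would write frames to, one path per decoded frame."""
    return [frame_path(v, i) for v in video_names for i in range(num_frames)]

paths = plan_extraction(["0016E5.MXF", "01TP_extract.avi"], num_frames=3)
print(paths[0])  # data/camvid/image_sequence/0016E5/0016E5_f00000.png
```

      The actual decoding (reading frames from the .MXF/.avi files) is handled by the provided script; this sketch only shows how extracted frames map onto the directory layout above.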
      
  • Evaluation

    • Download the trained models for Cityscapes and CamVid, and put them under ./work_dirs/{config_file} .
    • Run the following command (for Cityscapes):
    sh eval.sh configs/video/cityscapes/tmanet_r50-d8_769x769_80k_cityscapes_video.py
  • Training

    • Please download the pretrained ResNet-50 model and put it in ./init_models .
    • Run the following command (for Cityscapes):
    sh train.sh configs/video/cityscapes/tmanet_r50-d8_769x769_80k_cityscapes_video.py

    Note: the evaluation and training commands above use the Cityscapes config. To evaluate or train on CamVid, replace the config file in the command with the corresponding CamVid config.

Citation

If you find TMANet useful in your research, please consider citing:

@inproceedings{wang2021temporal,
title={Temporal memory attention for video semantic segmentation},
author={Wang, Hao and Wang, Weining and Liu, Jing},
booktitle={2021 IEEE International Conference on Image Processing (ICIP)},
pages={2254--2258},
year={2021},
organization={IEEE}
}

Acknowledgement

Thanks to mmsegmentation for its contribution to the community!
