This is the official PyTorch implementation for our paper:
Normalized Convolution Upsampling for Refined Optical Flow Estimation
Abdelrahman Eldesokey and Michael Felsberg
Published at VISAPP 2021, Online Conference
[ ArXiv ]
@conference{visapp21,
  author={Eldesokey, Abdelrahman and Felsberg, Michael},
  title={Normalized Convolution Upsampling for Refined Optical Flow Estimation},
  booktitle={Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 5: VISAPP},
  year={2021},
  pages={742-752},
  publisher={SciTePress},
  organization={INSTICC},
  doi={10.5220/0010343707420752},
  isbn={978-989-758-488-6},
}
The code has been tested with PyTorch 1.6 and CUDA 10.1. The environment can be set up with conda as follows:
conda create --name raft
conda activate raft
conda install pytorch=1.6.0 torchvision=0.7.0 cudatoolkit=10.1 -c pytorch
conda install matplotlib
conda install tensorboard
conda install scipy
conda install opencv
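As a quick sanity check after installation, the following minimal Python sketch (not part of the repository) verifies that the installed PyTorch and CUDA versions match the tested setup and that a GPU is visible:

```python
# Minimal environment sanity check (not part of the repository).
import torch
import torchvision

print("PyTorch:", torch.__version__)            # tested with 1.6.0
print("torchvision:", torchvision.__version__)  # tested with 0.7.0
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("CUDA version:", torch.version.cuda)  # tested with 10.1
    print("GPU:", torch.cuda.get_device_name(0))
```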
Pretrained models can be downloaded from Google Drive and should be placed in the models directory.
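If you want to double-check a download before running the evaluation scripts, the sketch below simply loads a checkpoint with torch.load; the filename is a placeholder and should be replaced with the actual file downloaded from Google Drive:

```python
# Rough sketch: verify that a downloaded checkpoint can be deserialized.
# "models/raft_nc_sintel.pth" is a placeholder name, not necessarily the
# actual filename of the released model.
import torch

checkpoint = torch.load("models/raft_nc_sintel.pth", map_location="cpu")
print(type(checkpoint))           # typically a (state) dict
print("entries:", len(checkpoint))
```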
To evaluate/train RAFT, you will need to download the required datasets. By default, datasets.py will search for the datasets in the locations listed below. You can create symbolic links in the datasets folder that point to wherever the datasets were downloaded (see the sketch after the directory listing).
├── datasets
    ├── Sintel
        ├── test
        ├── training
    ├── KITTI
        ├── testing
        ├── training
        ├── devkit
    ├── FlyingChairs_release
        ├── data
    ├── FlyingThings3D
        ├── frames_cleanpass
        ├── frames_finalpass
        ├── optical_flow
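The snippet below is a minimal sketch of how such symbolic links could be created to reproduce the layout above; the /path/to/... source directories are placeholders for wherever the datasets were downloaded:

```python
# Minimal sketch: symlink downloaded datasets into ./datasets so that the
# layout matches what datasets.py expects. The /path/to/... sources are
# placeholders; replace them with the actual download locations.
import os

links = {
    "/path/to/Sintel": "datasets/Sintel",
    "/path/to/KITTI": "datasets/KITTI",
    "/path/to/FlyingChairs_release": "datasets/FlyingChairs_release",
    "/path/to/FlyingThings3D": "datasets/FlyingThings3D",
}

os.makedirs("datasets", exist_ok=True)
for src, dst in links.items():
    if not os.path.exists(dst):
        os.symlink(src, dst, target_is_directory=True)
```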
You can evaluate a trained model by running either of the bash scripts eval_raft_nc_sintel.sh and eval_raft_nc_kitti.sh, e.g.:
./eval_raft_nc_sintel.sh
We used the following training schedule in our paper (2 GPUs). Training logs will be written to the runs directory and can be visualized using TensorBoard (e.g., tensorboard --logdir runs). The available training scripts are train_raft_nc_sintel.sh, train_raft_nc_kitti.sh, and train_raft_nc_things.sh, e.g.:
./train_raft_nc_sintel.sh
The code has been borrowed and modified from the official implementation of RAFT: Recurrent All-Pairs Field Transforms for Optical Flow.