Commit a2408ea
added upsampling module
Zach Teed committed Jul 25, 2020
1 parent dc12208
Showing 32 changed files with 23,545 additions and 605 deletions.
109 changes: 44 additions & 65 deletions README.md
# RAFT

This repository contains the source code for our paper:

[RAFT: Recurrent All Pairs Field Transforms for Optical Flow](https://arxiv.org/pdf/2003.12039.pdf)<br/>
Zachary Teed and Jia Deng<br/>
<img src="RAFT.png">

## Requirements
The code has been tested with PyTorch 1.5.1 and PyTorch Nightly. If you want to train with mixed precision, you will have to install the nightly build.
```Shell
conda create --name raft
conda activate raft
conda install pytorch torchvision cudatoolkit=10.1 -c pytorch-nightly
conda install matplotlib
conda install tensorboard
conda install scipy
conda install opencv
```
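As a quick sanity check (not part of the original instructions), you can confirm that PyTorch installed correctly and that a GPU is visible:
```Shell
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```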

## Demos
Pretrained models can be downloaded by running
```Shell
./scripts/download_models.sh
```
or downloaded from [Google Drive](https://drive.google.com/file/d/10-BYgHqRNPGvmNUWr8razjb1xHu55pyA/view?usp=sharing)
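If you prefer fetching the Google Drive file from the command line, the third-party `gdown` tool can download it. This sketch assumes the shared file is a zip archive of the models; neither `gdown` nor the archive layout is part of this repository:
```Shell
# download the shared file by its Drive id and unpack it (archive name assumed)
pip install gdown
gdown "https://drive.google.com/uc?id=10-BYgHqRNPGvmNUWr8razjb1xHu55pyA" -O models.zip
unzip models.zip
```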

You can demo a trained model on a sequence of frames
```Shell
python demo.py --model=models/raft-things.pth --path=demo-frames
```

Running the demo will display the two images and a visualization of the optical flow estimate. After the images display, press any key to continue.
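If your input is a video rather than a directory of frames, you could extract frames first and point `demo.py` at the result. The `ffmpeg` step below is an assumption, not something this repository provides:
```Shell
# extract frames from a video into a directory (input file is hypothetical)
mkdir -p my-frames
ffmpeg -i input.mp4 -qscale:v 2 my-frames/frame_%04d.png
python demo.py --model=models/raft-things.pth --path=my-frames
```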
## Required Data
To evaluate/train RAFT, you will need to download the required datasets.
* [FlyingChairs](https://lmb.informatik.uni-freiburg.de/resources/datasets/FlyingChairs.en.html#flyingchairs)
* [FlyingThings3D](https://lmb.informatik.uni-freiburg.de/resources/datasets/SceneFlowDatasets.en.html)
* [Sintel](http://sintel.is.tue.mpg.de/)
* [KITTI](http://www.cvlibs.net/datasets/kitti/eval_scene_flow.php?benchmark=flow)
* [HD1K](http://hci-benchmark.iwr.uni-heidelberg.de/) (optional)

By default, `datasets.py` will search for the datasets in these locations. You can create symbolic links from the `datasets` folder to wherever the datasets were downloaded (see the example after the layout below).

```Shell
├── datasets
    ├── Sintel
        ├── test
        ├── training
    ├── KITTI
        ├── testing
        ├── training
        ├── devkit
    ├── FlyingChairs_release
        ├── data
    ├── FlyingThings3D
        ├── frames_cleanpass
        ├── frames_finalpass
        ├── optical_flow
```

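For example, assuming the datasets were downloaded somewhere else on disk, symbolic links matching the layout above could be created like this (all paths are placeholders):
```Shell
# link downloaded datasets into the locations datasets.py expects
mkdir -p datasets
ln -s /path/to/Sintel datasets/Sintel
ln -s /path/to/KITTI datasets/KITTI
ln -s /path/to/FlyingChairs_release datasets/FlyingChairs_release
ln -s /path/to/FlyingThings3D datasets/FlyingThings3D
```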


## Evaluation
You can evaluate a trained model using `evaluate.py`
```Shell
python evaluate.py --model=models/raft-things.pth --dataset=sintel
```
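Other benchmarks should be selectable through the same flag, for example KITTI (the `kitti` value is an assumption, not confirmed by this README):
```Shell
python evaluate.py --model=models/raft-things.pth --dataset=kitti
```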


## Training
Training code will be made available in the next few days.
<!-- We used the following training schedule in our paper (note: we use 2 GPUs for training). Training logs will be written to the `runs` directory and can be visualized using tensorboard
```Shell
./train_standard.sh
```

If you have an RTX GPU, training can be accelerated using mixed precision. You can expect similar results in this setting (1 GPU)
```Shell
./train_mixed.sh
``` -->