

LATR: 3D Lane Detection from Monocular Images with Transformer

This is the official PyTorch implementation of LATR: 3D Lane Detection from Monocular Images with Transformer (ICCV 2023, Oral).


News

Environments

To set up the required packages, please refer to the installation guide.
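After following the guide, a quick sanity check such as the one below (a minimal sketch; the required package versions are those listed in the installation guide) confirms that PyTorch is importable and can see your GPU:

```python
# Quick environment sanity check (illustrative only; the exact package
# versions to install are listed in the installation guide).
import torch

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())  # training/evaluation are typically run on a GPU
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```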

Data

Please follow the data preparation guide to download the datasets.

Pretrained Models

Note that the performance of the pretrained models is higher than reported in our paper due to code refactoring and optimization. All models are uploaded to Google Drive; the md5 column can be used to verify your downloads (see the snippet after the table).

| Dataset | Pretrained | Metrics | md5 |
| --- | --- | --- | --- |
| OpenLane-1000 | Google Drive | F1=0.6297 | d8ecb900c34fd23a9e7af840aff00843 |
| OpenLane-1000 (Lite version) | Google Drive | F1=0.6212 | 918de41d0d31dbfbecff3001c49dc296 |
| ONCE | Google Drive | F1=0.8125 | 65a6958c162e3c7be0960bceb3f54650 |
| Apollo-balance | Google Drive | F1=0.9697 | 551967e8654a8a522bdb0756d74dd1a2 |
| Apollo-rare | Google Drive | F1=0.9641 | 184cfff1d3097a9009011f79f4594138 |
| Apollo-visual | Google Drive | F1=0.9611 | cec4aa567c264c84808f3c32f5aace82 |
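
To check a downloaded checkpoint against the md5 values above, a small helper like the following can be used (a minimal sketch; the checkpoint path below is a placeholder for whichever file you downloaded):

```python
# Verify a downloaded checkpoint against the md5 listed in the table above.
# The path is a placeholder; point it at the file you actually downloaded.
import hashlib

def md5sum(path, chunk_size=1 << 20):
    """Return the hex md5 digest of a file, read in 1 MiB chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

print(md5sum("pretrained_models/latr_openlane.pth"))
# Compare the printed digest with the md5 column, e.g.
# d8ecb900c34fd23a9e7af840aff00843 for the OpenLane-1000 model.
```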

Evaluation

You can download the pretrained models to the ./pretrained_models directory and refer to the eval guide for evaluation.

Train

Please follow the steps in training to train the model.

Benchmark

OpenLane

| Models | F1 | Accuracy | X error (near \| far) | Z error (near \| far) |
| --- | --- | --- | --- | --- |
| 3DLaneNet | 44.1 | - | 0.479 \| 0.572 | 0.367 \| 0.443 |
| GenLaneNet | 32.3 | - | 0.593 \| 0.494 | 0.140 \| 0.195 |
| Cond-IPM | 36.3 | - | 0.563 \| 1.080 | 0.421 \| 0.892 |
| PersFormer | 50.5 | 89.5 | 0.319 \| 0.325 | 0.112 \| 0.141 |
| CurveFormer | 50.5 | - | 0.340 \| 0.772 | 0.207 \| 0.651 |
| PersFormer-Res50 | 53.0 | 89.2 | 0.321 \| 0.303 | 0.085 \| 0.118 |
| LATR-Lite | 61.5 | 91.9 | 0.225 \| 0.249 | 0.073 \| 0.106 |
| LATR | 61.9 | 92.0 | 0.219 \| 0.259 | 0.075 \| 0.104 |

Apollo

Please refer to our paper for the performance on other scenes.

| Scene | Models | F1 | AP | X error (near \| far) | Z error (near \| far) |
| --- | --- | --- | --- | --- | --- |
| Balanced Scene | 3DLaneNet | 86.4 | 89.3 | 0.068 \| 0.477 | 0.015 \| 0.202 |
| Balanced Scene | GenLaneNet | 88.1 | 90.1 | 0.061 \| 0.496 | 0.012 \| 0.214 |
| Balanced Scene | CLGo | 91.9 | 94.2 | 0.061 \| 0.361 | 0.029 \| 0.250 |
| Balanced Scene | PersFormer | 92.9 | - | 0.054 \| 0.356 | 0.010 \| 0.234 |
| Balanced Scene | GP | 91.9 | 93.8 | 0.049 \| 0.387 | 0.008 \| 0.213 |
| Balanced Scene | CurveFormer | 95.8 | 97.3 | 0.078 \| 0.326 | 0.018 \| 0.219 |
| Balanced Scene | LATR-Lite | 96.5 | 97.8 | 0.035 \| 0.283 | 0.012 \| 0.209 |
| Balanced Scene | LATR | 96.8 | 97.9 | 0.022 \| 0.253 | 0.007 \| 0.202 |

ONCE

| Method | F1 | Precision (%) | Recall (%) | CD error (m) |
| --- | --- | --- | --- | --- |
| 3DLaneNet | 44.73 | 61.46 | 35.16 | 0.127 |
| GenLaneNet | 45.59 | 63.95 | 35.42 | 0.121 |
| SALAD | 64.07 | 75.90 | 55.42 | 0.098 |
| PersFormer | 72.07 | 77.82 | 67.11 | 0.086 |
| LATR | 80.59 | 86.12 | 75.73 | 0.052 |
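
The F1 column is the harmonic mean of the precision and recall columns; a quick check in Python (just to illustrate the relation) reproduces the LATR entry:

```python
# F1 is the harmonic mean of precision and recall: F1 = 2PR / (P + R).
# Reproducing the LATR row of the ONCE table above.
precision, recall = 86.12, 75.73
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 2))  # 80.59, matching the F1 column
```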

Acknowledgment

This library is inspired by OpenLane, GenLaneNet, mmdetection3d, SparseInst, ONCE, and many other related works. We thank the authors for sharing their code and datasets.

Citation

If you find LATR useful for your research, please consider citing the paper:

```bibtex
@article{luo2023latr,
  title={LATR: 3D Lane Detection from Monocular Images with Transformer},
  author={Luo, Yueru and Zheng, Chaoda and Yan, Xu and Tang, Kun and Zheng, Chao and Cui, Shuguang and Li, Zhen},
  journal={arXiv preprint arXiv:2308.04583},
  year={2023}
}
```
