This is the implementation and dataset for *Learning To Reconstruct High Speed and High Dynamic Range Videos From Events* (CVPR 2021), by Yunhao Zou, Yinqiang Zheng, Tsuyoshi Takatani, and Ying Fu.
This repo will be continuously updated.
In this work, we present a convolutional recurrent neural network that takes a sequence of neighboring event frames and reconstructs high-speed HDR videos. To facilitate network training, we design a novel optical system and collect a real-world dataset of paired high-speed HDR videos and event streams.
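For readers new to event-based reconstruction, here is a minimal, hypothetical PyTorch sketch of the general idea: a convolutional GRU carries a hidden state across neighboring event frames, and a decoder predicts an intensity frame at each step. All module names and sizes are illustrative assumptions, not the network described in the paper (which additionally handles event alignment and temporal correlation).

```python
# Minimal sketch of a convolutional recurrent reconstruction network.
# Illustrative stand-in only, NOT the paper's architecture.
import torch
import torch.nn as nn


class ConvGRUCell(nn.Module):
    """Convolutional GRU cell that carries state across event frames."""

    def __init__(self, in_ch, hid_ch, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        self.gates = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, kernel_size, padding=pad)
        self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, kernel_size, padding=pad)

    def forward(self, x, h):
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], dim=1))).chunk(2, dim=1)
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * h_tilde


class EventToHDRNet(nn.Module):
    """Encodes each event frame, updates a recurrent state, decodes a frame."""

    def __init__(self, event_bins=5, hid_ch=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(event_bins, hid_ch, 3, padding=1), nn.ReLU(inplace=True)
        )
        self.rnn = ConvGRUCell(hid_ch, hid_ch)
        self.decoder = nn.Conv2d(hid_ch, 1, 3, padding=1)  # single-channel output

    def forward(self, event_frames):
        # event_frames: (B, T, event_bins, H, W), a sequence of neighboring event frames
        b, t, c, h, w = event_frames.shape
        state = event_frames.new_zeros(b, self.rnn.cand.out_channels, h, w)
        outputs = []
        for i in range(t):
            feat = self.encoder(event_frames[:, i])
            state = self.rnn(feat, state)  # temporal state persists across frames
            outputs.append(self.decoder(state))
        return torch.stack(outputs, dim=1)  # (B, T, 1, H, W)


if __name__ == "__main__":
    net = EventToHDRNet()
    dummy = torch.randn(1, 4, 5, 64, 64)  # 4 event frames with 5 bins each
    print(net(dummy).shape)  # torch.Size([1, 4, 1, 64, 64])
```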
- We propose a convolutional recurrent neural network to reconstruct high-speed HDR videos from events. Our architecture carefully accounts for the alignment and temporal correlation of events.
- To bridge the gap between simulated and real HDR videos, we design an elaborate optical system that synchronously captures paired high-speed HDR videos and the corresponding event streams.
- We collect a real-world dataset of paired high-speed HDR videos (high-bit) and event streams. Each frame of our HDR videos is merged from two precisely synchronized LDR frames (a sketch of one such merging recipe follows this list).
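To make the last point concrete, below is a hedged sketch of a standard two-exposure HDR merge: hat-weighted averaging of linearized frames scaled by their exposure times. The function name, weighting scheme, and exposure values are illustrative assumptions; the dataset's actual merging procedure is the one described in the paper.

```python
# Illustrative sketch of merging two synchronized LDR exposures into one HDR
# frame. A common textbook recipe, not necessarily the dataset's exact method.
import numpy as np


def merge_ldr_pair(short, long, t_short, t_long, eps=1e-6):
    """Merge two linear LDR frames (values in [0, 1]) captured with exposure
    times t_short and t_long into a single HDR radiance map."""
    def weight(img):
        # Hat weighting: trust mid-range pixels, distrust under/over-exposed ones.
        return 1.0 - np.abs(2.0 * img - 1.0)

    w_s, w_l = weight(short), weight(long)
    # Divide each frame by its exposure time to estimate scene radiance,
    # then blend the two estimates with their reliability weights.
    return (w_s * short / t_short + w_l * long / t_long) / (w_s + w_l + eps)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    short = rng.uniform(0.0, 1.0, size=(64, 64))
    long = np.clip(short * 8.0, 0.0, 1.0)  # 8x exposure, highlights saturate
    hdr = merge_ldr_pair(short, long, t_short=1.0, t_long=8.0)
    print(hdr.shape, float(hdr.max()))
```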
If you find this work useful for your research, please cite:
@inproceedings{zou2021learning,
  title={Learning To Reconstruct High Speed and High Dynamic Range Videos From Events},
  author={Zou, Yunhao and Zheng, Yinqiang and Takatani, Tsuyoshi and Fu, Ying},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={2024--2033},
  year={2021}
}
If you have any questions, please feel free to contact me at zouyunhao@bit.edu.cn.
