Implementation of our Video Synchronization paper in TensorFlow (ICMLA 2017)

cgtuebingen/LearningToSynchronizeVideos

Learning Robust Video Synchronization without Annotations

Patrick Wieschollek, Ido Freeman, Hendrik P.A. Lensch (ICMLA 2017)

Aligning video sequences is a fundamental yet still unsolved component for a broad range of applications in computer graphics and vision. Most classical image processing methods cannot be directly applied to related video problems due to the high amount of underlying data and their limited tolerance to changes in appearance. We present a scalable and robust method for computing a non-linear temporal video alignment. The approach autonomously manages its training data for learning a meaningful representation in an iterative procedure, each time increasing its own knowledge. It leverages the nature of the videos themselves to remove the need for manually created labels. Whereas previous alignment methods struggle with differing weather conditions, seasons and illumination, our approach is able to align videos from data recorded months apart.

[PDF] [VIDEO]

Update:

  • 31.01.2019: The p2_dist operation can now be compiled via CMake
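The repository does not spell out what the custom p2_dist op computes; assuming the name stands for pairwise squared L2 (Euclidean) distances between two sets of embedding vectors, a minimal NumPy reference sketch (not the repository's TensorFlow op) could look like this:

```python
import numpy as np

def p2_dist(a, b):
    """Pairwise squared Euclidean distances between rows of a (n, d) and b (m, d).

    Uses the identity ||x - y||^2 = ||x||^2 - 2 x.y + ||y||^2 to avoid an
    explicit (n, m, d) broadcast. This is only a reference sketch; the
    actual p2_dist op in the repository is a compiled custom operation.
    """
    aa = np.sum(a * a, axis=1)[:, None]          # (n, 1) squared norms
    bb = np.sum(b * b, axis=1)[None, :]          # (1, m) squared norms
    # Clamp at zero to guard against tiny negative values from rounding.
    return np.maximum(aa - 2.0 * (a @ b.T) + bb, 0.0)

frames_a = np.array([[0.0, 0.0], [1.0, 0.0]])
frames_b = np.array([[0.0, 1.0]])
print(p2_dist(frames_a, frames_b))  # [[1.] [2.]]
```

Such a dense reference is useful for checking the compiled op's output on small inputs before running it on full video embeddings.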