BayesianVSLNet - Temporal Video Segmentation with Natural Language using Text-Video Cross Attention and Bayesian Order-priors
- Paper with an improved BayesianVSLNet++ version, together with checkpoints and pre-extracted video features.
- 7/15/2024: Code released!
- 6/15/2024: Poster presentation at the EgoVis Workshop during CVPR 2024.
- 6/10/2024: Challenge report is available on arXiv!
- 6/01/2024: BayesianVSLNet wins the Ego4D Step Grounding Challenge at CVPR 2024.
We introduce BayesianVSLNet: Bayesian temporal-order priors for test-time refinement. Our model improves upon traditional grounding models by incorporating a Bayesian temporal-order prior during inference, which accounts for cyclic and repetitive actions within the video and improves the accuracy of moment predictions.
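For intuition, the sketch below shows one way such a test-time re-weighting could look: per-clip localization scores are multiplied by a prior over the query's expected relative position in the video and then renormalized. This is a minimal illustration with a hand-picked Gaussian prior and illustrative names (`apply_temporal_order_prior`, `strength`), not the exact prior or code used in the paper.

```python
import numpy as np

def apply_temporal_order_prior(scores, step_idx, num_steps, strength=5.0):
    """Re-weight per-clip localization scores with a temporal-order prior.

    Illustrative sketch only: assumes a Gaussian prior centred at the step's
    expected relative position in the video; the paper's prior may differ.
    """
    scores = np.asarray(scores, dtype=float)
    T = len(scores)
    t = np.linspace(0.0, 1.0, T)                # normalized video time for each clip
    mu = (step_idx + 0.5) / num_steps           # expected relative position of this step
    prior = np.exp(-strength * (t - mu) ** 2)   # unnormalized temporal-order prior
    posterior = scores * prior                  # Bayesian re-weighting of model scores
    return posterior / (posterior.sum() + 1e-8) # renormalize to a distribution over clips
```

Here `step_idx / num_steps` encodes the query's expected order among the task's steps; in practice the prior would be estimated from the data rather than fixed by hand.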
git clone https://github.com/cplou99/BayesianVSLNet
cd BayesianVSLNet
pip install -r requirements.txt
We use Omnivore-L, EgoVideo, and EgoVLPv2 video features. They should be pre-extracted and placed at ./ego4d-goalstep/step-grounding/data/features/.
The EgoVLPv2 weights, used to extract text features, must be placed at ./NaQ/VSLNet_Bayesian/model/EgoVLP_weights.
cd ego4d-goalstep/step_grounding/
bash train_Bayesian.sh experiments/
cd ego4d-goalstep/step_grounding/
bash infer_Bayesian.sh experiments/
The challenge is built on top of the Ego4D GoalStep dataset and code.
Goal: Given an untrimmed egocentric video, identify the temporal action segment corresponding to a natural language description of the step. Specifically, predict the (start_time, end_time) for a given keystep description.
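For intuition, a single step-grounding prediction could be represented as below. This is an illustrative example only (video UID, query text, and field names are hypothetical); the official submission format is defined by the Ego4D GoalStep evaluation code.

```python
# Hypothetical example of one step-grounding prediction (not the official schema).
prediction = {
    "video_uid": "example-video-uid",               # untrimmed egocentric video
    "query": "pour the beaten eggs into the pan",   # natural-language keystep description
    "predicted_segment": (12.4, 27.9),              # (start_time, end_time) in seconds
}
```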
The leaderboard reports test-set results for the best approaches. Our method is currently in first place.
We present qualitative results in a real-world assistive robotics scenario to demonstrate the potential of our approach in enhancing human-robot interaction in practical applications.
@misc{plou2024carlorego4dstep,
      title={CARLOR @ Ego4D Step Grounding Challenge: Bayesian temporal-order priors for test time refinement},
      author={Carlos Plou and Lorenzo Mur-Labadia and Ruben Martinez-Cantin and Ana C. Murillo},
      year={2024},
      eprint={2406.09575},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2406.09575},
}