In this work, we make three contributions:
- STream-based lAtency-awaRe Evaluation (STARE): a new framework for evaluating tracker performance. STARE reveals a tracker's realistic performance by simulating real-world scenarios in which downstream applications ceaselessly request the current object position without waiting for inference to finish.
- ESOT500: a new dataset for event-based VOT featuring time-aligned, high-frequency (500 Hz) annotations, designed to support STARE's stringent real-time criteria.
- Two straightforward yet effective tracker performance enhancement methods: Predictive Tracking and an Adaptive Sampling Strategy.
Please refer to the paper for more details.
We present ESOT500, a new dataset for event-based VOT, featuring time-aligned and high-frequency annotations, designed to support STARE’s stringent real-time criteria.
Dataset | #Videos (train/val) | #Annotations | Modality | Frequency (Hz) | Time-aligned Annotation | Non-rigid & Outdoor |
---|---|---|---|---|---|---|
EED | 5 | 199 | Gray, Event | 23 | ✓ | ✗ |
EV-IMO | 3/3 | 76.8K | Gray, Event | 200 | ✓ | ✗ |
FE240hz | 71/25 | 1132K | Gray, Event | 240 | ✗ | ✗ |
VisEvent | 377/172 | 371K | RGB, Event | ~25 | ✗ | ✗ |
COESOT | 827/527 | 478K | RGB, Event | ~25 | ✗ | ✓ |
ESOT500 | 146/56 | 1219K | RGB, Event | 500 | ✓ | ✓ |
```
ESOT500
|-- aedat4
|   |-- sequence_name1.aedat4
|   |-- sequence_name2.aedat4
|   :
|-- anno_t
|   |-- sequence_name1.txt
|   |-- sequence_name2.txt
|   :
|-- test.txt
|-- train.txt
|-- test_additional.txt
|-- train_additional.txt
|-- test_challenging.txt
```
- Download ESOT500 from our [Hugging Face] datasets repository.
- The aedat4 directory contains the raw event data (event streams and corresponding RGB frames); DV and dv-python are recommended for visualization and for processing in Python, respectively (see the reading sketch after this list).
- You can find the metadata file at `data/esot500_metadata.json`, or download it from our dataset page on [Hugging Face].
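For a quick start with the raw data, here is a minimal sketch of reading an .aedat4 sequence, assuming the standard dv-python API; the sequence path is a placeholder:

```python
import numpy as np
from dv import AedatFile  # pip install dv

# Placeholder path; substitute any sequence from the aedat4 directory.
with AedatFile("ESOT500/aedat4/sequence_name1.aedat4") as f:
    # Concatenate all event packets into one structured array with
    # fields: timestamp (us), x, y, polarity.
    events = np.hstack([packet for packet in f["events"].numpy()])
    print(f"{len(events)} events in [{events['timestamp'][0]}, {events['timestamp'][-1]}] us")

    # The aligned RGB frames are stored in the same file.
    for frame in f["frames"]:
        rgb, t_us = frame.image, frame.timestamp  # HxWx3 array, us timestamp
        break
```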
The pre-slicing process is needed only for the traditional frame-based, latency-free evaluation.
```
python PATH_TO\STARE\lib\event_utils\event_stream_pre_slice.py DIR_PATH_TO_AEDAT4_FILES DIR_PATH_WHERE_TO_SAVE_THE_RESULTS FPS MS
```
The arguments `FPS` and `MS` should follow the chart below, as shown in Table 2 of the paper:
FPS \ MS | 2 | 50 | 100 | 150 |
---|---|---|---|---|
500 | 500/2 | 500/50 | 500/100 | 500/150 |
250 | 250/2 | 250/50 | 250/100 | 250/150 |
20 | 20/2 | 20/50 | 20/100 | 20/150 |
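Here `FPS` sets the slicing rate and `MS` the temporal extent of each slice. Below is a minimal sketch of that windowing, assuming microsecond event timestamps; `event_stream_pre_slice.py` is the reference implementation and its exact convention may differ:

```python
import numpy as np

def pre_slice(timestamps_us, fps, ms):
    """Group events into fixed-rate windows: window k ends at t0 + k/fps
    seconds and covers the preceding `ms` milliseconds of the stream.
    Sketch only; event_stream_pre_slice.py is the reference implementation.
    """
    t = np.asarray(timestamps_us) * 1e-6            # seconds
    n_windows = int((t[-1] - t[0]) * fps)
    slices = []
    for k in range(1, n_windows + 1):
        end = t[0] + k / fps
        lo, hi = np.searchsorted(t, [end - ms * 1e-3, end])
        slices.append((lo, hi))                     # event index range per slice
    return slices
```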
Please refer to the paper for more details.
The key advantages of the proposed stream-based latency-aware evaluation are three-fold:
- A unified evaluation regardless of the adopted event representation;
- A dynamic, time-driven process rather than a frame-sequential one;
- A comprehensive evaluation of trackers in terms of both latency and accuracy.
Unlike a frame sequence, an event stream is an asynchronous data flow. As shown below, the major difference between stream-based evaluation and frame-based streaming perception is that input is available at any time instead of only at certain moments.
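In other words, at every annotation time the tracker is scored by the most recent prediction it has actually finished, not by a per-frame pairing. A minimal sketch of this matching rule, with illustrative function and argument names (not the repository API):

```python
import numpy as np

def _iou(a, b):
    # Boxes as [x, y, w, h].
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2 = min(a[0] + a[2], b[0] + b[2])
    y2 = min(a[1] + a[3], b[1] + b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def latency_aware_scores(anno_t, anno_boxes, pred_finish_t, pred_boxes):
    """Score each annotation against the latest prediction whose
    (inference-inclusive) finish time precedes the annotation time."""
    ious = []
    for t, gt in zip(anno_t, anno_boxes):
        k = np.searchsorted(pred_finish_t, t, side="right") - 1
        ious.append(0.0 if k < 0 else _iou(pred_boxes[k], gt))
    return np.array(ious)
```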
Tracker performance in STARE across varying output speeds. ×1 denotes the tracker's actual output speed on our hardware, while the other multipliers on the row axis represent the forced multiplication of the tracker's output speed in our simulated runs. As the output speed decreases, there is a corresponding decline in tracker performance.
Comparison of the frame-based offline evaluation results (-F) and STARE results (-S) for
six representative trackers. A general tracker performance decline from offline to online and a unimodal distribution pattern of tracker performance
across the temporal axis can be observed.
The code is based on PyTracking and other similar frameworks.
- Trackers under PyTracking:
1. Go to the working directory of PyTracking.
```
cd lib/pytracking
```
2. Create and activate a virtual environment.
```
conda create -n STARE
conda activate STARE
```
3. Install required libraries following PyTracking. (Please refer to `lib/pytracking/INSTALL.md` for detailed installation and configuration.)
```
pip/conda install ...
```
4. Prepare the dataset.
```
ln -s /PATH/TO/ESOT500 ../data/EventSOT500
```
5. Set up the environment for PyTracking.
```
python -c "from pytracking.evaluation.environment import create_default_local_file; create_default_local_file()"
python -c "from ltr.admin.environment import create_default_local_file; create_default_local_file()"
```
6. Modify the dataset paths in the generated environment setting files.
- for training: `ltr/admin/local.py`
- for testing: `pytracking/evaluation/local.py`
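The generated files follow PyTracking's convention of one path attribute per dataset. A minimal excerpt, with an illustrative attribute name (check the generated `local.py` for the actual one):

```python
# pytracking/evaluation/local.py (excerpt; the attribute name is illustrative)
settings.esot500_dir = '/PATH/TO/ESOT500'  # or the ../data/EventSOT500 symlink from step 4
```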
7. Run frame-based evaluation. (Experiment settings are in the folders `pytracking/experiments` and `pytracking/stream_settings`.)
```
python pytracking/run_experiment.py myexperiments esot500_offline
```
8. Run stream-based evaluation. (Experiment settings are in the folders `pytracking/experiments` and `pytracking/stream_settings`.)
```
python pytracking/run_experiment_streaming.py exp_streaming streaming_34
python eval/streaming_eval_v3.py exp_streaming streaming_34
```
The instructions given are for real-time testing on your own hardware. If you want to reproduce the results in our paper, please refer to `pytracking/stream_settings/s14`.
9. The results are by default in the folders `pytracking/output/tracking_results` and `pytracking/output/tracking_results_rt_final`. You can change the paths by modifying the `local.py` files.
10. To evaluate the results, use `pytracking/analysis/stream_eval.ipynb`. You can also refer to it to write analysis scripts in your own style.
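If you prefer a plain script to the notebook, the standard success metric (the fraction of outputs whose IoU exceeds a threshold, averaged over thresholds into an AUC) is straightforward to compute; a minimal sketch over per-annotation IoUs:

```python
import numpy as np

def success_auc(ious, thresholds=np.linspace(0.0, 1.0, 21)):
    """Success rate at each overlap threshold and its mean (the AUC score)."""
    curve = np.array([(ious >= th).mean() for th in thresholds])
    return curve, curve.mean()

# Example with the latency-aware per-annotation IoUs of one sequence:
curve, auc = success_auc(np.array([0.82, 0.55, 0.0, 0.71]))
print(f"AUC = {auc:.3f}")
```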
Note: For tracker enhancement, please see the follow-up section.
- Trackers under other frameworks:
These trackers use a framework similar to PyTracking but are not fully integrated into it. Here we take OSTrack and pred_OSTrack as examples to illustrate the usage, including that of the enhancements.
1. Go to the working directory.
```
cd lib/sotas/[OSTrack or pred_OSTrack]
```
2. Activate the virtual environment.
```
conda activate STARE
```
3. Install the missing libraries.
```
pip/conda install ...
```
If you already have the PyTracking environment set up, you can simply run the subsequent scripts and install whatever packages the error messages report as missing.
4. Set up the environment for the tracker.
```
python -c "from lib.test.evaluation.environment import create_default_local_file; create_default_local_file()"
python -c "from lib.train.admin.environment import create_default_local_file; create_default_local_file()"
```
5. Modify the dataset paths in the generated environment setting files.
- for training: `lib/train/admin/local.py`
- for testing: `lib/test/evaluation/local.py`
6. Run frame-based evaluation.
```
python tracking/test.py ostrack baseline --dataset_name esot_500_2
```
Note:
- This doesn't work for pred_OSTrack.
- For the available `dataset_name` values, refer to the experiment results listed in our paper.
7. Run stream-based evaluation without the predictive module.
```
python tracking/test_streaming.py ostrack esot500_baseline s14 --dataset_name esot500s [--use_aas]
python ../../pytracking/eval/streaming_eval_v3.py --experiment_module exp_streaming --experiment_name streaming_sotas_ostrack_std
```
Note:
- The `--use_aas` option is currently only available for OSTrack and pred_OSTrack.
- You can refer to `streaming_sotas_ostrack_std` in `../../pytracking/pytracking/experiments/exp_streaming.py` to add a test module in your own style (a sketch follows below).
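Following PyTracking's experiment-module convention, a custom entry is a function that returns a tracker list and a dataset. A minimal sketch, with illustrative tracker/parameter/dataset names (mirror `streaming_sotas_ostrack_std` for the exact arguments):

```python
# ../../pytracking/pytracking/experiments/exp_streaming.py (sketch)
from pytracking.evaluation import get_dataset, trackerlist

def streaming_my_tracker():
    # Tracker, parameter-file, and dataset names are illustrative; mirror
    # streaming_sotas_ostrack_std for the exact arguments used in the repo.
    trackers = trackerlist('ostrack', 'esot500_baseline', range(1))
    dataset = get_dataset('esot500s')
    return trackers, dataset
```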
8. Run stream-based evaluation with the predictive module (the idea is sketched after the notes below).
```
python tracking/test_streaming.py ostrack pred_esot500_4step s14 --dataset_name esot500s --pred_next 1 [--use_aas]
python ../../pytracking/eval/streaming_predspeed.py
```
Note:
- The `--pred_next 1` option is currently only available for pred_OSTrack.
- You can change the relevant parameters in `streaming_predspeed.py` to fit your own style.
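The predictive module compensates for latency by extrapolating the box to the query time instead of reporting the last finished estimate. A minimal constant-velocity sketch of that idea (illustrative only, not the trained predictor shipped with pred_OSTrack):

```python
import numpy as np

def extrapolate_box(boxes, times, t_query):
    """Constant-velocity extrapolation of an [x, y, w, h] box: advance the
    center of the last finished output to the query time, keep the size."""
    (x0, y0, w0, h0), (x1, y1, w1, h1) = boxes[-2], boxes[-1]
    dt = max(times[-1] - times[-2], 1e-9)
    vx = ((x1 + w1 / 2) - (x0 + w0 / 2)) / dt
    vy = ((y1 + h1 / 2) - (y0 + h0 / 2)) / dt
    lead = t_query - times[-1]
    cx, cy = x1 + w1 / 2 + vx * lead, y1 + h1 / 2 + vy * lead
    return np.array([cx - w1 / 2, cy - h1 / 2, w1, h1])
```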
9. The results are by default in the folders `pytracking/output/tracking_results` and `pytracking/output/tracking_results_rt_final`. You can change the paths by modifying `local.py` and `streaming_predspeed.py`, respectively.
10. Evaluate the results.
```
python tracking/analysis_results_pred.py
```
You can also refer to it to write analysis scripts in your own style.
If you encounter any issues while using our code or dataset, please feel free to contact us.
- The released code is under the GPL-3.0 license, following PyTracking.
- The released dataset is under CC-BY 4.0 license.
- The benchmark is built on top of the great PyTracking library.
- Thanks to the great works including Stark, MixFormer, OSTrack, and Event-tracking.