This project extends the DAM4SAM tracker to support multi-object tracking. The original DAM4SAM introduced a distractor-aware memory mechanism that substantially improves robustness to visual distractors. This extension builds on that foundation to track multiple objects simultaneously.
- Multi-Object Tracking: Track multiple objects simultaneously in video sequences
- Interactive Box Selection: User-friendly interface for selecting multiple bounding boxes
- Distractor-Aware Memory: Inherits the robust distractor handling from the original DAM4SAM
- Video Output: Generate annotated videos showing tracking results
For installation instructions and detailed setup, please refer to the original DAM4SAM repository.
Run the multi-object tracking demo on a sequence of frames:
```bash
CUDA_VISIBLE_DEVICES=0 python run_bbox_example.py --input_dir <frames-dir> --output_dir <output-dir> --ext <frame-ext> --make_video True
```

Parameters:
- `<frames-dir>`: Path to the directory containing video frames
- `<frame-ext>`: Frame file extension (default: jpg)
- `<output-dir>`: Output directory for saving tracking results (optional)
Usage:
1. The script displays the first frame.
2. Click and drag to draw a bounding box around each object you want to track.
3. Press ENTER when you are done selecting boxes.
4. The tracker processes all frames and saves the results.
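The workflow above can be sketched in a few lines of Python. Note the tracker class and method names in the comments are hypothetical placeholders (the actual API in this repo may differ); the only concrete piece is the box-format conversion, since OpenCV's `cv2.selectROIs` returns boxes as `(x, y, w, h)` while SAM 2-style box prompts typically use `(x1, y1, x2, y2)` corners:

```python
def xywh_to_xyxy(boxes):
    """Convert (x, y, w, h) boxes -- the format cv2.selectROIs returns --
    into (x1, y1, x2, y2) corner format commonly used for box prompts."""
    return [(x, y, x + w, y + h) for (x, y, w, h) in boxes]

# With a display available, the boxes would come from something like:
#   rois = cv2.selectROIs("Select objects, press ENTER when done", first_frame)
rois = [(10, 20, 30, 40), (50, 60, 70, 80)]  # stand-in for user-drawn boxes
prompts = xywh_to_xyxy(rois)

# Each prompt then initializes one tracked object on the first frame, and the
# remaining frames are processed in a loop, e.g. (hypothetical API):
#   tracker.initialize(first_frame, prompts)   # one object per box prompt
#   for frame in frames[1:]:
#       masks = tracker.track(frame)           # one mask per tracked object
print(prompts)  # → [(10, 20, 40, 60), (50, 60, 120, 140)]
```

The conversion is worth keeping as a separate helper because mixing the two box conventions is a common source of silently wrong prompts.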
This work is built upon the excellent research and implementation of:
- DAM4SAM by Jovana Videnović, Alan Lukežič, and Matej Kristan - A Distractor-Aware Memory for Visual Object Tracking with SAM2
- SAM 2 by Meta FAIR - Segment Anything Model 2
If you use this multi-object tracking extension, please cite the original DAM4SAM paper:
@InProceedings{dam4sam,
author = {Videnovic, Jovana and Lukezic, Alan and Kristan, Matej},
title = {A Distractor-Aware Memory for Visual Object Tracking with {SAM2}},
booktitle = {Comp. Vis. Patt. Recognition},
year = {2025}
}

This project follows the same license as the original DAM4SAM repository; refer to that repository for licensing details.