CAMELTrack: Context-Aware Multi-cue ExpLoitation for Online Multi-Object Tracking
Vladimir Somers, Baptiste Standaert, Victor Joos, Alexandre Alahi, Christophe De Vleeschouwer
CAMELTrack is an Online Multi-Object Tracker that learns to associate detections without hand-crafted heuristics. It combines multiple tracking cues through a lightweight, fully trainable module and achieves state-of-the-art performance while staying modular and fast.
Demo video: cameltrack_demo.mp4
Online Multi-Object Tracking has recently been dominated by Tracking-by-Detection (TbD) methods, where recent advances rely on increasingly sophisticated heuristics for tracklet representation, feature fusion, and multi-stage matching. The key strength of TbD lies in its modular design, enabling the integration of specialized off-the-shelf models like motion predictors and re-identification models. However, the extensive usage of human-crafted rules for temporal associations makes these methods inherently limited in their ability to capture the complex interplay between various tracking cues. In this work, we introduce CAMEL, a novel association module for Context-Aware Multi-Cue ExpLoitation, that learns resilient association strategies directly from data, breaking free from hand-crafted heuristics while maintaining TbD's valuable modularity.
At its core, CAMEL employs two transformer-based modules and relies on a novel Association-Centric Training scheme to effectively model the complex interactions between tracked targets and their various association cues. Unlike End-to-End Detection-by-Tracking approaches, our method remains lightweight and fast to train while being able to leverage external off-the-shelf models. Our proposed online tracking pipeline, CAMELTrack, achieves state-of-the-art performance on multiple tracking benchmarks.
- Cleaning of the code
- Simplified installation and integration into TrackLab
- Public release of the repository
- Release of the SOTA weights
- Release of the paper on ArXiv
- Release of the `tracker_states` used for the training
- Release of weights of a model trained jointly on multiple datasets (DanceTrack, SportsMOT, MOT17, PoseTrack21)
- Release of the `tracker_states` and `detections` used for the evaluation
- Cleaning of the code for the training
CAMELTrack is built on top of TrackLab, a research framework for Multi-Object Tracking.
First, clone this repository:
git clone https://github.com/TrackingLaboratory/CAMELTrack.git
You can then choose to install using either uv or directly using pip (while managing your environment yourself).
- Install uv: https://docs.astral.sh/uv/getting-started/installation/
- Create a new virtual environment with a recent Python version (>3.9):

cd CAMELTrack
uv venv --python 3.12
Note: To use the virtual environment created by uv, you need to prefix all commands with `uv run`, as shown in the examples below. The first time `uv run` is used, it automatically downloads the dependencies.
- Move into the directory:

cd CAMELTrack

- Create a virtual environment (for example with conda, as sketched below) and activate it
- Install the dependencies inside the virtual environment:

pip install -e .
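A minimal sketch of the conda route (this assumes conda is installed; any environment manager providing a recent Python >3.9 works):

conda create -n cameltrack python=3.12  # sketch: assumes conda is available
conda activate cameltrack
pip install -e .  # installs CAMELTrack and its dependencies in editable mode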
Note: The following instructions use the uv installation; if you installed with pip instead, simply remove `uv run` from all commands.
To demonstrate CAMELTrack, a tracking output for a default demo video is automatically produced during the first run:
uv run tracklab -cn cameltrack
Please make sure to check the official GitHub repository regularly for updates. To update this repository to its latest version, run `git pull` inside the repository, or `uv run -U tracklab -cn cameltrack` to update the dependencies.
You can use tracklab directly on MP4 videos or image folders. Alternatively, download the desired datasets (MOT17, MOT20, DanceTrack, SportsMOT, BEE24, or PoseTrack21) and place them in the `data/` directory, as sketched below.
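For example, placing a downloaded dataset under `data/` could look like the following (the source path is illustrative; adjust it to wherever you extracted the dataset):

mkdir -p data
mv /path/to/DanceTrack data/DanceTrack  # illustrative source path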
The YOLOX detector weights used in the paper are available from DiffMOT. You can also directly use the detection text files from DiffMOT by placing them in the correct data directories.
We also provide precomputed outputs (Tracker States) for various datasets in Pickle format on Hugging Face, so you don't need to run the models yourself.
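For instance, they can be fetched with the Hugging Face CLI, assuming `huggingface_hub` is available in your environment (otherwise install it first). The repository id below is a placeholder, so substitute the actual id from the Hugging Face page linked above, and add `--repo-type dataset` if the files are hosted as a dataset repository:

uv run huggingface-cli download <hf-repo-id> --local-dir states/  # <hf-repo-id> is a placeholder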
TrackLab also offers several ready-to-use models (detectors, pose estimators, re-identification models, and other trackers). To see all available configurations and options, run:
uv run tracklab --help
The pre-trained weights used to achieve state-of-the-art results in the paper are listed below. They are automatically downloaded when running CAMELTrack.
Dataset | Appearance | Keypoints | HOTA | Weights
---|---|---|---|---
DanceTrack | ✅ | | 66.1 | camel_bbox_app_dancetrack.ckpt
DanceTrack | ✅ | ✅ | 69.3 | camel_bbox_app_kps_dancetrack.ckpt
SportsMOT | ✅ | ✅ | 80.3 | camel_bbox_app_kps_sportsmot.ckpt
MOT17 | ✅ | ✅ | 62.4 | camel_bbox_app_kps_mot17.ckpt
PoseTrack21 | ✅ | ✅ | 66.0 | camel_bbox_app_kps_posetrack24.ckpt
BEE24 | | | 50.3 | camel_bbox_bee24.ckpt
We also provide (by default) the weights camel_bbox_app_kps_global.ckpt, trained jointly on MOT17, DanceTrack, SportsMOT, and PoseTrack21, which are suitable for testing purposes.
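For a quick test with this jointly-trained checkpoint, following the same command pattern as the evaluation example below (the dataset and split here are just examples):

uv run tracklab -cn cameltrack dataset=dancetrack dataset.eval_set=val modules.track.checkpoint_path=camel_bbox_app_kps_global.ckpt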
Run the following command to track on DanceTrack, for example, using either a checkpoint obtained from training or the provided model weights (pretrained weights are downloaded automatically when you use a name from the table above):
uv run tracklab -cn cameltrack dataset=dancetrack dataset.eval_set=test modules.track.checkpoint_path=camel_bbox_app_kps_dancetrack.ckpt
By default, this will create a new directory inside `outputs/cameltrack`, which will contain a visualization of the output for each sequence, in addition to the tracking output in MOT format.
You first have to run the complete tracking pipeline (either without tracking, with a pre-trained CAMELTrack, or with a SORT-based tracker such as OC-SORT) on the train, validation (and test) sets of the dataset you want to train on, and save the resulting "Tracker States":
uv run tracklab -cn cameltrack dataset=dancetrack dataset.eval_set=train
uv run tracklab -cn cameltrack dataset=dancetrack dataset.eval_set=val
uv run tracklab -cn cameltrack dataset=dancetrack dataset.eval_set=test
By default, they are saved in the `states/` directory. You can also use the Tracker States we provide for the common MOT datasets on Hugging Face.
Once you have the Tracker States, you can put them in the dataset directory (`data_dir`, by default `./data/$DATASET`) under the `states/` directory, with the following names:
data/
  DanceTrack/
    train/
    val/
    test/
    states/
      train.pklz
      val.pklz
      test.pklz
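For example, to reuse the states produced by the commands above, you could copy them into place as follows; the generated file names are placeholders here, so check the `states/` directory for the actual ones:

mkdir -p data/DanceTrack/states
cp states/<generated-train-state>.pklz data/DanceTrack/states/train.pklz  # <generated-train-state> is a placeholder
cp states/<generated-val-state>.pklz data/DanceTrack/states/val.pklz      # <generated-val-state> is a placeholder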
Once you have the Tracker States, run the following command to train on a specific dataset (by default, DanceTrack):
uv run tracklab -cn cameltrack_train dataset=dancetrack
Note: You can always modify the configuration in cameltrack.yaml, and in the other files inside this directory, instead of passing these values on the command line. For example, to change the dataset used for training, you can modify camel.yaml.
By default, this will create a new directory inside `outputs/cameltrack_train`, which will contain the checkpoints of the trained models. These can then be used for tracking and evaluation by setting the `modules.track.checkpoint_path` configuration key in camel.yaml, or by passing it on the command line as sketched below.
To train on a custom dataset, you'll have to integrate it into tracklab, either by using the MOT format or by implementing a new dataset class. Once that's done, you can modify cameltrack.yaml to point to the new dataset.
This is an overview of CAMELTrack's online pipeline, which uses the tracking-by-detection approach.
If you use this repository for your research or wish to refer to our contributions, please use the following BibTeX entries:
@misc{somers2025cameltrackcontextawaremulticueexploitation,
title={CAMELTrack: Context-Aware Multi-cue ExpLoitation for Online Multi-Object Tracking},
author={Vladimir Somers and Baptiste Standaert and Victor Joos and Alexandre Alahi and Christophe De Vleeschouwer},
year={2025},
eprint={2505.01257},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2505.01257},
}
@misc{Joos2024Tracklab,
title = {{TrackLab}},
author = {Joos, Victor and Somers, Vladimir and Standaert, Baptiste},
journal = {GitHub repository},
year = {2024},
howpublished = {\url{https://github.com/TrackingLaboratory/tracklab}}
}