Progressive Pretext Task Learning for Human Trajectory Prediction
Xiaotong Lin
Tianming Liang
Jianhuang Lai
Jian-Fang Hu*
Sun Yat-sen University
ECCV 2024
- Python == 3.8.3
- PyTorch == 1.7.0
Install the dependencies from `requirements.txt`:

```bash
pip install -r requirements.txt
```
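To confirm the environment matches the pinned versions, a quick check can be run first. This is a minimal sketch; Python 3.8.3 and PyTorch 1.7.0 are the only versions stated above, everything else is a generic sanity check:

```python
# Minimal environment sanity check (only the versions above are stated requirements).
import sys
import torch

print(sys.version)                 # expect 3.8.x
print(torch.__version__)           # expect 1.7.0
print(torch.cuda.is_available())   # True if a CUDA device is visible
```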
We provide a complete set of pre-trained models, including:
- the well-pretrained model on Task-I,
- the model after warm-up,
- the well-pretrained model on Task-II,
- and the well-trained model on Task-III.
You can download the pre-trained models and the pre-processed data from here.
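Once downloaded, a checkpoint can be inspected before training or testing resumes. The sketch below is only an assumption about the file layout (a raw state_dict vs. a dict wrapping one); the path points at the `Model_ALL` checkpoint from the directory structure shown below:

```python
# Hedged sketch: peek inside a downloaded checkpoint.
# The internal layout is an assumption, not documented in this README.
import torch

ckpt = torch.load('./training/Pretrained_Models/SDD/Model_ALL', map_location='cpu')
if isinstance(ckpt, dict):
    # Either a raw state_dict or a dict wrapping one under a key.
    state = ckpt.get('model_state_dict', ckpt)
    for name, value in list(state.items())[:5]:
        print(name, tuple(value.shape) if hasattr(value, 'shape') else type(value))
else:
    print(type(ckpt))  # the file may instead store a full pickled model object
```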
After the preparation work, the whole project should have the following structure:
```
./MemoNet
├── README.md
├── data                          # datasets
│   ├── ETH_UCY
│   │   ├── social_eth_test_256_0_50.pickle
│   │   ├── social_eth_train_256_0_50.pickle
│   │   └── ...
│   ├── social_sdd_test_4096_0_100.pickle
│   └── social_sdd_train_512_0_100.pickle
├── models                        # core models
│   ├── layer_utils.py
│   ├── model.py
│   └── ...
├── requirements.txt
├── run.sh
├── sddloader.py                  # sdd dataloader
├── test_PPT.py                   # testing code
├── train_PPT.py                  # training code
├── trainer                       # core operations to train the model
│   ├── evaluations.py
│   ├── test_final_trajectory.py
│   └── trainer_AIO.py
└── training                      # saved models/memory banks
    └── Pretrained_Models
        ├── SDD
        │   ├── Model_ST
        │   ├── Model_Des_warm
        │   ├── Model_LT
        │   └── Model_ALL
        └── ETH_UCY
            ├── model_eth_res
            ├── model_hotel_res
            └── ...
```
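The pre-processed trajectories ship as pickle files (see the `data` folder above). For a first look, a hedged sketch follows; the internal structure of these pickles is an assumption and should be adjusted after inspection:

```python
# Hedged sketch: inspect a pre-processed data file.
# The pickle's internal structure is an assumption, not documented here.
import pickle

with open('./data/social_sdd_test_4096_0_100.pickle', 'rb') as f:
    data = pickle.load(f)

print(type(data))
# If it is a dict, list its keys; otherwise show how many items it holds.
if isinstance(data, dict):
    print(list(data.keys()))
else:
    print(len(data))
```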
Important configurations.
- `--mode`: specifies the current training mode,
- `--model_Pretrain`: path to the pretrained model,
- `--info`: path name under which the models are stored,
- `--gpu`: id of the GPU device used to run the code.
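For orientation, the flags above presumably map onto an argparse setup along these lines. This is a hypothetical sketch: the valid `--mode` values and the defaults live in `train_PPT.py` and are not documented in this README.

```python
# Hypothetical sketch of how the flags above could be parsed.
# Valid --mode values and defaults are assumptions, not taken from train_PPT.py.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--mode', type=str,
                    help='current training mode (e.g. one of the pretext tasks)')
parser.add_argument('--model_Pretrain', type=str,
                    help='path to the pretrained model to load')
parser.add_argument('--info', type=str,
                    help='path name under ./training to store the models')
parser.add_argument('--gpu', type=int, default=0,
                    help='GPU device id to run the code on')
args = parser.parse_args()
```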
Training commands.

```bash
bash run.sh
```
To reproduce the reported results, run:

```bash
python test_PPT.py --reproduce True --info reproduce --gpu 0
```
The code will output:

```
./training/Pretrained_Models/SDD/model_ALL
Loaded data!
Test FDE_48s: 10.650254249572754 ------ Test ADE: 7.032739639282227
----------------------------------------------------------------------------------------------------
```
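ADE and FDE are the standard average and final displacement errors between predicted and ground-truth trajectories; stochastic predictors typically report the best of K sampled futures. Below is a minimal sketch of these metrics; the tensor shapes and the best-of-K convention are assumptions here, not read from `trainer/evaluations.py`:

```python
# Minimal sketch of best-of-K ADE/FDE; shapes are illustrative assumptions.
import torch

def min_ade_fde(pred, gt):
    """pred: (K, T, 2) sampled future trajectories; gt: (T, 2) ground truth."""
    dist = torch.norm(pred - gt.unsqueeze(0), dim=-1)  # (K, T) pointwise errors
    ade = dist.mean(dim=1)   # average displacement per sample
    fde = dist[:, -1]        # final-step displacement per sample
    return ade.min().item(), fde.min().item()

# Example: 20 random candidates over a 12-step horizon
ade, fde = min_ade_fde(torch.randn(20, 12, 2), torch.randn(12, 2))
```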
We sincerely thank the authors of MemoNet for providing the source code from their CVPR 2022 publication. We also appreciate the pre-processed data from PECNet. These resources have been invaluable to our work, and we are immensely grateful for their support.
If you find our work helpful, please cite:
```bibtex
@inproceedings{lin2024progressive,
  title={Progressive Pretext Task Learning for Human Trajectory Prediction},
  author={Lin, Xiaotong and Liang, Tianming and Lai, Jianhuang and Hu, Jian-Fang},
  booktitle={ECCV},
  year={2024},
}
```