This repo provides the official code and checkpoints for Timer: Transformers for Time Series Analysis at Scale, a Large Time Series Model for unified time series analysis across domains and tasks.
🚩 News (2024.5) Our paper is accepted by ICML 2024.
🚩 News (2024.4) An online API is coming soon, supporting zero-shot forecasting!
🚩 News (2024.4) The pre-training scale has been extended to 15B time points, with the model exhibiting zero-shot forecasting capability.
🚩 News (2024.2) The checkpoint pre-trained on UTSD-4G is available.
🚩 News (2024.2) The fine-tuning code for forecasting is released.
Time Series Transformer (Timer) is a GPT-style Transformer pre-trained on multi-domain time series, developed as a Large Time Series Model (LTSM). [Project Page]
We curate large-scale datasets comprising 1B time points, propose a unified training strategy with single-series sequences, and present Timer with a decoder-only architecture. As an LTSM, Timer offers:
- Generalization ability that one model fits all domains.
- Task generality that one model copes with various tasks.
- Scalability that the performance increases with the scale of pre-training.
Timer is showcased in the following scenarios:
- Forecasting with data scarcity (limited downstream training samples)
- Segment-level imputation with few-shot samples
- On-the-fly anomaly detection on the UCR Anomaly Archive
We curate the Unified Time Series Dataset (UTSD), which covers 7 domains and up to 1 billion time points, organized in hierarchical capacities to facilitate research on scalability and domain transfer.
To facilitate pre-training on extensive time series, we convert heterogeneous series into single-series sequences (S3), preserving the patterns of series variations under a unified context length, analogous to the well-established tokenization of natural language.
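As a rough illustration of this idea (not the repo's actual preprocessing code), the sketch below treats every variate of a heterogeneous dataset as an independent univariate series, normalizes it, and slices it into windows of a unified context length. The function name and the context_length/stride values are illustrative assumptions.

```python
import numpy as np

def to_single_series_sequences(series_list, context_length=672, stride=672):
    """series_list: iterable of 1-D arrays with possibly different lengths and scales."""
    sequences = []
    for s in series_list:
        s = np.asarray(s, dtype=np.float32)
        # Per-series standardization so variates from different domains are comparable.
        s = (s - s.mean()) / (s.std() + 1e-8)
        # Slice into fixed-length windows, i.e., the unified context length.
        for start in range(0, len(s) - context_length + 1, stride):
            sequences.append(s[start:start + context_length])
    return np.stack(sequences) if sequences else np.empty((0, context_length), np.float32)

# Two variates of different lengths become one pool of equal-length pre-training samples.
pool = to_single_series_sequences([np.random.randn(10_000), np.random.randn(3_000)])
print(pool.shape)  # (num_windows, 672)
```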
With the substantial progress of decoder-only large language models and our evaluation of alternative backbones, we adopt the GPT-style Transformer with autoregressive generation towards LTSMs.
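For intuition, here is a minimal decoder-only sketch over patch tokens with a causal mask and a next-patch objective. It is not the released Timer architecture; the patch length, width, and depth are assumed values.

```python
import torch
import torch.nn as nn

class TinyDecoderOnlyTS(nn.Module):
    """Toy GPT-style model: each patch of consecutive time points is one autoregressive token."""
    def __init__(self, patch_len=96, d_model=256, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(patch_len, d_model)            # patch -> token embedding
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=4 * d_model,
                                           batch_first=True, norm_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)  # causal mask below makes it decoder-only
        self.head = nn.Linear(d_model, patch_len)             # token -> prediction of the next patch

    def forward(self, x):
        # x: (batch, num_patches, patch_len)
        n = x.size(1)
        causal = torch.triu(torch.full((n, n), float("-inf"), device=x.device), diagonal=1)
        return self.head(self.blocks(self.embed(x), mask=causal))

# Next-token (next-patch) objective: every position predicts the patch that follows it.
x = torch.randn(8, 15, 96)
pred = TinyDecoderOnlyTS()(x)
loss = nn.functional.mse_loss(pred[:, :-1], x[:, 1:])
```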
Timer is applicable to various tasks, all realized within a unified generative approach.
We compare Timer with state-of-the-art approaches and demonstrate the benefit of pre-training in data-scarce scenarios, known as the few-shot capability of large models.
By increasing the parameters and pre-training scale, Timer achieves notable performance improvement: 0.231
The decoder-only architecture provides additional flexibility to accommodate time series of different lookback and forecast lengths.
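Reusing the TinyDecoderOnlyTS toy from the sketch above (under the same illustrative assumptions), the rollout below shows why this flexibility comes essentially for free: the lookback can be any number of patches, and the forecast length is simply the number of generation steps, which also hints at how a unified generative approach can serve imputation and on-the-fly anomaly detection.

```python
import torch

@torch.no_grad()
def rollout(model, lookback, horizon_patches):
    """lookback: (batch, num_patches, patch_len) with an arbitrary number of patches."""
    ctx = lookback
    for _ in range(horizon_patches):
        next_patch = model(ctx)[:, -1:]            # the model's prediction for the next patch
        ctx = torch.cat([ctx, next_patch], dim=1)  # append it and keep generating
    return ctx[:, lookback.size(1):]               # (batch, horizon_patches, patch_len)

# Any lookback (here 7 patches) and any horizon (here 3 patches) without retraining.
forecast = rollout(TinyDecoderOnlyTS(), torch.randn(2, 7, 96), horizon_patches=3)
```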
- Install PyTorch and the necessary dependencies.
pip install -r requirements.txt
- Put the datasets [Google Drive] [Tsinghua Cloud] under the folder ./dataset/.
- Download the pre-trained checkpoints and put them under the folder ./checkpoints/.
  - Timer_67M_UTSD_4G [Google] [Tsinghua]
- Train and evaluate the model. We provide scripts for the above tasks under the folder ./scripts/.
# forecasting
bash ./scripts/forecast/ECL.sh
# TODO: segment-level imputation
bash ./scripts/imputation/ECL.sh
# TODO: anomaly detection on the fly
bash ./scripts/anomaly_detection/UCR.sh
- Training on custom data: Tutorials are provided in this repo.
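The tutorials define the dataset format that the provided dataloaders expect; purely as a hedged sketch of typical preprocessing, the snippet below turns one CSV column into sliding (lookback, horizon) pairs. The column name and window sizes are hypothetical.

```python
import numpy as np
import pandas as pd

def make_windows(csv_path, column="value", lookback=672, horizon=96):
    """Slice one CSV column into supervised (lookback, horizon) pairs for fine-tuning."""
    series = pd.read_csv(csv_path)[column].to_numpy(dtype=np.float32)
    xs, ys = [], []
    for start in range(len(series) - lookback - horizon + 1):
        xs.append(series[start:start + lookback])
        ys.append(series[start + lookback:start + lookback + horizon])
    return np.stack(xs), np.stack(ys)

# Hypothetical usage: x, y = make_windows("./dataset/my_series.csv")
```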
We are preparing an online service for zero-shot forecasting (demo). Please stay tuned for updates!
If you find this repo helpful, please cite our paper.
@article{liu2024timer,
title={Timer: Transformers for Time Series Analysis at Scale},
author={Liu, Yong and Zhang, Haoran and Li, Chenyu and Huang, Xiangdong and Wang, Jianmin and Long, Mingsheng},
journal={arXiv preprint arXiv:2402.02368},
year={2024}
}
We appreciate the following GitHub repos for their valuable code and efforts.
- Time-Series-Library (https://github.com/thuml/Time-Series-Library)
- iTransformer (https://github.com/thuml/iTransformer)
If you have any questions or want to use the code, feel free to contact:
- Yong Liu (liuyong21@mails.tsinghua.edu.cn)
- Haoran Zhang (z-hr20@mails.tsinghua.edu.cn)