
Timer⏱️

This repo provides official code and checkpoints for Timer: Transformers for Time Series Analysis at Scale.

Updates

🚩 News (2024.3) Checkpoint pre-trained on UTSD-4G is available.

🚩 News (2024.3) We provide the downstream fine-tuning code for the forecasting task.

Introduction

Time Series Transformer (Timer) is a GPT-style Transformer pre-trained on multi-domain time series, serving as a Large Time Series Model (LTSM).

We curate large-scale datasets comprising 1 billion time points, propose a unified training strategy based on single-series sequences, and present Timer with a decoder-only architecture. As an LTSM, Timer offers:

  • Generalization: one model fits all domains.

  • Task generality: one model copes with various tasks.

  • Scalability: performance improves with the scale of pre-training.

Usage

  1. Install PyTorch and the necessary dependencies:

     ```bash
     pip install -r requirements.txt
     ```

  2. Obtain the datasets from Google Drive or Tsinghua Cloud.

  3. Download the pre-trained checkpoints (a sketch for inspecting a downloaded checkpoint follows the task commands below).

  4. Train and evaluate the model. Scripts for the tasks below are provided under the folder ./scripts/.

```bash
# forecasting
bash ./scripts/forecast/ECL.sh

# TODO: segment-level imputation
bash ./scripts/imputation/ECL.sh

# TODO: anomaly detection on the fly
bash ./scripts/anomaly_detection/UCR.sh
```
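
Before wiring a downloaded checkpoint into the fine-tuning scripts, you can sanity-check its contents. A minimal sketch follows, assuming the checkpoint is a standard PyTorch state dict saved with torch.save; the path and file name are illustrative, not the actual release names.

```python
# Minimal sketch, assuming the checkpoint is a standard PyTorch state dict.
# The path below is illustrative -- point it at the file you downloaded.
import torch

ckpt_path = "./checkpoints/Timer_pretrain.ckpt"  # hypothetical file name
state = torch.load(ckpt_path, map_location="cpu")

# Some checkpoints nest the weights under a "state_dict" key.
if isinstance(state, dict) and "state_dict" in state:
    state = state["state_dict"]

# Print the first few parameter names and shapes to confirm the contents.
for name, tensor in list(state.items())[:10]:
    print(name, tuple(tensor.shape))
```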

Showcases

Forecasting with data scarcity (limited downstream training samples)

Segment-level imputation with few-shot samples

On-the-fly anomaly detection on UCR Anomaly Archive

Approach

Large Dataset

We curate the Unified Time Series Dataset (UTSD), which covers 7 domains and up to 1 billion time points, organized into hierarchical capacities to facilitate research on scalability and domain transfer.

Pre-training Strategy

To facilitate pre-training on extensive time series, we convert heterogeneous series into single-series sequences (S3), preserving the patterns of series variations within a unified context length, analogous to the well-established tokenization of natural language.

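A minimal sketch of this conversion, under assumptions: each variable of a multivariate record is treated as an independent univariate series, normalized, and cut into windows of a unified context length. The window length and stride below are illustrative, not the values used for the released checkpoints.

```python
# Single-series sequence (S3) sketch: flatten heterogeneous records into a
# pool of normalized, fixed-length univariate training sequences.
# context_len and stride are illustrative choices, not Timer's settings.
import numpy as np

def to_single_series_sequences(record: np.ndarray, context_len: int, stride: int) -> np.ndarray:
    """record: (T, C) multivariate array -> (N, context_len) univariate windows."""
    windows = []
    for c in range(record.shape[1]):
        x = record[:, c].astype(np.float64)
        x = (x - x.mean()) / (x.std() + 1e-8)              # per-series normalization
        for start in range(0, len(x) - context_len + 1, stride):
            windows.append(x[start:start + context_len])
    return np.stack(windows) if windows else np.empty((0, context_len))

# Example: one record with 1000 time points and 3 variables.
pool = to_single_series_sequences(np.random.randn(1000, 3), context_len=256, stride=256)
print(pool.shape)  # (9, 256): single-series sequences ready for tokenization
```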

Model Architecture

Given the substantial progress of decoder-only large language models and our evaluation of alternative backbones, we adopt the GPT-style Transformer with autoregressive generation for LTSMs.

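A minimal sketch of such a decoder-only, GPT-style time-series model, assuming segment (patch) tokens and next-token prediction; the segment length, embedding size, and layer count are illustrative and not Timer's actual configuration.

```python
# Sketch of a decoder-only time-series Transformer trained with
# next-token prediction. Hyperparameters are illustrative.
import torch
import torch.nn as nn

class DecoderOnlyTS(nn.Module):
    def __init__(self, patch_len=96, d_model=256, n_heads=8, n_layers=4, max_tokens=64):
        super().__init__()
        self.embed = nn.Linear(patch_len, d_model)                 # segment -> token
        self.pos = nn.Parameter(torch.zeros(1, max_tokens, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)       # causal via mask
        self.head = nn.Linear(d_model, patch_len)                  # token -> next segment

    def forward(self, x):
        # x: (batch, num_tokens, patch_len), a single-series sequence split
        # into consecutive segments (tokens).
        n = x.size(1)
        causal = nn.Transformer.generate_square_subsequent_mask(n).to(x.device)
        h = self.blocks(self.embed(x) + self.pos[:, :n], mask=causal)
        return self.head(h)                                        # (batch, num_tokens, patch_len)

# Next-token prediction: token i is trained to predict segment i + 1.
model = DecoderOnlyTS()
seq = torch.randn(2, 7, 96)
pred = model(seq)
loss = nn.functional.mse_loss(pred[:, :-1], seq[:, 1:])
print(pred.shape, loss.item())
```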

Unified Generative Task Formulation

Timer is applicable to various tasks, all realized through a unified generative approach, summarized in the table below; a small sketch follows the table.

| Task | Formulation |
| --- | --- |
| Time Series Forecasting | Next Token Prediction |
| Imputation (Segment-level) | Denoising Autoencoding |
| Anomaly Detection (Predictive) | Next Token Prediction |
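
A small sketch of how the generative view covers these tasks, assuming a hypothetical predict_next(segments) helper that generates the next segment (this is not Timer's actual API). Forecasting rolls the model forward autoregressively, and predictive anomaly detection scores an observed segment by its deviation from the generated one; segment-level imputation instead masks segments and reconstructs them (denoising autoencoding).

```python
# Unified generative formulation sketch; predict_next is a hypothetical helper.
import numpy as np

def forecast(predict_next, history, num_segments):
    """Forecasting as next-token prediction, rolled out autoregressively."""
    segments = list(history)
    for _ in range(num_segments):
        segments.append(predict_next(segments))
    return segments[len(history):]

def anomaly_score(predict_next, history, observed_segment):
    """Predictive anomaly detection: deviation of the observed segment
    from the one generated from its history."""
    generated = predict_next(list(history))
    return float(np.mean((np.asarray(observed_segment) - np.asarray(generated)) ** 2))

# Toy stand-in model: the next segment simply repeats the last observed one.
predict_next = lambda segments: segments[-1]
history = [np.ones(4), 2 * np.ones(4)]
print(forecast(predict_next, history, num_segments=2))
print(anomaly_score(predict_next, history, observed_segment=5 * np.ones(4)))
```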

Performance

We compare Timer with state-of-the-art approaches and present the benefit of pre-training in data-scarce scenarios, known as the few-shot capability of large models.


Scalability

By increasing the parameters and pre-training scale, Timer achieves notable performance improvement: 0.231 $\to$ 0.138 (−40.3%), surpassing the previous state-of-the-art deep forecasters.
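
For reference, the quoted relative reduction follows directly from the two reported values:

$$\frac{0.231 - 0.138}{0.231} \approx 40.3\%$$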


Flexible Sequence Length

The decoder-only architecture provides additional flexibility to accommodate series of different lookback and forecast lengths.

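A minimal sketch of this flexibility, assuming a causal backbone over segment tokens as in the architecture sketch above; the module and segment length are illustrative, not Timer's implementation. Because attention is causal and parameters are shared across positions, the same model accepts any number of input segments.

```python
# Sketch: a causal, decoder-only backbone handles arbitrary lookback lengths.
# Sizes are illustrative.
import torch
import torch.nn as nn

segment_len = 24
embed = nn.Linear(segment_len, 64)
layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
backbone = nn.TransformerEncoder(layer, num_layers=2)
head = nn.Linear(64, segment_len)

def predict_next_segment(segments):            # segments: (batch, n, segment_len), any n
    n = segments.size(1)
    mask = nn.Transformer.generate_square_subsequent_mask(n)
    h = backbone(embed(segments), mask=mask)
    return head(h[:, -1])                      # prediction for segment n + 1

for lookback in (2, 5, 11):                    # different lookback lengths, same model
    out = predict_next_segment(torch.randn(1, lookback, segment_len))
    print(lookback, out.shape)                 # torch.Size([1, 24]) each time
```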

Future work

We are preparing to release the composition of datasets (UTSD), larger checkpoints, and code for pre-training. Please stay tuned for the update!

Citation

If you find this repo helpful, please cite our paper.

@article{liu2024timer,
  title={Timer: Transformers for Time Series Analysis at Scale},
  author={Liu, Yong and Zhang, Haoran and Li, Chenyu and Huang, Xiangdong and Wang, Jianmin and Long, Mingsheng},
  journal={arXiv preprint arXiv:2402.02368},
  year={2024}
}

Contact

If you have any questions or want to use the code, feel free to contact: