Scaling laws for large time-series models

Discovering power-law scaling relationships in large time-series transformer models, analogous to those found in language models.

Scaling laws for large language models (LLMs) have provided useful guidance for training ever larger models with predictable performance gains. Time series forecasting shares a sequential structure similar to that of language and is amenable to large-scale transformer architectures. Here we show that foundational decoder-only time series transformer models exhibit scaling behavior analogous to that of LLMs, with architectural details (aspect ratio and number of heads) having a minimal effect over broad ranges. We assemble a large corpus of heterogeneous time series data on which to train, and establish for the first time power-law scaling relations with respect to parameter count, dataset size, and training compute, spanning five orders of magnitude.
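For context, the power-law relations referred to here typically take the standard form used in LLM scaling studies, sketched below. The symbols $L$, $N$, $D$, and $C$ (test loss, parameter count, dataset size, training compute) and the exponents $\alpha_N$, $\alpha_D$, $\alpha_C$ are generic placeholders, not the specific constants or fits reported in this work.

```latex
% Generic power-law scaling ansatz (illustrative only; constants and
% exponents are placeholders, not the fits reported in this work):
\begin{align}
  L(N) &\approx \left(\frac{N_c}{N}\right)^{\alpha_N}, &
  L(D) &\approx \left(\frac{D_c}{D}\right)^{\alpha_D}, &
  L(C) &\approx \left(\frac{C_c}{C}\right)^{\alpha_C},
\end{align}
% where L is the test loss, N the parameter count, D the dataset size,
% and C the training compute; N_c, D_c, C_c set the scale of each law.
```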