Univariate Time Series Overview

Why do we care?

Understanding and predicting time series can help us make better decisions. Better decisions can lead to more profit, fewer losses, less waste of natural resources, and other benefits.

How can we predict the unpredictable?

By breaking down the problem into parts we know how to deal with.

We know how to deal with independent and identically distributed values:

  1. Draw a histogram and get summary statistics.
  2. Fit a distribution that makes sense.
  3. Assume that the distribution will continue to make sense into the future.

Thus, if we can get to i.i.d. values, or something close enough, then we can make predictions.
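The three steps above can be sketched in code. This is a minimal illustration on simulated data; the normal distribution and the scipy functions are my choices for the example, not prescribed by the text.

```python
import numpy as np
from scipy import stats

# Simulated stand-in for observed i.i.d. data
rng = np.random.default_rng(42)
values = rng.normal(loc=10.0, scale=2.0, size=500)

# Step 1: summary statistics (a histogram would accompany these in practice)
sample_mean, sample_sd = values.mean(), values.std(ddof=1)

# Step 2: fit a distribution that makes sense (here, a normal)
loc, scale = stats.norm.fit(values)

# Step 3: assume the fit continues to hold, so a 95% prediction
# interval for the next value comes straight from the fitted distribution
lower, upper = stats.norm.interval(0.95, loc=loc, scale=scale)
print(f"next value expected in [{lower:.2f}, {upper:.2f}] with 95% probability")
```

The same recipe works with any distribution that fits the data; the normal is just the simplest illustration.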

We predict that the random, patternless noise is going to keep going according to the same distribution as it has in the past.

But what about the patterns?

We can only predict patternless noise, so we:

  1. get rid of the patterns to get to the noise,
  2. predict the noise,
  3. put back the patterns in the reverse order of how we got rid of them.
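As a toy illustration of remove-predict-restore, suppose the only pattern is a straight-line trend. The linear trend and the simulated data here are assumptions for the sketch; real series usually need more than one pattern removed.

```python
import numpy as np

# Toy data: a linear trend plus i.i.d. noise
rng = np.random.default_rng(0)
t = np.arange(100)
series = 0.5 * t + rng.normal(0, 1.0, size=100)

# 1. Get rid of the pattern: fit and subtract a straight-line trend
slope, intercept = np.polyfit(t, series, 1)
noise = series - (slope * t + intercept)

# 2. Predict the noise: its mean (about zero) and its spread for uncertainty
noise_mean, noise_sd = noise.mean(), noise.std(ddof=1)

# 3. Put the pattern back: extend the trend and add the predicted noise
t_future = np.arange(100, 110)
forecast = slope * t_future + intercept + noise_mean
print(forecast)
```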

Models for stationary series

Patternless noise is also called white noise. The closest thing to white noise that still has patterns is a stationary series. A stationary time series has patterns, but those patterns are flat in the long run: they don't keep going up or keep going down, they keep coming back to the same middle ground. Just as important, the patterns themselves don't change over time.

If we have a stationary time series then we can fit a model for a stationary time series and get the residuals. If we fit the right model then the residuals will be white noise.

The most popular model for a stationary series is the Autoregressive Moving Average (ARMA) model. We will focus on this model the most.

What about the variance?

All of the methods above assume that the variance of the series is constant. If the variance is not constant around the patterns, then we must first find a transformation that makes it constant.
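The most common fix is a log transform, which turns multiplicative variation into additive variation. A minimal sketch on simulated data (the toy series is an assumption for illustration):

```python
import numpy as np

# Toy series whose spread grows with its level (multiplicative noise)
rng = np.random.default_rng(2)
t = np.arange(200)
series = np.exp(0.02 * t) * np.exp(rng.normal(0, 0.1, size=200))

# A log transform makes the variation roughly constant around the trend
log_series = np.log(series)

# Compare the spread of the first and second halves on each scale:
# very different before the transform, similar after
print(series[:100].std(), series[100:].std())
print(log_series[:100].std(), log_series[100:].std())
```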


Putting it all together, the full recipe is:

  1. Make sure the variance is constant.
  2. Get the series stationary.
  3. Fit a model for the stationary series.
  4. Fit a distribution to the residuals.
  5. Use your distribution to make intervals for the future uncertainty.
  6. Add back your stationary model.
  7. Reverse what you did to get the series stationary, and reverse what you did to get the variance constant.
  8. Report your predictions and intervals on the original scale.

(If you did nothing at a step then there’s nothing to reverse.)

Sean van der Merwe
Coordinator of UFS Statistical Consultation Unit