An Analyst’s Toolbox: Illustrating Forecast Risk
jvgillette.com • July 15, 2013
by James V. Gillette

No one believes that a forecast will be precisely correct. Forecasting almost always involves predicting a discrete value from a large number of possibilities. If you were asked to forecast the number of light vehicles that will be produced in the U.S. next year, for example, after carefully considering historical trends and developing a number of assumptions for the economy and the auto market specifically, you might come up with a “point estimate.” Let’s say it’s 14.1 million units.

At the end of the year, light vehicle production might tally to something near 14.1 million, but the odds of exactly 14,100,000 units being made are, you guessed it, worse than 1 in 1,000,000. How close the actual comes to 14.1 million depends on three factors:

  1. Is vehicle production predictable? (More on this in a separate article.)
  2. Was your analysis properly formulated and executed?
  3. Were you lucky?

One of the ways a forecaster can communicate the variance inherent in the forecast is to supply a measure of the range of values within which the results might fall. This most often takes the form of a probability distribution. Setting aside that the commonly used “normal” distribution is often an inaccurate representation of the observed set of results, it is convenient to assume “normality” as a starting point for analysis, making modifications later as needed.

Researchers have found that, more often than not, forecasters demonstrate overconfidence in the precision of their estimates.[i] That is, when forecasters are asked to identify a range of possibilities surrounding the forecast (“base,” “pessimistic,” and “optimistic” values), research has shown that the actual occurrence falls outside the predicted range in the neighborhood of 50% of the time.[ii]

I recently examined thirteen forecasts that provided base, pessimistic, and optimistic values and found that the actual fell outside the predicted range ten of thirteen times. That’s a 77% error rate.

These forecasts missed the target by a wide margin, and they failed to provide the intended user with a credible sense of the variability inherent in the time series. Examining the statistical characteristics, in each case the forecaster had set the spread between the “pessimistic” and “optimistic” extremes and the forecasted “base” case at far less than a single standard error of estimate of an ordinary least squares function (linear or logarithmic). The base plus or minus one standard error should encompass 68% of the probable occurrences. (This, of course, assumes the probable occurrences are “normal.” If, in fact, they are not and exhibit “fat tails,” the range of occurrences will be even wider.) Obviously, if the actual fell within the predicted range in only three of the thirteen examined forecasts, the pessimistic-to-optimistic range selected was far too narrow.

Common practice in business is to strive for a range that covers 95% of the potential occurrences (equivalent to roughly two standard deviations for a normal distribution). Most business practitioners will feel comfortable with a forecast if they understand that the actual will fall below the range only 2.5% of the time (and above it 2.5% of the time).
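
As a quick sanity check on those coverage figures, the short Python snippet below (my tooling choice; nothing in the original analysis depends on it) computes the two-sided normal coverage directly. Strictly speaking, plus or minus 1.96 standard deviations covers exactly 95%; plus or minus two covers about 95.4%.

    from math import erf, sqrt

    # Two-sided coverage of a normal distribution within +/- k standard
    # deviations: P(|Z| <= k) = erf(k / sqrt(2))
    for k in (1, 2):
        print(f"+/-{k} sd covers {erf(k / sqrt(2)):.1%}")
    # +/-1 sd covers 68.3%
    # +/-2 sd covers 95.4%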

Chart 1: North American Light Vehicle Production, 1980-2007, with OLS trend line (Data: US DOC)

Chart 2: North American Light Vehicle Production, 1980-2007, Residual Plot (Data: US DOC)

Charts 1 and 2 illustrate how an analyst might go about providing “confidence intervals” around a trend forecast. North American light vehicle production for the years 1980-2007 is plotted in Chart 1 along with an OLS trend line. The calculated linear function is:

Y = 10,766,407 + 225,867X

The coefficient of determination (R²) is 0.69, indicating that 69% of the variation in the actual annual values is explained by the linear function.

Chart 2 plots the residual values: the errors, or the portions of the actual values not explained by the linear function. Note that there is a considerable amount of year-to-year variation from the predicted value. Because of the sinusoidal nature of the residuals, a reasonable assumption is that there is cyclicality the linear function is not capturing. OLS software calculates the “standard error of estimate” (SEE), which is the standard deviation of the residuals. In this case, the SEE is 1,273,203, which implies that 68% of the time the actual will fall within 1,273,203 units on either side of the value predicted by the linear function. Two standard errors would be over 2.5 million units, encompassing 95% of the occurrences (again assuming the time series is “normally” distributed).
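
For readers who want to reproduce this style of calculation, here is a minimal Python/numpy sketch. The production series is synthetic (the published trend plus normal noise at the published SEE), standing in for the US DOC data; substitute the actual 1980-2007 figures to reproduce the charts.

    import numpy as np

    # Years 1980-2007 coded 1..28, matching the trend equation above
    x = np.arange(1, 29)

    # Synthetic stand-in for the US DOC series: the published trend plus
    # normal noise at the published SEE. Replace with the real annual
    # production figures to reproduce Charts 1 and 2.
    rng = np.random.default_rng(0)
    production = 10_766_407 + 225_867 * x + rng.normal(0, 1_273_203, x.size)

    # OLS linear fit: np.polyfit returns (slope, intercept) for degree 1
    slope, intercept = np.polyfit(x, production, 1)
    predicted = intercept + slope * x
    residuals = production - predicted

    # Coefficient of determination (R^2)
    ss_res = (residuals ** 2).sum()
    ss_tot = ((production - production.mean()) ** 2).sum()
    r_squared = 1 - ss_res / ss_tot

    # Standard error of estimate: residual standard deviation, with two
    # degrees of freedom absorbed by the fitted intercept and slope
    see = np.sqrt(ss_res / (x.size - 2))

    print(f"trend: {intercept:,.0f} + {slope:,.0f}X   R^2 = {r_squared:.2f}   SEE = {see:,.0f}")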

Chart 3: North American Light Vehicle Production, trend line with ±2 SEE confidence intervals (Data: US DOC)

Chart 3 plots the trend line with the “confidence intervals” of plus or minus two standard errors. It is clear that from 1980 through 2007 the range of values between the confidence intervals adequately captures the year-over-year variation in North American Light Vehicle Production.
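
The band itself is easy to reconstruct from the published coefficients (which are rounded, so values will differ from the charts by small amounts); a minimal sketch:

    import numpy as np

    # Chart 3's confidence band from the published trend and SEE
    intercept, slope, see = 10_766_407, 225_867, 1_273_203
    x = np.arange(1, 29)             # 1980-2007 coded 1..28
    trend = intercept + slope * x
    upper = trend + 2 * see          # ~95% band, assuming normality
    lower = trend - 2 * see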

But, have we done enough?

Chart 4: North American Light Vehicle Production, trend extrapolated to 2008 (Data: US DOC)

If we were to extend the trend line (i.e., “trend extrapolation”), the forecasted value for 2008 would be (see Chart 4):

17,316,564

whereas, ex post, we know the actual was:

12,641,422

an error of almost 5 million units. The lower confidence boundary (remember, only 2.5% of the results are assumed to fall below this level), two standard errors below the predicted value, is:

14,770,158

Still way too high. If we recalibrate and use the 2007 actual (15,101,385) as our base, the lower confidence boundary is:

12,554,979

This is very close to the actual result. An analyst using the SEE to illustrate the potential range of outcomes, extended from the 2007 base, would have provided the client with a reasonable range of possible results.
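
The arithmetic in this section can be checked directly from the published figures (the trend coefficients are rounded, so the extrapolated value lands a few units off the article’s 17,316,564):

    # Extrapolation and recalibration from the published figures
    intercept, slope, see = 10_766_407, 225_867, 1_273_203

    x_2008 = 29                              # 1980 coded as 1
    trend_2008 = intercept + slope * x_2008  # ~17.3 million units
    lower_2008 = trend_2008 - 2 * see        # lower boundary, ~14.8 million

    actual_2008 = 12_641_422                 # ex post, far below the boundary

    # Recalibrated: anchor the band on the 2007 actual instead of the trend
    actual_2007 = 15_101_385
    recal_lower = actual_2007 - 2 * see      # 12,554,979, close to the 2008 actual

    print(f"{trend_2008:,}  {lower_2008:,}  {recal_lower:,}")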

The analyst should also have been prompted at that point to re-examine her assumption of linearity and, perhaps, the “normality” of the time series. In hindsight, what was to follow was truly a “fat-tailed” experience in this context. In a subsequent article, we will examine the degree to which a “real-world” time series like vehicle production can be forecasted given its complex environment and discuss the relative merits of “dynamic” versus “static” models.


[i] Kahneman, Daniel, and Amos Tversky, “Intuitive Prediction: Biases and Corrective Procedures,” Technical Report PTR-1042-77-6, Office of Naval Research, Arlington, VA, June 1977.

[ii] Arkes, Hal R., http://www.forecastingprinciples.com/content/view/144/7/, accessed April 27, 2011.