Macroeconomic Data Forecasts

Introduction
Frequency Matching
Seasonality Concerns
Included Models
Sensitivity Analysis
Backtesting

Introduction

Capitalytics collects several data series as part of its service. Because banks with total consolidated assets of more than $10 billion are required to execute and report the results of stress tests under the Dodd-Frank Wall Street Reform and Consumer Protection Act of 2010 (see, e.g., the text of the signed law and the FDIC press release regarding the Economic Scenarios for 2017 Stress Testing), Capitalytics offers forecasts for the following variables to support banks in using their investment in government-mandated stress tests as part of their business and profitability planning.

  1. Real GDP growth
  2. Nominal GDP growth
  3. Real disposable income growth
  4. Nominal disposable income growth
  5. Unemployment rate
  6. CPI inflation rate
  7. 1-month Treasury yield
  8. 3-month Treasury yield
  9. 6-month Treasury yield
  10. 1-year Treasury yield
  11. 3-year Treasury yield
  12. 5-year Treasury yield
  13. 7-year Treasury yield
  14. 10-year Treasury yield
  15. 20-year Treasury yield
  16. 30-year Treasury yield
  17. BBB corporate yield
  18. Mortgage rate
  19. Prime rate
  20. US Average Retail Gasoline Price ($/gal; all grades, all formulations)
  21. S&P 500 Stock Price Index
  22. US Federal Reserve Overnight Loan Rate
  23. Moody’s AAA Rate
  24. Moody’s BAA Rate
  25. Dow Jones Total Stock Market Index
  26. House Price Index
  27. Commercial Real Estate Price Index
  28. Market Volatility Index (VIX)
  29. Euro Area Real GDP Growth
  30. Euro Area Bilateral Dollar Exchange Rate (USD/Euro)
  31. Japan Real GDP Growth
  32. Japan Bilateral Dollar Exchange Rate (Yen/USD)
  33. UK Real GDP Growth
  34. UK Bilateral Dollar Exchange Rate (USD/Pound)

These forecasted values are intended to provide an expectation of the future performance of the given variables, and to serve as inputs to internally generated financial models. Following what has been reported as best practice in the contemporary literature (see, e.g., "The Combination of Forecasts" by J. M. Bates and C. W. J. Granger, 1969), Capitalytics uses multiple forecasting techniques to fit past data (with parameters for each model chosen so as to minimize residual values), and then aggregates the results of these models to generate forecasts for future values.
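The combination step can be sketched as follows. This is a minimal illustration in the spirit of Bates and Granger (1969), not Capitalytics' actual weighting scheme: each model's forecast is weighted in inverse proportion to its past mean squared error, so that historically more accurate models contribute more.

```python
# Sketch: combine several models' forecasts with inverse-MSE weights.
# (Assumed scheme for illustration; the production weighting is dynamic.)

def combine_forecasts(forecasts, past_errors):
    """forecasts: {model_name: forecast_value}
    past_errors: {model_name: list of past residuals}"""
    mse = {m: sum(e * e for e in errs) / len(errs)
           for m, errs in past_errors.items()}
    inv = {m: 1.0 / v for m, v in mse.items()}      # smaller MSE -> larger weight
    total = sum(inv.values())
    weights = {m: v / total for m, v in inv.items()}  # normalize to sum to 1
    combined = sum(weights[m] * forecasts[m] for m in forecasts)
    return combined, weights

# The model with smaller past residuals ("ets" here) dominates the blend.
forecast, weights = combine_forecasts(
    {"ets": 2.1, "arima": 2.5},
    {"ets": [0.1, -0.2, 0.1], "arima": [0.4, -0.5, 0.3]},
)
```

The combined forecast always lies between the component forecasts, since the weights are non-negative and sum to one.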

Frequency Matching

All data samples are processed on a quarterly basis using end-of-quarter values. Where quarterly values are not published, they are calculated from the most granular published values available; values for missing days and non-workdays are populated by duplicating the most recent appropriate value (for example, December 25th's value for certain metrics would be assumed to be the same as that of the most immediately preceding workday, e.g., December 22nd, 23rd, or 24th; all values from October, November, and December would then be averaged to generate a 4th-quarter value).
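The rule above can be sketched in a few lines. This is a simplified illustration of the assumed fill-then-average behavior, not production code: missing days are forward-filled with the most recent available value, and the filled daily values are averaged within each quarter.

```python
# Sketch: forward-fill missing days, then average daily values per quarter.

from datetime import date, timedelta

def quarterly_average(daily, start, end):
    """daily: {date: value} with gaps for holidays/weekends.
    Returns {(year, quarter): average of forward-filled daily values}."""
    filled = {}
    last = None
    d = start
    while d <= end:
        if d in daily:
            last = daily[d]
        filled[d] = last  # duplicate the most recent value on missing days
        d += timedelta(days=1)
    sums, counts = {}, {}
    for d, v in filled.items():
        q = (d.year, (d.month - 1) // 3 + 1)
        sums[q] = sums.get(q, 0.0) + v
        counts[q] = counts.get(q, 0) + 1
    return {q: sums[q] / counts[q] for q in sums}

# December 23-25 carry forward December 22's value before averaging.
obs = {date(2017, 12, 22): 2.0, date(2017, 12, 26): 4.0}
q4 = quarterly_average(obs, date(2017, 12, 22), date(2017, 12, 27))
```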

Seasonality Concerns

Seasonal adjustment is the process of estimating and removing movement in a time series caused by regular seasonal variation in activity, e.g., an increase in air travel during summer months. Seasonal movement makes it difficult to see underlying changes in the data. Monthly shifts in data as well as short and long-term trends can be best seen through seasonally-adjusted data.

X-13ARIMA-SEATS is the industry-standard tool for seasonal adjustment, particularly for official statistics published by national statistics offices. It gives a choice of the SEATS algorithm (Bank of Spain) and X11 (Statistics Canada and US Census Bureau), combining both into a single application that is available for download from the US Census Bureau. The tool automatically handles outliers, level shifts, transformations, and moving holidays as necessary.

All of the series listed above are publicly available, and are provided already adjusted for seasonality where appropriate. However, Capitalytics is able to use the X-13ARIMA-SEATS tool to adjust data series if and when necessary.
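What multiplicative seasonal adjustment means can be shown with a toy example. This is an illustration only, not the X-13ARIMA-SEATS algorithm: seasonal indices are estimated as each quarter position's average ratio to the overall mean, and the series is adjusted by dividing those indices out.

```python
# Toy multiplicative seasonal adjustment for quarterly data (illustration
# only; production adjustment uses the Census Bureau's X-13ARIMA-SEATS).

def seasonally_adjust(values, period=4):
    """values: list of quarterly observations. Returns (adjusted, indices)."""
    overall = sum(values) / len(values)
    # Average each seasonal position's ratio to the overall mean.
    indices = []
    for pos in range(period):
        ratios = [v / overall for i, v in enumerate(values) if i % period == pos]
        indices.append(sum(ratios) / len(ratios))
    # Divide the seasonal factor out of each observation.
    adjusted = [v / indices[i % period] for i, v in enumerate(values)]
    return adjusted, indices

# A flat series with a repeating spike in every first quarter:
raw = [120, 100, 100, 100, 120, 100, 100, 100]
adj, idx = seasonally_adjust(raw)
# With this data the adjusted series is flat at the overall mean (105).
```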

Included Models

Capitalytics uses a hybrid model for forecasting future values of these data series. Our model is composed of four separate forecasting algorithms, the results of which are linearly combined based on a set of dynamically determined weights. The algorithms used, and the process for combining their results, are given below.

  1. State Space Models with Exponential Smoothing

    Exponential smoothing methods have been in use since the 1950s, and remain very popular forecasting methods for relevant applications (see, e.g., "Forecasting Seasonals and Trends by Exponentially Weighted Averages" by Charles C. Holt, 1957). As these methods have been developed into several similar and interrelated techniques, a framework classifying the different model types has been presented in the contemporary literature. Specifically, ETS forecasting models accommodate several different types of models with varying trend, seasonality, and error components. A series' trend is its long-term growth rate, its seasonality is its scheme of repetition, and its error is the random, non-reproducible component.

    State Space Models use unobserved auxiliary variables to store relevant information about past values of a time series, in order to aid in forecasting future values. Based on the structure of the state variables, particularly as they relate to a time series' trend, seasonality, and error components, one of a small number of model types may be easily fit to the series; similarly, a well-prescribed solution for a forecast that applies to the time series may then be used. Notationally, we track the form of the error, trend, and seasonality components (hence the ETS designation for this class of models); these forms are generally classified as Non-existent (denoted "N"), Additive (with or without a dampening constant; denoted "A" or "Ad", respectively), or Multiplicative (with or without a dampening constant; denoted "M" or "Md", respectively). For example, a time series that appears to have a multiplicative error component, no trend component, and an additive seasonality component may be denoted "ETS(M,N,A)". In some cases, the error term may be removed from the discussion by bounding its form (e.g., as ~NID(0, s^2) or IID); in those cases, only the trend and seasonality components are acknowledged.

    One of the models that Capitalytics uses in its calculations is the "theta model" of Assimakopoulos and Nikolopoulos (2000). This forecast has been shown to be equivalent to a simple exponential smoothing model with a fixed, finite drift (an ETS(A,N) method). (See Forecasting with Exponential Smoothing by Rob J. Hyndman, Anne B. Koehler, J. Keith Ord, and Ralph D. Snyder, 2008.) The series' seasonality is quantified and, if significant, a multiplicative decomposition is used to adjust the series prior to analysis (and is reversed afterward). Several techniques can be used to compute optimal parameter values for this type of model very efficiently, with a goal of minimizing the mean squared residual (error), mean absolute residual, or another metric over the time series (less the number of values required to initialize the smoothing function). This technique is extremely fast and robust, providing a reasonable forecast in all but the most degenerate cases of data.
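    A hedged sketch of the fitting idea follows: simple exponential smoothing with a fixed drift (the ETS form the theta method reduces to), with the smoothing parameter chosen by a brute-force grid search minimizing the sum of squared one-step-ahead residuals, one of the fitting criteria mentioned above. The grid search and drift value here are assumptions for illustration, not the optimizer Capitalytics uses.

```python
# Sketch: simple exponential smoothing with a fixed drift, with alpha
# fit by grid search over squared one-step-ahead residuals.

def ses_drift_forecast(y, drift, alpha):
    level = y[0]
    sse = 0.0
    for t in range(1, len(y)):
        pred = level + drift                 # one-step-ahead forecast
        sse += (y[t] - pred) ** 2
        level = alpha * y[t] + (1 - alpha) * (level + drift)  # update level
    return level + drift, sse                # next-period forecast, fit error

def fit_alpha(y, drift):
    # Brute-force search over alpha in (0, 1); pick the smallest SSE.
    best = min((ses_drift_forecast(y, drift, a / 100)[1], a / 100)
               for a in range(1, 100))
    return best[1]

y = [10.0, 10.5, 11.0, 11.5, 12.0]   # a series with a steady 0.5 drift
alpha = fit_alpha(y, drift=0.5)
forecast, _ = ses_drift_forecast(y, 0.5, alpha)   # 12.0 + 0.5 = 12.5
```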

    In addition to the "theta model", Capitalytics uses a more complex algorithm to determine the ideal state space model structure for a time series, in which the error, trend, and seasonality components may each take any of the forms given above. Each case is examined with an optimal calculation of the appropriate parameters. (It should be noted that multiplicative trend methods tend to produce poor forecasts; see http://otexts.org/fpp2/sec-7-6-Taxonomy.html.) A log-likelihood-based indicator (similar to Akaike's Information Criterion) is then used to select the ideal model from among the candidates (see http://otexts.org/fpp2/Regr-SelectingPredictors.html#Regr-SelectingPredictors).

  2. ARIMA models

    Exponential smoothing methods are useful for making forecasts and make no assumptions about the correlations between successive values of the time series. However, prediction intervals for forecasts made using exponential smoothing require that the forecast errors be uncorrelated and normally distributed with mean zero and constant variance.

    While exponential smoothing methods make no assumptions about correlations between successive values of the time series, in some cases a better predictive model can be built by taking those correlations into account. Autoregressive Integrated Moving Average (ARIMA) models include an explicit statistical model for the irregular component of a time series, one that allows for non-zero autocorrelations in the irregular component.
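    The autocorrelation structure that ARIMA exploits (and exponential smoothing ignores) is measured by the sample autocorrelation function; a minimal sketch:

```python
# Sketch: sample autocorrelation function (ACF) at lags 1..max_lag.

def acf(y, max_lag):
    n = len(y)
    mean = sum(y) / n
    c0 = sum((v - mean) ** 2 for v in y) / n          # lag-0 autocovariance
    out = []
    for k in range(1, max_lag + 1):
        ck = sum((y[t] - mean) * (y[t + k] - mean) for t in range(n - k)) / n
        out.append(ck / c0)                           # normalize by variance
    return out

# An alternating series is strongly negatively autocorrelated at lag 1
# and positively autocorrelated at lag 2:
r = acf([1, -1, 1, -1, 1, -1, 1, -1], max_lag=2)
```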

    The model families and hybrids considered include:

    1. Least-squares error fitting models
    2. Holt-Winters models
    3. ARIMA models
      1. ARIMA model based on actual values
      2. ARIMA models based on incremental differences
      3. Holt-Winters/ARIMA hybrid model
      4. ARIMA/ETS hybrid model
    4. Model Matching
    When fitting an ARIMA model to a set of time series data, the following procedure provides a useful general approach.
    1. Plot the data. Identify any unusual observations.
    2. If necessary, transform the data (using a Box-Cox transformation) to stabilize the variance.
    3. If the data are non-stationary: take first differences of the data until the data are stationary.
    4. Examine the ACF/PACF: Is an AR(p) or MA(q) model appropriate?
    5. Try your chosen model(s), and use the AICc to search for a better model.
    6. Check the residuals from your chosen model by plotting the ACF of the residuals, and doing a portmanteau test of the residuals. If they do not look like white noise, try a modified model.
    7. Once the residuals look like white noise, calculate forecasts.
    The automated algorithm only takes care of steps 3-5, so even if you use it, you will still need to perform the other steps yourself. See https://www.otexts.org/sites/default/files/resize/fpp/images/Figure-8-10-570x752.png
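    Steps 3-5 above can be sketched for the simplest case: difference a trending series once to make it stationary, fit an AR(1) to the differences by least squares, and forecast the next original value by undoing the differencing. This is an illustration of the procedure, not the automated algorithm referred to above.

```python
# Sketch: first-difference, fit a zero-intercept AR(1) by least squares,
# forecast the next value.

def difference(y):
    return [y[t] - y[t - 1] for t in range(1, len(y))]

def fit_ar1(z):
    """Least-squares slope of z[t] on z[t-1] (zero-intercept AR(1))."""
    num = sum(z[t] * z[t - 1] for t in range(1, len(z)))
    den = sum(z[t - 1] ** 2 for t in range(1, len(z)))
    return num / den

y = [100.0, 102.0, 103.0, 103.5, 103.75]   # increments halve each step
z = difference(y)                           # [2.0, 1.0, 0.5, 0.25]
phi = fit_ar1(z)                            # exactly 0.5 for this series
next_diff = phi * z[-1]
forecast = y[-1] + next_diff                # undo the differencing: 103.875
```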

Sensitivity Analysis

Backtesting