Monday, February 17, 2014

Thoughts on "Factor-Augmented VAR's"

Let's use the standard term, principal-components regression (PCR). It's irrelevant whether it's a "regular" regression or an autoregression, univariate or multivariate. Econometricians have always liked PCR. (I am no exception.) In this "data-rich" age it's more useful than ever, and things like Bernanke, Boivin, and Eliasz's factor-augmented vector autoregressions (FAVAR's) have taken PCR to new heights of popularity.

But PCR has some awkward aspects, well-known in some circles (see, e.g., Hastie, Tibshirani, and Friedman, The Elements of Statistical Learning, Chapter 3) but curiously little-known in others.

In particular:

(1) First-step PC extraction is "unsupervised" (in machine-learning jargon). Hence the x-variable linear combinations given by the PC's may differ importantly from the best x-variable linear combinations for predictive purposes. This is unfortunate because second-step PCR typically is used for prediction!

(2) PCR shrinks in rather awkward/extreme directions/amounts. It shrinks the excluded PC's completely to 0 (by construction), yet it leaves the included PC's entirely unshrunk, regardless of the relative sizes of their associated eigenvalues.
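To see both points concretely, here's a minimal sketch of two-step PCR in Python with scikit-learn. The data-generating process, the variable names, and the number of retained components are purely illustrative assumptions (nothing here is the FAVAR setup itself); the point is just that the PC's are chosen without ever consulting y, and that the implied shrinkage is all-or-nothing.

```python
# Minimal PCR sketch (illustrative data and names, not "the" FAVAR setup).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n, p = 200, 10
scales = np.linspace(3.0, 0.3, p)              # column variances decay from large to small
X = rng.standard_normal((n, p)) * scales
y = X[:, -1] + 0.1 * rng.standard_normal(n)    # y is driven by a LOW-variance direction of X

k = 3  # number of PC's retained -- chosen without ever looking at y (point 1)

# Step 1: unsupervised PC extraction (uses X only)
pca = PCA(n_components=k).fit(X)
F = pca.transform(X)                           # the first k principal components

# Step 2: regress y on the retained PC's
pcr = LinearRegression().fit(F, y)
print("PCR in-sample R^2:", round(pcr.score(F, y), 3))   # poor here: the predictive direction was discarded

# Implied shrinkage along the p principal directions (point 2):
# retained directions are not shrunk at all (factor 1), excluded ones are zeroed out.
pcr_factors = np.r_[np.ones(k), np.zeros(p - k)]
print("PCR shrinkage factors by PC:", pcr_factors)
```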

So, what to do?

(1) Wold's partial least squares (PLS) attempts to address issue (1) by extracting the x-variable linear combinations in a supervised way, using y as well as x. Recent interesting work, moreover, extends PLS in powerful ways, as with the Kelly-Pruitt three-pass regression filter and its amazing apparent success in predicting aggregate equity returns. (Both remedies are sketched in code after point (2).)

(2) Ridge regression (among others) addresses issue (2). It includes all PC's and shrinks them toward 0 according to the relative sizes of their associated eigenvalues.
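Here is a companion sketch, on the same kind of illustrative data as above, of the two remedies: ridge (scikit-learn's Ridge), whose implicit shrinkage factor along the j-th principal direction is d_j^2/(d_j^2 + lambda), and PLS (scikit-learn's PLSRegression), which chooses its components using y as well as X. The penalty lambda and the number of PLS components are arbitrary illustrative choices, not recommendations.

```python
# Companion sketch: ridge's eigenvalue-dependent shrinkage and a supervised PLS fit.
# Same illustrative data-generating process as the PCR sketch above.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n, p = 200, 10
scales = np.linspace(3.0, 0.3, p)
X = rng.standard_normal((n, p)) * scales
y = X[:, -1] + 0.1 * rng.standard_normal(n)

# Ridge keeps ALL principal directions; along direction j it shrinks by
# d_j^2 / (d_j^2 + lambda), where d_j is the j-th singular value of centered X,
# so low-eigenvalue directions are shrunk hardest (addressing point 2).
lam = 10.0
d = np.linalg.svd(X - X.mean(axis=0), compute_uv=False)
print("Ridge shrinkage factors by PC:", np.round(d**2 / (d**2 + lam), 3))
ridge = Ridge(alpha=lam).fit(X, y)

# PLS extracts components using X AND y (supervised), so it can pick up the
# predictive low-variance direction that PCR's top PC's miss (addressing point 1).
pls = PLSRegression(n_components=3).fit(X, y)
print("In-sample R^2 -- ridge:", round(ridge.score(X, y), 3),
      "| PLS:", round(pls.score(X, y), 3))
```

On toy data like these, both remedies can recover predictive signal that a few top PC's discard, which is exactly the contrast points (1) and (2) are meant to highlight.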
