Part of the book series: Springer Texts in Statistics (STS)

Abstract

High-dimensional data can be challenging to analyze. They are difficult to visualize, need extensive computer resources, and often require special statistical methodology. Fortunately, in many practical applications, high-dimensional data have most of their variation in a lower-dimensional space that can be found using dimension reduction techniques. There are many methods designed for dimension reduction, and in this chapter we will study two closely related techniques, factor analysis and principal components analysis, often called PCA.
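
As a concrete companion to the abstract, here is a minimal PCA sketch (not taken from the chapter; the simulated data and all variable names are hypothetical): the sample covariance matrix is eigendecomposed and the observations are projected onto the leading principal components, which carry most of the variation.

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulate n observations of d variables driven by k latent factors plus noise,
    # so most of the variation lives in a k-dimensional subspace.
    n, d, k = 500, 5, 2
    factors = rng.standard_normal((n, k))
    loadings = rng.standard_normal((k, d))
    X = factors @ loadings + 0.1 * rng.standard_normal((n, d))

    Xc = X - X.mean(axis=0)               # center each variable
    S = np.cov(Xc, rowvar=False)          # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(S)  # eigenvalues returned in ascending order
    order = np.argsort(eigvals)[::-1]     # reorder by variance explained, descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]

    scores = Xc @ eigvecs[:, :k]          # projections onto the first k principal components
    explained = eigvals[:k].sum() / eigvals.sum()
    print(f"proportion of variance retained by {k} components: {explained:.3f}")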

Notes

  1. The normalized eigenvectors are determined only up to sign, so they could be multiplied by \(-1\) to become \((-0.71,-0.71)\) and \((0.71,-0.71)\).

  2. As mentioned previously, the eigenvectors are determined only up to a sign reversal, since multiplication by \(-1\) would not change the spanned space or the norm (a small sketch illustrating this follows these notes). Thus, we could instead say the eigenvector has only negative values, but this would not change the interpretation.

  3. The graph would, of course, be everywhere increasing if \(\mathbf{o}_{2}\) were multiplied by \(-1\).
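
To make the sign ambiguity described in Notes 1 and 2 concrete, the short sketch below (not from the chapter; the matrix is chosen only for illustration, with unit eigenvectors roughly \((0.71,0.71)\) and \((0.71,-0.71)\)) checks that a normalized eigenvector and its negative both satisfy the eigenvector equation and have the same norm, so either sign is a valid choice.

    import numpy as np

    # A symmetric matrix chosen only for illustration.
    S = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
    eigvals, eigvecs = np.linalg.eigh(S)   # eigenvalues in ascending order

    o2 = eigvecs[:, -1]                    # unit eigenvector for the largest eigenvalue
    for v in (o2, -o2):
        # Both sign choices satisfy S v = lambda v and have unit norm,
        # so either one is a valid normalized eigenvector.
        print(v, np.allclose(S @ v, eigvals[-1] * v), np.linalg.norm(v))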

Copyright information

© 2015 Springer Science+Business Media New York

About this chapter

Cite this chapter

Ruppert, D., Matteson, D.S. (2015). Factor Models and Principal Components. In: Statistics and Data Analysis for Financial Engineering. Springer Texts in Statistics. Springer, New York, NY. https://doi.org/10.1007/978-1-4939-2614-5_18
