Local Factor Analysis with Automatic Model Selection: A Comparative Study and Digits Recognition Application

  • Conference paper
Artificial Neural Networks – ICANN 2006 (ICANN 2006)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 4132)

Abstract

A further investigation is made on an adaptive local factor analysis algorithm from Bayesian Ying-Yang (BYY) harmony learning, which performs parameter learning with automatic determination of both the number of components and the number of factors in each component. A comparative study has been conducted on simulated data sets and several real-world data sets. The algorithm is compared not only with a recent approach called Incremental Mixture of Factor Analysers (IMoFA) but also with the conventional two-stage implementation of maximum likelihood (ML) plus model selection, namely, using the EM algorithm for parameter learning on a series of candidate models and then selecting the best candidate by AIC, CAIC, or BIC. Experiments show that IMoFA and ML-BIC outperform ML-AIC and ML-CAIC, while BYY harmony learning considerably outperforms both IMoFA and ML-BIC. Furthermore, this BYY learning algorithm has been applied to the popular MNIST database for digit recognition with promising performance.
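
To make the conventional baseline concrete, here is a minimal Python sketch (not code from the paper) of the two-stage ML-plus-BIC procedure described above, simplified to a single factor-analysis component: each candidate factor number is fitted by maximum likelihood using scikit-learn's FactorAnalysis, and the candidate with the lowest BIC is kept. A full reproduction of the study would instead search over mixtures of factor analysers, i.e. over both the component number and the per-component factor numbers, whereas the BYY harmony learning algorithm determines both automatically during a single run of parameter learning.

    # Illustrative two-stage ML + BIC selection for a single factor-analysis model.
    # This is a simplified stand-in for the ML-BIC baseline described in the abstract,
    # not the authors' local factor analysis or BYY harmony learning algorithm.
    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    def select_factors_by_bic(X, max_factors=10, random_state=0):
        n, d = X.shape
        best = None
        for q in range(1, max_factors + 1):
            # Stage 1: maximum-likelihood fit (EM) for the candidate factor number q.
            fa = FactorAnalysis(n_components=q, random_state=random_state).fit(X)
            loglik = fa.score(X) * n  # score() returns the average log-likelihood per sample
            # Free parameters: mean (d) + loadings (d*q) + unique variances (d);
            # the q(q-1)/2 rotational correction is omitted here for simplicity.
            n_params = d + d * q + d
            # Stage 2: score the candidate with BIC and keep the minimiser.
            bic = -2.0 * loglik + n_params * np.log(n)
            if best is None or bic < best[1]:
                best = (q, bic, fa)
        return best  # (selected factor number, its BIC value, fitted model)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Synthetic data with a 3-factor structure embedded in 10 dimensions.
        Z = rng.normal(size=(500, 3))
        W = rng.normal(size=(3, 10))
        X = Z @ W + 0.1 * rng.normal(size=(500, 10))
        q, bic, _ = select_factors_by_bic(X)
        print(f"Selected factor number: {q} (BIC = {bic:.1f})")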


References

  1. Redner, R.A., Walker, H.F.: Mixture densities, maximum likelihood, and the EM algorithm. SIAM Review 26, 195–239 (1984)

  2. Kambhatla, N., Leen, T.K.: Fast non-linear dimension reduction. In: Advances in NIPS, vol. 6. Morgan Kaufmann, San Francisco (1994)

  3. Hinton, G.E., Revow, M., Dayan, P.: Recognizing handwritten digits using mixtures of linear models. In: Advances in NIPS, vol. 7. MIT Press, Cambridge (1995)

  4. Akaike, H.: A new look at statistical model identification. IEEE Trans. Automatic Control 19, 716–723 (1974)

  5. Barron, A., Rissanen, J.: The minimum description length principle in coding and modeling. IEEE Trans. Information Theory 44, 2743–2760 (1998)

  6. Bozdogan, H.: Model selection and Akaike’s information criterion (AIC): the general theory and its analytical extensions. Psychometrika 52(3), 345–370 (1987)

  7. Figueiredo, M.A.T., Jain, A.K.: Unsupervised learning of finite mixture models. IEEE Trans. Pattern Analysis and Machine Intelligence 24(3), 381–396 (2002)

  8. Rubin, D., Thayer, D.: EM algorithms for ML factor analysis. Psychometrika 47(1), 69–76 (1982)

  9. Schwarz, G.: Estimating the dimension of a model. The Annals of Statistics 6(2), 461–464 (1978)

  10. Ghahramani, Z., Beal, M.: Variational inference for Bayesian mixture of factor analysers. In: Advances in NIPS, vol. 12, pp. 449–455 (2000)

  11. Salah, A.A., Alpaydin, E.: Incremental Mixtures of Factor Analysers. In: Proc. 17th Intl Conf. on Pattern Recognition, vol. 1, pp. 276–279 (2004)

  12. LeCun, Y., et al.: Gradient-Based Learning Applied to Document Recognition. Proceedings of the IEEE 86(11), 2278–2324 (1998)

  13. Xu, L.: Multisets Modeling Learning: An Unified Theory for Supervised and Unsupervised Learning. In: Proc. IEEE ICNN 1994, Orlando, Florida, June 26-July 2, vol. I, pp. 315–320 (1994) (invited talk)

  14. Xu, L.: Temporal BYY Encoding, Markovian State Spaces, and Space Dimension Determination. IEEE Trans. Neural Networks 15(5), 1276–1295 (2004)

  15. Xu, L.: Advances on BYY harmony learning: information theoretic perspective, generalized projection geometry, and independent factor auto-determination. IEEE Trans. Neural Networks 15(4), 885–902 (2004)

  16. Xu, L.: A Unified Perspective and New Results on RHT Computing, Mixture Based Learning, and Multi-learner Based Problem Solving. To appear in a special issue of Pattern Recognition (2006)

  17. Xu, L.: Trends on Regularization and Model Selection in Statistical Learning: A Perspective from Bayesian Ying Yang Learning. In: Duch, W., Mandziuk, J., Zurada, J.M. (eds.) Challenges to Computational Intelligence. The Springers series - Studies in Computational Intelligence. Springer, Heidelberg (2006) (in press)

Copyright information

© 2006 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Shi, L., Xu, L. (2006). Local Factor Analysis with Automatic Model Selection: A Comparative Study and Digits Recognition Application. In: Kollias, S., Stafylopatis, A., Duch, W., Oja, E. (eds) Artificial Neural Networks – ICANN 2006. ICANN 2006. Lecture Notes in Computer Science, vol 4132. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11840930_27

  • DOI: https://doi.org/10.1007/11840930_27

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-38871-5

  • Online ISBN: 978-3-540-38873-9

  • eBook Packages: Computer Science, Computer Science (R0)
