
A Comparative Study on Data Smoothing Regularization for Local Factor Analysis

  • Conference paper
Artificial Neural Networks - ICANN 2008 (ICANN 2008)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 5163)


Abstract

Selecting the number of clusters and the numbers of hidden factors for a Local Factor Analysis (LFA) model is a typical model selection problem, which becomes difficult when the sample size is finite or small. Data smoothing is one of three regularization techniques integrated into the Bayesian Ying-Yang (BYY) harmony learning framework to improve parameter learning and model selection. In this paper, we comparatively investigate the performance of five existing formulas for determining the hyper-parameter, namely the smoothing parameter, that controls the strength of data smoothing regularization. BYY learning algorithms for LFA using these formulas are evaluated by model selection accuracy on simulated data and by classification accuracy on real-world data. Two observations are obtained. First, learning with data smoothing works better than learning without it, especially when the sample size is small. Second, the gradient method derived by imposing a sample-set-based improper prior on the smoothing parameter generally outperforms the other methods, such as those based on a Gamma or Chi-square prior and the one under the equal covariance principle.
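As background for the abstract, data smoothing regularization can be understood as learning from a Gaussian-kernel (Parzen window) smoothed version of the empirical density rather than from the raw samples, which is equivalent in effect to training on noise-perturbed copies of the data. The Python sketch below is purely illustrative and is not the authors' algorithm: it fixes the smoothing parameter h by hand (the paper's whole question is how to choose h automatically), and the helper smooth_samples and its replication scheme are assumptions made for this example.

    import numpy as np

    def smooth_samples(X, h, n_replicates=5, seed=None):
        """Replicate each sample with additive Gaussian noise of std h.

        Learning from these perturbed copies approximates learning from
        the Parzen-smoothed density with smoothing parameter h.
        """
        rng = np.random.default_rng(seed)
        reps = [X + h * rng.standard_normal(X.shape) for _ in range(n_replicates)]
        return np.vstack(reps)

    # Toy illustration: estimate a covariance (the simplest ingredient of
    # an LFA component) from a small sample, with and without smoothing.
    # The smoothed estimate is inflated by roughly h**2 * I, which
    # counteracts the underestimation maximum likelihood suffers at small n.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((10, 3))   # small sample: n = 10, d = 3
    h = 0.3                            # smoothing strength, fixed by hand here
    Xs = smooth_samples(X, h, seed=1)
    print("raw cov trace:     ", np.trace(np.cov(X, rowvar=False)))
    print("smoothed cov trace:", np.trace(np.cov(Xs, rowvar=False)))

In the paper's setting, h is not hand-tuned as above but determined automatically by one of the five compared formulas, for example a gradient update derived from a sample-set-based improper prior.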




Author information

Authors: Tu, S., Shi, L., Xu, L.

Editor information

Véra Kůrková, Roman Neruda, Jan Koutník


Copyright information

© 2008 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Tu, S., Shi, L., Xu, L. (2008). A Comparative Study on Data Smoothing Regularization for Local Factor Analysis. In: Kůrková, V., Neruda, R., Koutník, J. (eds) Artificial Neural Networks - ICANN 2008. ICANN 2008. Lecture Notes in Computer Science, vol 5163. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-87536-9_28

Download citation

  • DOI: https://doi.org/10.1007/978-3-540-87536-9_28

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-87535-2

  • Online ISBN: 978-3-540-87536-9

  • eBook Packages: Computer Science, Computer Science (R0)
