
Learning and Unlearning in Hopfield-Like Neural Network Performing Boolean Factor Analysis


Part of the book series: Studies in Computational Intelligence (SCI, volume 262)

Abstract

One principle used to transform the original signal space into a space of lower dimension is factor analysis, which is based on the assumption that signals are random combinations of latent factors. The goal of factor analysis is to find the representations of the factors in signal space (factor loadings) and the contributions of the factors to the original signals (factor scores). Recently, in [10], we proposed a general method for Boolean factor analysis based on a Hopfield-like neural network. Owing to the Hebbian learning rule, the neurons belonging to a factor become more tightly connected than other neurons, and hence the factors can be revealed as attractors of the network dynamics by random search. A peculiarity of using the Hopfield-like network for Boolean factor analysis is the appearance of two global spurious attractors. They become dominant and therefore prevent a successful search for factors. To eliminate these attractors we propose a special unlearning procedure. A second unlearning procedure suppresses the factors with the largest attraction basins, which dominate after the suppression of the global spurious attractors and prevent the recall of other factors. The origin of the global spurious attractors and the efficiency of the unlearning procedures are investigated in the present paper.
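The learning/unlearning cycle described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustrative sketch, not the paper's exact algorithm: the network size, activity level `q`, unlearning rate `0.1`, the synthetic input signals, and the k-winners-take-all recall rule are all assumptions chosen for brevity. The key ideas it shows are (a) Hebbian accumulation of co-activity into the connection matrix, (b) revealing attractors by random search, and (c) unlearning a recalled attractor by subtracting its Hebbian contribution so that weaker factors can be recalled.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200   # number of neurons (illustrative)
L = 500   # number of Boolean input signals (illustrative)
q = 0.1   # mean activity / sparseness (illustrative)

# Synthetic Boolean signals, stand-ins for mixtures of latent factors.
X = (rng.random((L, N)) < q).astype(float)

# Hebbian (covariance) learning: neurons that fire together in many
# signals become tightly connected.
J = np.zeros((N, N))
for x in X:
    d = x - x.mean()
    J += np.outer(d, d)
np.fill_diagonal(J, 0.0)

def recall(J, x0, k, steps=20):
    """k-winners-take-all dynamics: keep the k most excited neurons active."""
    x = x0.copy()
    for _ in range(steps):
        h = J @ x
        x_new = np.zeros_like(x)
        x_new[np.argsort(h)[-k:]] = 1.0
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x

# Random search for attractors, with Hebbian unlearning of each one found.
k = int(q * N)
found = []
for trial in range(30):
    x0 = np.zeros(N)
    x0[rng.choice(N, size=k, replace=False)] = 1.0
    a = recall(J, x0, k)
    found.append(a)
    # Unlearning: subtract a fraction of the attractor's Hebbian imprint,
    # shrinking its attraction basin so other attractors can surface.
    d = a - a.mean()
    J -= 0.1 * np.outer(d, d)
    np.fill_diagonal(J, 0.0)
```

In the paper itself the unlearning targets are the two global spurious attractors and the dominant factors; the sketch above simply unlearns whatever attractor the random search lands on, which conveys the mechanism without reproducing the specific procedure.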


References

  1. Barlow, H.: Single units and sensation: a neuron doctrine for perceptual psychology? Perception 1, 371–394 (1972)

  2. Barlow, H.B.: Possible principles underlying the transformations of sensory messages. In: Rosenblith, W.A. (ed.) Sensory communication, pp. 217–234. MIT Press, Cambridge (1961)

  3. Barlow, H.B.: Cerebral cortex as model builder. In: Rose, D., Dodson, V.G. (eds.) Models of the visual cortex, pp. 37–46. Wiley, Chichester (1985)

  4. Belohlavek, R., Vychodil, V.: Formal concepts as optimal factors in Boolean factor analysis: implications and experiments? In: Fifth International Conference on Concept Lattices and Their Applications (2007)

  5. Buckingham, J., Willshaw, D.: On setting unit thresholds in an incompletely connected associative net. Network 4, 441–459 (1993)

  6. Crick, F., Mitchison, G.: The function of dream sleep. Nature 304(5922), 111–114 (1983)

  7. Foldiak, P.: Forming sparse representations by local anti-Hebbian learning. Biological Cybernetics 64, 165–170 (1990)

  8. Frolov, A.A., Husek, D., Muraviev, I.P.: Informational capacity and recall quality in sparsely encoded Hopfield-like neural network: Analytical approaches and computer simulation. Neural Networks 10, 845–855 (1997)

  9. Frolov, A.A., Husek, D., Muraviev, I.P.: Informational efficiency of sparsely encoded Hopfield-like autoassociative memory. Optical Memory and Neural Networks 12(3), 177–197 (2003)

  10. Frolov, A.A., Husek, D., Muraviev, I.P., Polyakov, P.Y.: Boolean factor analysis by attractor neural network. IEEE Transactions on Neural Networks 18(3), 698–707 (2007)

  11. Georgiev, P., Theis, F., Cichocki, A.: Sparse component analysis and blind source separation of underdetermined mixtures. IEEE Transactions on Neural Networks 16(4), 992–996 (2005)

  12. Goles-Chacc, E., Fogelman-Soulie, F., Pellegrin, D.: Decreasing energy functions as a tool for studying threshold networks. Discrete Applied Mathematics 12, 261–277 (1985)

  13. Jankovic, M.V.: Modulated Hebb-Oja learning rule - a method for principal subspace analysis. IEEE Transactions on Neural Networks 17(2), 345–356 (2006)

  14. Karhunen, J.: Nonlinear independent component analysis. In: Roberts, S., Everson, R. (eds.) Independent Component Analysis: Principles and Practice, pp. 113–134. Cambridge University Press, Cambridge (2001)

  15. Karhunen, J., Joutsensalo, J.: Representation and separation of signals using nonlinear PCA type learning. Neural Networks 7, 113–127 (1994)

  16. Leeuw, J.D.: Principal component analysis of binary data: application to roll-call analysis (2003), http://gifi.stat.ucla.edu

  17. Li, Y., Amari, S., Cichocki, A., Ho, D.C., Xie, S.: Underdetermined blind source separation based on sparse representation. IEEE Trans. Signal Process. 54(2), 423–437 (2006)

  18. Li, Y., Cichocki, A., Amari, S.: Blind estimation of channel parameters and source components for EEG signals: A sparse factorization approach. IEEE Transactions on Neural Networks 17(2), 419–431 (2006)

  19. Liu, W., Zheng, N.: Non-negative matrix factorization based methods for object recognition. Pattern Recognition Letters 25(8), 893–897 (2004)

  20. Moller, R., Konig, A.: Coupled principal component analysis. IEEE Transactions on Neural Networks 15(1), 214–222 (2004)

  21. Spratling, M.W.: Learning image components for object recognition. Journal of Machine Learning Research 7, 793–815 (2006)

  22. Thurstone, L.L.: Multiple factor analysis. Psychological Review 38, 406–427 (1931)

  23. Tichavsky, P., Koldovsky, Z., Oja, E.: Performance analysis of the FastICA algorithm and Cramér-Rao bounds for linear independent component analysis. IEEE Transactions on Signal Processing 54(4), 1189–1203 (2006)

  24. Watanabe, S.: Pattern recognition: human and mechanical. Wiley, New York (1985)

  25. Yi, Z., Ye, M., Lv, J.C., Tan, K.K.: Convergence analysis of a deterministic discrete time system of Oja's PCA learning algorithm. IEEE Transactions on Neural Networks 16(6), 1318–1328 (2005)

  26. Zafeiriou, S., Tefas, A., Buciu, I., Pitas, I.: Exploiting discriminant information in nonnegative matrix factorization with application to frontal face verification. IEEE Transactions on Neural Networks 17(3), 683–695 (2006)

Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Frolov, A.A., Húsek, D., Muraviev, I.P., Polyakov, P.Y. (2010). Learning and Unlearning in Hopfield-Like Neural Network Performing Boolean Factor Analysis. In: Koronacki, J., Raś, Z.W., Wierzchoń, S.T., Kacprzyk, J. (eds) Advances in Machine Learning I. Studies in Computational Intelligence, vol 262. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-05177-7_26

  • DOI: https://doi.org/10.1007/978-3-642-05177-7_26

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-05176-0

  • Online ISBN: 978-3-642-05177-7
