Dual Purpose for Principal and Minor Component Analysis

Chapter in Principal Component Analysis Networks and Algorithms

Abstract

The principal subspace (PS) is the subspace spanned by all eigenvectors associated with the principal (largest) eigenvalues of the autocorrelation matrix of a high-dimensional vector sequence, and the subspace spanned by all eigenvectors associated with the minor (smallest) eigenvalues is called the minor subspace (MS).
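
For concreteness, the following is a minimal batch sketch (not an algorithm from this chapter; the data, dimensions, and subspace rank r are illustrative) showing how the PS and MS can be read off an eigendecomposition of a sample autocorrelation matrix. The adaptive algorithms discussed in this book instead track such subspaces online, without forming the matrix explicitly.

    import numpy as np

    # Illustrative sketch: batch estimation of the principal subspace (PS)
    # and minor subspace (MS) from a sample autocorrelation matrix.
    rng = np.random.default_rng(0)
    n, T, r = 8, 1000, 3                  # vector dimension, samples, subspace rank
    X = rng.standard_normal((n, T))       # placeholder data stream x(1), ..., x(T)

    R = (X @ X.T) / T                     # sample autocorrelation matrix, R ~ E[x x^T]
    eigvals, eigvecs = np.linalg.eigh(R)  # eigenvalues returned in ascending order

    MS = eigvecs[:, :r]                   # eigenvectors of the r smallest eigenvalues
    PS = eigvecs[:, -r:]                  # eigenvectors of the r largest eigenvalues

    # The two subspaces are mutually orthogonal in R^n
    assert np.allclose(PS.T @ MS, 0.0, atol=1e-10)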

Author information

Correspondence to Xiangyu Kong.

Copyright information

© 2017 Science Press, Beijing and Springer Nature Singapore Pte Ltd.

About this chapter

Cite this chapter

Kong, X., Hu, C., Duan, Z. (2017). Dual Purpose for Principal and Minor Component Analysis. In: Principal Component Analysis Networks and Algorithms. Springer, Singapore. https://doi.org/10.1007/978-981-10-2915-8_5

  • DOI: https://doi.org/10.1007/978-981-10-2915-8_5

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-10-2913-4

  • Online ISBN: 978-981-10-2915-8

  • eBook Packages: Engineering (R0)
