
Quasi-Newton Learning Methods for Quaternion-Valued Neural Networks

Conference paper in Advances in Computational Intelligence (IWANN 2017)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 10305)

Abstract

This paper derives quasi-Newton learning methods for training quaternion-valued feedforward neural networks within the framework of the HR calculus. Because these algorithms have yielded better training results than gradient descent in the real- and complex-valued cases, extending them to the quaternion-valued case is a natural way to enhance the performance of quaternion-valued neural networks. Experiments on four time-series prediction applications show a significant improvement over the quaternion gradient descent algorithm.
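
The quasi-Newton family referred to in the abstract builds on BFGS-type updates of an approximate inverse Hessian. As a rough illustration only, and not the paper's HR-calculus derivation, the sketch below shows a generic BFGS step applied to quaternion network weights flattened into a real vector (each quaternion contributing its four real components); the function names, the fixed step size in place of a line search, and the toy quadratic loss are assumptions made for this example.

```python
import numpy as np

def bfgs_step(w, grad_fn, H, lr=1.0):
    """One generic BFGS quasi-Newton step on a flattened real weight vector.

    Assumption: the quaternion gradient (e.g. from the HR calculus) has
    already been flattened into the real vector returned by grad_fn.
    """
    g = grad_fn(w)
    p = -H @ g                      # quasi-Newton search direction
    w_new = w + lr * p              # weight update (a line search would normally set lr)
    s = w_new - w                   # step taken
    y = grad_fn(w_new) - g          # change in the gradient
    sy = s @ y
    if sy > 1e-12:                  # curvature condition; skip the update otherwise
        rho = 1.0 / sy
        I = np.eye(len(w))
        V = I - rho * np.outer(s, y)
        H = V @ H @ V.T + rho * np.outer(s, s)   # BFGS inverse-Hessian update
    return w_new, H

# Toy usage: minimize a quadratic as a stand-in for the network training loss.
A = np.diag([1.0, 4.0, 9.0, 16.0])   # four components, as one quaternion weight would have
grad = lambda w: A @ w
w, H = np.ones(4), np.eye(4)
for _ in range(20):
    w, H = bfgs_step(w, grad, H, lr=0.1)
```

The point of the sketch is the shape of the method: the search direction is the gradient premultiplied by an evolving inverse-Hessian estimate, which is what distinguishes quasi-Newton training from plain quaternion gradient descent.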



Author information

Corresponding author: Călin-Adrian Popa

Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Popa, C.A. (2017). Quasi-Newton Learning Methods for Quaternion-Valued Neural Networks. In: Rojas, I., Joya, G., Catala, A. (eds.) Advances in Computational Intelligence. IWANN 2017. Lecture Notes in Computer Science, vol. 10305. Springer, Cham. https://doi.org/10.1007/978-3-319-59153-7_32

  • DOI: https://doi.org/10.1007/978-3-319-59153-7_32

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-59152-0

  • Online ISBN: 978-3-319-59153-7
