Abstract
This paper describes a novel acceleration technique for the quasi-Newton (QN) method that uses momentum terms for neural network training. Recently, Nesterov's accelerated quasi-Newton method (NAQ) has shown that the momentum term is effective in reducing the number of iterations and in accelerating convergence. However, NAQ requires the gradient to be computed twice per iteration, which increases the computation time of a training loop compared with the conventional QN method. In this research, NAQ is improved by approximating the Nesterov accelerated gradient used in NAQ as a linear combination of the current and previous gradients, so that the gradient is computed only once per iteration, as in QN. The performance of the proposed algorithm is evaluated through computer simulations on a function-modeling benchmark problem and on real-world microwave circuit modeling problems. The results show significant acceleration in computation time compared with conventional training algorithms.
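To make the idea concrete, the following minimal NumPy sketch (our illustration, not the authors' implementation; the toy quadratic objective, the fixed step size, and the momentum value are assumptions) shows a BFGS-style loop in which the Nesterov accelerated gradient ∇E(w_k + μv_k) is replaced by a linear combination of the current and previous gradients, here (1 + μ)∇E(w_k) − μ∇E(w_{k−1}), so each iteration evaluates the gradient only once, as in conventional QN.

```python
import numpy as np

# Toy quadratic objective E(w) = 0.5 w^T A w - b^T w (gradient: A w - b),
# standing in for a neural-network loss so the sketch stays self-contained.
A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -2.0])

def grad(w):
    return A @ w - b

mu, alpha = 0.8, 0.2   # momentum coefficient and fixed step size (assumptions;
                       # the paper's method would set the step by line search)
w = np.zeros(2)        # weights w_k
v = np.zeros(2)        # update vector v_k = w_k - w_{k-1}
H = np.eye(2)          # inverse Hessian approximation
g = grad(w)            # gradient at w_0
g_prev = g.copy()      # gradient at the "previous" iterate (bootstrapped)

for k in range(100):
    # Approximate the Nesterov gradient grad(w + mu*v) by a linear
    # combination of the current and previous gradients, so no second
    # gradient evaluation is needed within this iteration.
    g_nag = (1.0 + mu) * g - mu * g_prev
    d = -H @ g_nag                 # quasi-Newton search direction
    v = mu * v + alpha * d         # momentum-augmented update vector
    w = w + v
    g_prev, g = g, grad(w)         # the single gradient evaluation per iteration
    # Standard BFGS update of H using the accelerated-gradient difference.
    s, y = v, g - g_nag
    sy = s @ y
    if sy > 1e-12:                 # curvature guard keeps H positive definite
        rho = 1.0 / sy
        I = np.eye(2)
        H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
            + rho * np.outer(s, s)

print("solution:", w, " true optimum:", np.linalg.solve(A, b))
```

On this toy problem the iterates approach the closed-form optimum A⁻¹b; the point of the sketch is the loop structure, where replacing the extra gradient evaluation of NAQ with the two-gradient linear combination restores the one-gradient-per-iteration cost of conventional QN.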
Cite this paper
Mahboubi, S., Indrapriyadarsini, S., Ninomiya, H., Asai, H. (2019). Momentum Acceleration of Quasi-Newton Training for Neural Networks. In: Nayak, A., Sharma, A. (eds) PRICAI 2019: Trends in Artificial Intelligence. Lecture Notes in Computer Science, vol. 11671. Springer, Cham. https://doi.org/10.1007/978-3-030-29911-8_21