Retracted: Robust Training of Feedforward Neural Networks Using Combined Online/Batch Quasi-Newton Techniques

  • Hiroshi Ninomiya
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7553)

Abstract

This paper describes a robust training algorithm based on the quasi-Newton method in which online and batch error functions are combined through a weighting coefficient. The coefficient is adjusted so that the algorithm gradually shifts from online to batch training. Furthermore, an analogy between this algorithm and the Langevin algorithm is considered; the Langevin algorithm is a gradient-based continuous optimization method that incorporates the concept of simulated annealing. Neural network training experiments are presented to demonstrate the validity of the combined algorithm, which achieves more robust training and more accurate generalization than other quasi-Newton-based training algorithms.
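The following is a minimal Python sketch, not taken from the paper, of the idea the abstract describes: a combined error E(w) = lambda * E_batch(w) + (1 - lambda) * E_online(w), where the weighting coefficient is annealed from 0 toward 1 so training gradually shifts from online to batch behaviour, and the weights are updated with a plain BFGS quasi-Newton step. All names, the toy data, and the fixed step size are illustrative assumptions.

# Sketch only: combined online/batch error with an annealed weighting
# coefficient and a standard BFGS inverse-Hessian update.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data and a single-hidden-layer feedforward network.
X = rng.uniform(-1.0, 1.0, size=(64, 1))
y = np.sin(np.pi * X)

n_hidden = 5
n_params = 1 * n_hidden + n_hidden + n_hidden + 1  # W1, b1, W2, b2

def unpack(w):
    i = 0
    W1 = w[i:i + n_hidden].reshape(1, n_hidden); i += n_hidden
    b1 = w[i:i + n_hidden]; i += n_hidden
    W2 = w[i:i + n_hidden].reshape(n_hidden, 1); i += n_hidden
    b2 = w[i:]
    return W1, b1, W2, b2

def mse(w, Xb, yb):
    W1, b1, W2, b2 = unpack(w)
    out = np.tanh(Xb @ W1 + b1) @ W2 + b2
    return 0.5 * np.mean((out - yb) ** 2)

def grad(w, Xb, yb, eps=1e-6):
    # Central-difference gradient keeps the sketch short; a real
    # implementation would use backpropagation.
    g = np.zeros_like(w)
    for k in range(w.size):
        e = np.zeros_like(w); e[k] = eps
        g[k] = (mse(w + e, Xb, yb) - mse(w - e, Xb, yb)) / (2 * eps)
    return g

w = rng.normal(scale=0.5, size=n_params)
H = np.eye(n_params)        # inverse Hessian approximation
T = 200                     # training iterations

for t in range(T):
    lam = t / (T - 1)       # weighting coefficient: 0 = online, 1 = batch
    idx = rng.integers(0, X.shape[0], size=8)   # small "online" sample

    def combined_grad(wv):
        return lam * grad(wv, X, y) + (1.0 - lam) * grad(wv, X[idx], y[idx])

    g = combined_grad(w)
    p = -H @ g              # quasi-Newton search direction
    alpha = 0.1             # fixed step; in practice a line search is used
    s = alpha * p
    w_new = w + s
    y_diff = combined_grad(w_new) - g
    sy = s @ y_diff
    if sy > 1e-10:          # BFGS inverse update when curvature is positive
        rho = 1.0 / sy
        I = np.eye(n_params)
        H = (I - rho * np.outer(s, y_diff)) @ H @ (I - rho * np.outer(y_diff, s)) \
            + rho * np.outer(s, s)
    w = w_new

print("final batch MSE:", mse(w, X, y))

Early in training the gradient is dominated by the small random sample, which injects noise much like the Langevin analogy suggests; by the end the full-batch gradient dominates and the quasi-Newton step behaves like an ordinary batch method.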

Keywords

feedforward neural network, quasi-Newton method, online training algorithm, batch training algorithm, Langevin algorithm


Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Hiroshi Ninomiya
  1. Department of Information Science, Shonan Institute of Technology, Fujisawa, Japan