
Journal of Global Optimization, Volume 46, Issue 2, pp 307–315

Convexification for data fitting

  • James Ting-Ho Lo

Abstract

The main results of this paper are two theorems concerning a new type of risk-averting error criterion for data fitting. The first states that the convexity region of the risk-averting error criterion expands monotonically as its risk-sensitivity index increases. Since the risk-averting error criterion converges to the mean squared error criterion as the risk-sensitivity index goes to zero, it can be used to convexify the mean squared error criterion and thereby avoid local minima. The second theorem shows that as the risk-sensitivity index increases to infinity, the risk-averting error criterion approaches the minimax error criterion, which is widely used for robustifying system controllers and filters.
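The abstract states these two limits without formulas. One standard realization of a risk-averting criterion with risk-sensitivity index λ is the normalized log-sum-exp of the squared residuals, which exhibits exactly the limiting behaviors described: it tends to the mean squared error as λ → 0⁺ and to the largest squared residual (the minimax value) as λ → ∞. A minimal sketch under that assumption — the function name and the specific log-sum-exp form are illustrative, not taken from the paper:

```python
import math

def risk_averting_error(residuals, lam):
    """Normalized log-sum-exp error criterion with risk-sensitivity index lam:

        J(lam) = (1/lam) * ln( (1/N) * sum_i exp(lam * e_i^2) )

    As lam -> 0+ this approaches the mean squared error;
    as lam -> +inf it approaches max_i e_i^2 (the minimax criterion).
    """
    n = len(residuals)
    sq = [e * e for e in residuals]
    m = max(sq)  # subtract the max before exponentiating, for numerical stability
    s = sum(math.exp(lam * (q - m)) for q in sq)
    return m + math.log(s / n) / lam

residuals = [0.1, -0.5, 2.0, 0.3]
mse = sum(e * e for e in residuals) / len(residuals)  # 1.0875
print(risk_averting_error(residuals, 1e-6))   # ≈ mse ≈ 1.0875
print(risk_averting_error(residuals, 100.0))  # ≈ max squared residual = 4.0
```

Evaluating the criterion for a sweep of λ values shows the interpolation the abstract describes: small λ recovers least-squares behavior, while large λ is dominated by the worst-fitted data point, which is the mechanism behind both the convexification and the robustness results.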

Keywords

Convexification · Global optimization · Local minima · Data fitting · Neural network · Nonlinear regression · Minimax · Robustifying error criterion · Degree of robustness

Mathematics Subject Classification (2000)

90C30 · 90C31 · 62M45 · 62G08



Copyright information

© Springer Science+Business Media, LLC. 2009

Authors and Affiliations

  1. Department of Mathematics and Statistics, University of Maryland Baltimore County, Baltimore, USA
