Automatic Closed Modeling of Multiple Variable Systems Using Soft Computation

  • Angel Kuri-Morales
  • Alejandro Cartas-Ayala
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10632)

Abstract

One of the most interesting goals in engineering and the sciences is the mathematical representation of physical, social and other kinds of complex phenomena. This goal has been pursued and, lately, achieved with different machine learning (ML) tools. ML owes much of its present appeal to the fact that it makes it possible to model complex phenomena without explicitly defining the form of the model. Neural networks and support vector machines exemplify such methods. In most cases, however, these methods yield “black box” models: the input and output correspond to the phenomenon under scrutiny, but it is very difficult (or outright impossible) to discern the interrelation of the input variables involved. In this paper we address this problem with the explicit aim of obtaining models that are closed in nature, i.e. in which the aforementioned relation between the variables is explicit. To achieve this, in general, the only assumption regarding the data is that they be approximately continuous. In such cases it is possible to represent the system with polynomial expressions. To do so, one must define the number of monomials, the degree of every variable in every monomial, and the associated coefficients. We model sparse data systems with an algorithm that minimizes the min-max norm. From mathematical and experimental evidence we are able to set a bound on the number of terms and the degrees of the approximating polynomials. A genetic algorithm (GA) then identifies the coefficients corresponding to the terms and degrees so defined. A sketch of this last step follows.
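
To make the last step concrete, the following is a minimal, hypothetical sketch (Python with NumPy; not the paper's actual algorithm, operators or parameter settings): for a fixed, assumed set of monomial exponents, a simple real-coded GA with truncation selection, averaging crossover and Gaussian mutation searches for the coefficients that minimize the min-max (L-infinity) error over the data. All names, the toy data and the GA parameters are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def design_matrix(X, exponents):
        # One column per monomial: prod_j x_j ** e_j for each exponent tuple.
        return np.column_stack([np.prod(X ** np.asarray(e), axis=1) for e in exponents])

    def minimax_error(coeffs, A, y):
        # L-infinity norm of the residuals: the quantity the GA minimizes.
        return np.max(np.abs(A @ coeffs - y))

    def ga_fit(A, y, pop=60, gens=300, sigma=0.3):
        # Real-coded GA over coefficient vectors (assumed operators, not the paper's).
        n = A.shape[1]
        P = rng.normal(0.0, 1.0, size=(pop, n))          # initial population
        for _ in range(gens):
            fit = np.array([minimax_error(c, A, y) for c in P])
            elite = P[np.argsort(fit)[: pop // 2]]       # truncation selection
            # Crossover: average random parent pairs, then Gaussian mutation.
            pa = elite[rng.integers(0, len(elite), pop // 2)]
            pb = elite[rng.integers(0, len(elite), pop // 2)]
            children = 0.5 * (pa + pb) + rng.normal(0.0, sigma, (pop // 2, n))
            P = np.vstack([elite, children])
        fit = np.array([minimax_error(c, A, y) for c in P])
        return P[np.argmin(fit)]

    # Toy usage: recover y = 1 + 2*x0*x1 - x1**2 from noisy samples,
    # given an assumed (already chosen) set of monomial exponents.
    X = rng.uniform(-1, 1, size=(200, 2))
    y = 1 + 2 * X[:, 0] * X[:, 1] - X[:, 1] ** 2 + rng.normal(0, 0.01, 200)
    exponents = [(0, 0), (1, 1), (0, 2)]
    A = design_matrix(X, exponents)
    best = ga_fit(A, y)
    print("coefficients:", np.round(best, 2), " minimax error:", minimax_error(best, A, y))

In the paper's setting, the exponent set is not guessed: the bound on the number of terms and on the degrees is derived from the min-max analysis, after which the GA search above would operate on the resulting monomial basis.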

Keywords

Mathematical modeling · Machine learning · Multivariate regression · Genetic algorithms

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. Instituto Tecnológico Autónomo de México, México, D.F., Mexico
