
Pseudo-Solution of Weight Equations in Neural Networks: Application for Statistical Parameters Estimation

  • Vincent J. M. Kiki
  • Villévo Adanhounme
  • Mahouton Norbert Hounkonnou
Chapter
Part of the STEAM-H: Science, Technology, Engineering, Agriculture, Mathematics & Health book series (STEAM)

Abstract

An algebraic approach to representing multidimensional nonlinear functions by feedforward neural networks is implemented for the approximation of smooth batch data containing the inputs and outputs of the hidden neurons and the final output of the network. The training set is related to the adjustable parameters of the network by weight equations that may be compatible or incompatible. We obtain the exact input weights from the nonlinear equations and the approximate output weights from the linear equations using the conjugate gradient method with an adaptive learning rate. Modeling the traders of five regions of the Republic of Benin who smuggle fuel from the Federal Republic of Nigeria as a multi-agent system, the trained neural network predicts the average rate of traders who regard this activity as dangerous and the average rate of those likely to give it up, respectively. This information enables the planner or decision-maker to compare alternative actions and to select the best one for ensuring the retraining of these traders.
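The training scheme described above can be illustrated with a minimal sketch: a one-hidden-layer network whose input weights are held fixed, with the output weights fitted to the linear weight equations by nonlinear conjugate gradients. This is not the chapter's exact algorithm; the Polak-Ribière update and a backtracking step-size search stand in for the paper's adaptive learning rate, and all names, sizes, and the toy target function are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def hidden_outputs(X, W, b):
    """Sigmoid hidden-layer activations H (batch x hidden units)."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def fit_output_weights_cg(H, y, iters=200):
    """Minimize ||H w - y||^2 over the output weights w by
    Polak-Ribiere conjugate gradients; the learning rate is
    adapted at each step by backtracking until the loss drops."""
    w = np.zeros(H.shape[1])
    g = H.T @ (H @ w - y)          # gradient of the squared error
    d = -g                         # initial search direction
    for _ in range(iters):
        lr = 1.0
        loss = np.sum((H @ w - y) ** 2)
        # adaptive rate: halve until the step actually decreases the loss
        while np.sum((H @ (w + lr * d) - y) ** 2) > loss and lr > 1e-12:
            lr *= 0.5
        w = w + lr * d
        g_new = H.T @ (H @ w - y)
        # Polak-Ribiere coefficient, clipped at zero (automatic restart)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g + 1e-30))
        d = -g_new + beta * d
        g = g_new
        if np.linalg.norm(g_new) < 1e-10:
            break
    return w

# Toy batch: approximate a smooth one-dimensional function.
X = np.linspace(-1.0, 1.0, 50).reshape(-1, 1)
y = np.sin(2.0 * X).ravel()
W = rng.normal(size=(1, 20))       # fixed (illustrative) input weights
b = rng.normal(size=20)
H = hidden_outputs(X, W, b)
w = fit_output_weights_cg(H, y)
```

Because the output-weight problem is linear in `w`, the loss surface is quadratic and conjugate gradients converge quickly; the backtracking loop plays the role of the adaptive rate, shrinking the step whenever a full step would overshoot.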

Keywords

Function approximation · Conjugate gradient method · Adaptive training

Acknowledgements

This work is partially supported by the ICTP through the OEA-ICMPA-Prj-15. The ICMPA is in partnership with the Daniel Iagolnitzer Foundation (DIF), France. The authors would like to thank the Konida National Foundation for Scientific Researches (KNFSR) for its financial support.


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Vincent J. M. Kiki (1)
  • Villévo Adanhounme (2)
  • Mahouton Norbert Hounkonnou (2)
  1. Ecole Nationale d’Economie Appliquée et de Management, Université d’Abomey-Calavi, Cotonou, Benin
  2. International Chair in Mathematical Physics and Applications (ICMPA), University of Abomey-Calavi, Cotonou, Benin
