Higher-Order Computational Model for Novel Neurons

  • Bipin Kumar Tripathi
Part of the Studies in Computational Intelligence book series (SCI, volume 571)


Artificial neural networks (ANNs) have attracted a tremendous amount of interest as a means of solving many complicated engineering and real-life problems. Low complexity, fast convergence, and robust performance are vital for their wide applicability. These features depend on the architecture of the basic working unit, the neuron model, used in the network. The computational capability of a neuron governs the architectural complexity of its neural network, which in turn determines the number of nodes and connections. It is therefore imperative to look for neuron models that yield ANNs of small complexity in terms of network topology and number of learning parameters (connection weights), while at the same time offering fast learning and superior functional capabilities. A conventional artificial neuron computes its internal state as the sum of contributions (aggregation) from the impinging signals. For a neuron to respond strongly to correlations among its inputs, higher-order relations among sets of inputs must be included in the aggregation. A wide survey of artificial neuron design brings out the fact that a higher-order neuron may generate an ANN with better classification and functional mapping capabilities using comparatively fewer neurons. Recent research has also demonstrated the adequate functionality of ANNs in the complex domain. This chapter presents higher-order computational models for novel neurons with well-defined learning procedures. Their implementation in the complex domain provides a powerful scheme for learning input/output mappings in the complex as well as the real domain, with better accuracy across a wide spectrum of applications; the real-domain implementation may be realized as a special case. The purpose of this chapter is to present the suitability and sustainability of higher-order neurons for readers, as a basis for the formulation of powerful ANNs.
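The contrast between conventional (summation) aggregation and higher-order aggregation can be illustrated with a minimal sketch. The function names and the choice of a second-order (quadratic) form below are illustrative assumptions, not the chapter's specific models: a conventional neuron forms only a weighted sum, while a second-order neuron also includes products of input pairs, so its internal state can respond directly to correlations among inputs.

```python
import numpy as np

def linear_aggregation(x, w, b):
    # Conventional neuron: internal state is the weighted
    # sum of the impinging signals plus a bias.
    return np.dot(w, x) + b

def second_order_aggregation(x, W, w, b):
    # Illustrative higher-order (quadratic) neuron: adds the
    # pairwise-product terms sum_ij W[i,j] * x[i] * x[j] to the
    # linear part, letting the unit respond to correlations
    # among inputs that a pure summation unit cannot capture.
    return x @ W @ x + np.dot(w, x) + b

# Example: two inputs whose product matters.
x = np.array([1.0, 2.0])
w = np.array([0.5, -1.0])
W = np.array([[0.0, 1.0],   # W[0, 1] weights the x0*x1 term
              [0.0, 0.0]])
b = 0.1

print(linear_aggregation(x, w, b))          # -1.4
print(second_order_aggregation(x, W, w, b)) # 0.6
```

The price of this extra expressive power is a quadratic growth in the number of learning parameters, which motivates the search for higher-order neuron models that remain compact.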


Keywords: Artificial Neural Network · Hidden Layer · Neuron Model · Synaptic Input · Aggregation Function
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.



Copyright information

© Springer India 2015

Authors and Affiliations

  1. Computer Science and Engineering, Harcourt Butler Technological Institute, Kanpur, India
