Higher-Order Computational Model for Novel Neurons

  • Chapter
High Dimensional Neurocomputing

Part of the book series: Studies in Computational Intelligence (SCI, volume 571)

Abstract

Artificial neural networks (ANNs) have attracted tremendous interest for the solution of many complicated engineering and real-life problems. Low complexity, quick convergence, and robust performance are vital for their widespread application. These features depend on the architecture of the basic working unit, the neuron model, used in the network. The computational capability of a neuron governs the architectural complexity of its neural network, which in turn determines the number of nodes and connections. It is therefore imperative to seek neuron models that yield ANNs of low complexity in terms of network topology and number of learning parameters (connection weights), while at the same time offering fast learning and superior functional capabilities. A conventional artificial neuron computes its internal state as the sum of contributions (aggregation) from the impinging signals. For a neuron to respond strongly to correlations among its inputs, the aggregation must include higher-order relations among those inputs. A wide survey of artificial neuron design brings out the fact that a higher-order neuron may generate an ANN with better classification and functional mapping capabilities using comparatively fewer neurons. Recent research has also demonstrated adequate functionality of ANNs in the complex domain. This chapter presents higher-order computational models for novel neurons with well-defined learning procedures. Their implementation in the complex domain provides a powerful scheme for learning input/output mappings in the complex as well as the real domain, with better accuracy over a wide spectrum of applications; the real-domain implementation may be realized as a special case. The purpose of this chapter is to present the suitability and sustainability of higher-order neurons, which can serve as a basis for the formulation of powerful ANNs.
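To make the contrast concrete, the sketch below (not from the chapter; all names and values are illustrative) compares a conventional first-order neuron, y = f(Σᵢ wᵢxᵢ + b), with a second-order (quadratic) neuron whose aggregation adds the pairwise products Σᵢⱼ wᵢⱼxᵢxⱼ, so the internal state responds directly to correlations among inputs. NumPy's complex dtype lets the same code run in the complex domain; restricting inputs and weights to real values recovers the real-domain case as a special case, as the abstract notes. The fully complex tanh activation here is one illustrative choice among several used in complex-valued networks.

```python
# Minimal sketch: first-order vs. second-order aggregation (illustrative only).
import numpy as np

def conventional_neuron(x, w, b):
    # First-order aggregation: weighted sum of inputs, then activation.
    return np.tanh(np.dot(w, x) + b)

def quadratic_neuron(x, w, W, b):
    # Second-order aggregation: x @ W @ x = sum_{i,j} W[i,j] * x[i] * x[j]
    # adds all pairwise input products to the internal state.
    return np.tanh(np.dot(w, x) + x @ W @ x + b)

rng = np.random.default_rng(0)
n = 3
# Complex-valued inputs and weights (illustrative values);
# real-valued arrays recover the real-domain special case.
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
w = rng.standard_normal(n) + 1j * rng.standard_normal(n)
W = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
b = 0.1 + 0.1j

print(conventional_neuron(x, w, b))    # responds only to weighted sums
print(quadratic_neuron(x, w, W, b))    # also responds to x_i * x_j terms
```

The quadratic neuron carries O(n²) weights per unit, which illustrates the trade-off the chapter addresses: richer aggregation per neuron against the number of learning parameters.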


Notes

  1. In this book, a neuron with only a summation aggregation function is referred to as a ‘conventional’ neuron, and a standard feedforward network of such neurons as an ‘MLP’ (multilayer perceptron).


Author information


Correspondence to Bipin Kumar Tripathi.


Copyright information

© 2015 Springer India

About this chapter

Cite this chapter

Tripathi, B.K. (2015). Higher-Order Computational Model for Novel Neurons. In: High Dimensional Neurocomputing. Studies in Computational Intelligence, vol 571. Springer, New Delhi. https://doi.org/10.1007/978-81-322-2074-9_4

  • DOI: https://doi.org/10.1007/978-81-322-2074-9_4

  • Publisher Name: Springer, New Delhi

  • Print ISBN: 978-81-322-2073-2

  • Online ISBN: 978-81-322-2074-9

  • eBook Packages: Engineering, Engineering (R0)
