Neurocomputing in Complex Domain

  • Bipin Kumar Tripathi
Chapter
Part of the Studies in Computational Intelligence book series (SCI, volume 571)

Abstract

Many application areas involve signals that are inherently complex-valued. The characteristics of these applications can be realized effectively when they are handled by complex-valued neural networks (CVNNs). Beyond this, research has widely observed that real-valued problems can be solved far more efficiently when they are represented and processed in the complex domain. CVNNs have therefore emerged as a strong alternative in the second generation of neurocomputing. CVNNs, which preserve and process data (signals) in the complex domain itself, are gaining attention over their real-valued counterparts. The use of neural networks is naturally accompanied by trade-offs among issues such as overfitting, generalization capability, local minima, and the stability of the weight-update system. The main obstacle in the development of a CVNN and its learning algorithm is the selection of an appropriate activation function and error function (EF). A training scheme based on a suitable EF, combined with a proper choice of activation function, can substantially reduce the number of training epochs and improve generalization ability for the problem in question. This chapter presents prominent functions as a basis for making these choices and designing a learning scheme. The choice of EF and activation function in the training scheme also circumvents some existing lacunae, such as the error getting stuck and failing to decrease below a certain value. The chapter further introduces a novel approach that improves resilient propagation in the complex domain for fast learning.
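To make the activation-function choice concrete, the sketch below runs a forward pass through a small CVNN with a split-type activation (a real tanh applied separately to the real and imaginary parts of each net input), one common choice in the CVNN literature. This is a minimal illustration under that assumption, not the chapter's specific formulation; the network shape and the names `split_tanh` and `cvnn_forward` are inventions for this example.

```python
import numpy as np

def split_tanh(z):
    """Split-type activation: a real tanh applied separately to the
    real and imaginary parts of a complex net input."""
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

def cvnn_forward(x, W1, b1, W2, b2):
    """Forward pass of a single-hidden-layer complex-valued network.
    Inputs, weights, and biases are complex numpy arrays."""
    h = split_tanh(W1 @ x + b1)      # complex weighted sum, then split activation
    return split_tanh(W2 @ h + b2)

# Hypothetical usage: 2 complex inputs -> 3 hidden neurons -> 1 complex output
rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 2)) + 1j * rng.normal(size=(3, 2))
b1 = rng.normal(size=3) + 1j * rng.normal(size=3)
W2 = rng.normal(size=(1, 3)) + 1j * rng.normal(size=(1, 3))
b2 = rng.normal(size=1) + 1j * rng.normal(size=1)
x = np.array([0.5 + 0.2j, -0.3 + 0.8j])
print(cvnn_forward(x, W1, b1, W2, b2))
```

The resilient-propagation improvement mentioned in the abstract is developed in the chapter itself; the following is only a generic sketch of how a standard (simplified) Rprop step can be carried into the complex domain by adapting step sizes independently for the real and imaginary parts of each weight. The function names and hyperparameter defaults are assumptions, not the chapter's algorithm.

```python
import numpy as np

def rprop_step(w, g, g_prev, step, eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=1.0):
    """One simplified Rprop update on a real array: step sizes grow
    while the gradient sign persists and shrink when it flips; only
    the sign of the gradient drives the weight change."""
    same_sign = np.sign(g) == np.sign(g_prev)
    step = np.clip(np.where(same_sign, step * eta_plus, step * eta_minus),
                   step_min, step_max)
    return w - np.sign(g) * step, step

def complex_rprop_step(w, g, g_prev, step_re, step_im):
    """Complex-domain Rprop sketch: treat the real and imaginary parts
    of each complex weight as independent real parameters."""
    w_re, step_re = rprop_step(w.real, g.real, g_prev.real, step_re)
    w_im, step_im = rprop_step(w.imag, g.imag, g_prev.imag, step_im)
    return w_re + 1j * w_im, step_re, step_im
```

Because the update depends only on gradient signs, this componentwise scheme sidesteps the scale of the complex gradient entirely, which is the usual motivation for Rprop-style methods.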

Keywords

Activation function · Error function · Complex domain · Real domain · Absolute function

Copyright information

© Springer India 2015

Authors and Affiliations

  1. Computer Science and Engineering, Harcourt Butler Technological Institute, Kanpur, India
