
Comparing Deep and Dendrite Neural Networks: A Case Study

  • Gerardo Hernández
  • Erik Zamora
  • Humberto Sossa (email author)
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10267)

Abstract

In this paper, a comparative study between two different neural network models is performed on a very simple type of 2D classification problem. The first model is a deep neural network and the second is a dendrite morphological neuron. The metrics compared are training time, classification accuracy, and number of learnable parameters. We also compare the decision boundaries generated by both models. The experiments show that dendrite morphological neurons surpass deep neural networks by a wide margin, achieving higher accuracies with fewer parameters. From this, we raise the hypothesis that deep learning networks can be improved by adding morphological neurons.
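
For readers unfamiliar with the second model, below is a minimal sketch of a dendrite morphological neuron with hyperbox dendrites in the style of Ritter and Urcid: each dendrite stores the min/max bounds of an axis-aligned box, responds via min/max operations rather than a weighted sum, and the class of the strongest-responding dendrite wins. All names (`DendriteNeuron`, `response`) and the toy data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class DendriteNeuron:
    """Illustrative dendrite morphological neuron (hyperbox dendrites).

    Each dendrite k stores box bounds (w_min, w_max) and a class label.
    Its response to input x is min_i min(x_i - w_min_i, w_max_i - x_i),
    which is >= 0 exactly when x lies inside the box.
    """

    def __init__(self):
        self.dendrites = []  # list of (w_min, w_max, class_label)

    def add_dendrite(self, w_min, w_max, label):
        self.dendrites.append((np.asarray(w_min, float),
                               np.asarray(w_max, float), label))

    def response(self, x, dendrite):
        w_min, w_max, _ = dendrite
        # Morphological (min/max) computation instead of a weighted sum.
        return np.minimum(x - w_min, w_max - x).min()

    def predict(self, x):
        x = np.asarray(x, float)
        # The class of the dendrite with the strongest response wins.
        return max(self.dendrites, key=lambda d: self.response(x, d))[2]

# Toy 2D example: one hyperbox dendrite per class.
neuron = DendriteNeuron()
neuron.add_dendrite([0.0, 0.0], [1.0, 1.0], label=0)
neuron.add_dendrite([2.0, 2.0], [3.0, 3.0], label=1)
print(neuron.predict([0.5, 0.5]))  # -> 0 (inside the first box)
print(neuron.predict([2.9, 2.1]))  # -> 1 (inside the second box)
```

In this formulation each dendrite contributes only 2·d parameters (the two box corners in d dimensions), which gives a concrete sense of the parameter-count advantage the abstract refers to.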

Keywords

Gradient descent · Training time · Deep learning · Decision boundary · Deep neural network


Acknowledgments

E. Zamora and H. Sossa would like to acknowledge the support provided by UPIITA-IPN and CIC-IPN in carrying out this research. This work was financially supported by SIP-IPN (grant numbers 20170836 and 20170693) and by CONACYT grant number 65 (Frontiers of Science). G. Hernández acknowledges CONACYT for the scholarship granted to pursue his PhD studies.


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Gerardo Hernández (1)
  • Erik Zamora (2)
  • Humberto Sossa (1) (email author)

  1. Instituto Politécnico Nacional - CIC, Mexico City, Mexico
  2. Instituto Politécnico Nacional - UPIITA, Mexico City, Mexico
