
Deep Learning with Dense Random Neural Networks

Conference paper

In: Man-Machine Interactions 5 (ICMMI 2017)

Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 659)

Abstract

We exploit the dense structure of neuronal nuclei to postulate that, in such clusters, the cells communicate via soma-to-soma interactions as well as through synapses. Using the mathematical structure of the spiking Random Neural Network, we construct a multi-layer architecture for deep learning and propose an efficient training procedure for it. The architecture is then specialized to multi-channel datasets and applied to images and sensor-based data.
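The spiking Random Neural Network that the abstract builds on has a closed-form steady state: each neuron's excitation probability satisfies q_i = λ_i⁺ / (r_i + λ_i⁻), where λ_i⁺ and λ_i⁻ collect the excitatory and inhibitory spike rates arriving at neuron i, and r_i is its firing rate. The sketch below is a minimal fixed-point computation of this steady state for the classical single-layer RNN only, not the paper's dense-cluster architecture; all network parameters in it are illustrative assumptions.

```python
import numpy as np

def rnn_steady_state(W_plus, W_minus, Lambda, lam, r, iters=200):
    """Fixed-point iteration for the classical RNN steady state:
    q_i = lambda_i^+ / (r_i + lambda_i^-), clipped to [0, 1].
    W_plus[j, i] / W_minus[j, i] : excitatory / inhibitory weight from j to i
    Lambda / lam                 : external excitatory / inhibitory arrival rates
    r                            : neuron firing rates
    """
    q = np.zeros(len(r))
    for _ in range(iters):
        lam_plus = Lambda + q @ W_plus    # total excitatory spike rate into each neuron
        lam_minus = lam + q @ W_minus     # total inhibitory spike rate into each neuron
        q = np.minimum(lam_plus / (r + lam_minus), 1.0)
    return q

# Illustrative 4-neuron network with hypothetical random parameters.
rng = np.random.default_rng(0)
n = 4
W_plus = rng.uniform(0.0, 0.3, (n, n))
W_minus = rng.uniform(0.0, 0.3, (n, n))
Lambda = rng.uniform(0.1, 0.5, n)
lam = rng.uniform(0.0, 0.2, n)
r = W_plus.sum(axis=1) + W_minus.sum(axis=1)  # r_i = total outgoing weight of neuron i
q = rnn_steady_state(W_plus, W_minus, Lambda, lam, r)
print(q)  # excitation probabilities, each in [0, 1]
```

Because the steady state is a rational function of the inputs rather than a threshold nonlinearity, gradient-based training of these probabilities is tractable, which is what the multi-layer construction in the paper exploits.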



Author information

Corresponding author: Erol Gelenbe


Copyright information

© 2018 Springer International Publishing AG

About this paper

Cite this paper

Gelenbe, E., Yin, Y. (2018). Deep Learning with Dense Random Neural Networks. In: Gruca, A., Czachórski, T., Harezlak, K., Kozielski, S., Piotrowska, A. (eds) Man-Machine Interactions 5. ICMMI 2017. Advances in Intelligent Systems and Computing, vol 659. Springer, Cham. https://doi.org/10.1007/978-3-319-67792-7_1


  • DOI: https://doi.org/10.1007/978-3-319-67792-7_1


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-67791-0

  • Online ISBN: 978-3-319-67792-7

  • eBook Packages: Engineering (R0)
