Abstract
We exploit the dense structure of neuronal nuclei to postulate that, within such clusters, cells communicate through soma-to-soma interactions as well as through synapses. Using the mathematical structure of the spiking Random Neural Network, we construct a multi-layer architecture for deep learning and propose an efficient training procedure for it. The architecture is then specialized to multi-channel datasets and applied to images and sensor-based data.
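The spiking Random Neural Network underlying this work has a well-known product-form steady state in which each neuron's firing probability satisfies q_i = λ⁺_i / (r_i + λ⁻_i), where λ⁺_i and λ⁻_i are the total excitatory and inhibitory spike arrival rates at neuron i. As a minimal illustration of that mathematical structure (not of the dense soma-to-soma extension developed in the paper), the fixed point can be computed by simple iteration; the function and parameter names below are illustrative:

```python
import numpy as np

def rnn_steady_state(W_plus, W_minus, Lambda, lam, r, tol=1e-9, max_iter=1000):
    """Fixed-point iteration for the classical RNN firing probabilities.

    W_plus[j, i]  : excitatory spike rate from neuron j to neuron i
    W_minus[j, i] : inhibitory spike rate from neuron j to neuron i
    Lambda, lam   : external excitatory / inhibitory arrival rates
    r             : neuron firing rates
    Returns q with q_i = lambda_plus_i / (r_i + lambda_minus_i), clipped to [0, 1].
    """
    q = np.zeros(len(r))
    for _ in range(max_iter):
        lam_plus = Lambda + q @ W_plus    # total excitatory arrivals at each neuron
        lam_minus = lam + q @ W_minus     # total inhibitory arrivals at each neuron
        q_new = np.minimum(lam_plus / (r + lam_minus), 1.0)
        if np.max(np.abs(q_new - q)) < tol:
            return q_new
        q = q_new
    return q
```

For a two-neuron chain with only an excitatory weight from neuron 1 to neuron 2, the iteration recovers the closed-form values q_1 = Λ_1/r_1 and q_2 = q_1 w⁺_{12}/r_2.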
Copyright information
© 2018 Springer International Publishing AG
Cite this paper
Gelenbe, E., Yin, Y. (2018). Deep Learning with Dense Random Neural Networks. In: Gruca, A., Czachórski, T., Harezlak, K., Kozielski, S., Piotrowska, A. (eds) Man-Machine Interactions 5. ICMMI 2017. Advances in Intelligent Systems and Computing, vol 659. Springer, Cham. https://doi.org/10.1007/978-3-319-67792-7_1
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-67791-0
Online ISBN: 978-3-319-67792-7