Abstract
Deep learning is a subfield of machine learning, which in turn is a subfield of artificial intelligence. Deep learning models achieve extremely high accuracy across a huge range of applications, but they come with well-known limitations: each model works only for the specific task it was trained on, training times run from hours to days, and the models are inflexible, lack general intelligence, and are computationally intensive. The focus of this paper is training deep learning models on their training data using recursive parallel processors (or supercomputers). Such parallel training would help address the challenges mentioned above while supporting a variety of application domains.
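Since the abstract only outlines the approach, the following is a minimal illustrative sketch of one common way to parallelize deep learning training: data-parallel stochastic gradient descent, in which each processor computes a gradient on its own shard of the training data and the gradients are averaged before the shared weights are updated. The toy linear model, function names, and hyperparameters here are hypothetical assumptions for illustration; the paper's actual recursive parallel scheme is not specified in the abstract.

```python
# Minimal sketch of data-parallel SGD on a toy linear model (NumPy only).
# Each "worker" computes the gradient on its shard of the training data;
# the gradients are averaged (as an all-reduce would do on a parallel
# machine) and applied to the shared weights. Illustrative assumption,
# not the paper's actual method.
import numpy as np

def worker_gradient(w, X_shard, y_shard):
    """Gradient of mean squared error on one data shard."""
    residual = X_shard @ w - y_shard
    return 2.0 * X_shard.T @ residual / len(y_shard)

def parallel_sgd(X, y, n_workers=4, lr=0.1, epochs=200):
    w = np.zeros(X.shape[1])
    shards = list(zip(np.array_split(X, n_workers),
                      np.array_split(y, n_workers)))
    for _ in range(epochs):
        # On a real parallel machine these gradients are computed
        # concurrently on separate processors, then combined.
        grads = [worker_gradient(w, Xs, ys) for Xs, ys in shards]
        w -= lr * np.mean(grads, axis=0)
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))
    true_w = np.array([1.0, -2.0, 0.5, 3.0, 0.0])
    y = X @ true_w + 0.1 * rng.normal(size=1000)
    print("recovered weights:", np.round(parallel_sgd(X, y), 2))
```

Because the shard gradients are averaged, the update is mathematically equivalent to full-batch gradient descent on the combined data, which is what makes this decomposition a natural fit for multi-processor training.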
Copyright information
© 2020 Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Raheja, S., Chopra, R. (2020). Training Data on Recursive Parallel Processors for Deep Learning. In: Jain, V., Chaudhary, G., Taplamacioglu, M., Agarwal, M. (eds) Advances in Data Sciences, Security and Applications. Lecture Notes in Electrical Engineering, vol 612. Springer, Singapore. https://doi.org/10.1007/978-981-15-0372-6_5
DOI: https://doi.org/10.1007/978-981-15-0372-6_5
Publisher Name: Springer, Singapore
Print ISBN: 978-981-15-0371-9
Online ISBN: 978-981-15-0372-6
eBook Packages: Intelligent Technologies and Robotics (R0)