Deep Learning Model and Its Application in Big Data

  • Yuanming Zhou
  • Shifeng Zhao
  • Xuesong Wang
  • Wei Liu
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10918)

Abstract

In the era of big data, much data that previously seemed too hard to collect or use is now being exploited, adding millions of records to be processed. To extract valuable information from such large-scale data, it can be processed and analyzed with a well-developed deep learning framework. This study introduces the concept of deep learning and three common deep learning models: the Multilayer Perceptron, the Convolutional Neural Network, and the Recurrent Neural Network. It analyzes how these models have been improved to handle large-scale data, examines their capacity and diversity, and surveys innovative applications of deep learning across fields in the big-data setting. Looking ahead, the integration of big data and deep learning is expected to yield breakthroughs in many fields and, through continued innovation, to create ever more value for society.
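
For orientation, here is a minimal, hypothetical sketch of the three model families named above. The paper does not specify a framework; PyTorch and all layer sizes below are illustrative assumptions, not details taken from the study:

```python
# Illustrative only: tiny examples of the three architectures the abstract
# names. Framework choice (PyTorch) and all layer sizes are assumptions.
import torch
import torch.nn as nn

# Multilayer Perceptron: stacked fully connected layers over flattened inputs
mlp = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 10),
)

# Convolutional Neural Network: local filters plus pooling, then a classifier
cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                  # 28x28 feature maps -> 14x14
    nn.Flatten(),
    nn.Linear(16 * 14 * 14, 10),
)

# Recurrent Neural Network: an LSTM processing variable-length feature sequences
rnn = nn.LSTM(input_size=40, hidden_size=128, batch_first=True)

x_img = torch.randn(8, 1, 28, 28)     # batch of 28x28 grayscale images
x_seq = torch.randn(8, 100, 40)       # batch of 100-step, 40-dim sequences
print(mlp(x_img.flatten(1)).shape)    # torch.Size([8, 10])
print(cnn(x_img).shape)               # torch.Size([8, 10])
print(rnn(x_seq)[0].shape)            # torch.Size([8, 100, 128])
```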

Keywords

Big data · Deep learning · Multilayer Perceptron · Convolutional neural network · Recurrent neural network

Acknowledgements

This work was supported by Beijing Natural Science Foundation (Grant No. 4174094).

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Yuanming Zhou (1)
  • Shifeng Zhao (1)
  • Xuesong Wang (1)
  • Wei Liu (2)
  1. College of Information Science and Technology, Beijing Normal University, Beijing, China
  2. Department of Psychology, Beijing Normal University, Beijing, China