
Deep Learning Methods in Natural Language Processing

  • Alexis Stalin Alulema Flores
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 1194)

Abstract

The purpose of this paper is to provide a concise description of current deep learning methods for natural language processing (NLP) and to discuss their advantages and disadvantages. The paper further examines the applicability of each deep learning method in the context of natural language processing, and reviews a series of significant advances that have driven the processing, understanding, and generation of natural language.

Keywords

Natural language processing (NLP) · Deep learning · Neural networks


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. University of New Mexico, Albuquerque, USA
  2. Number8, Louisville, USA
