Learning Sparse Hidden States in Long Short-Term Memory

  • Conference paper
Artificial Neural Networks and Machine Learning – ICANN 2019: Deep Learning (ICANN 2019)

Abstract

Long Short-Term Memory (LSTM) is a powerful recurrent neural network architecture that is used successfully in many sequence modeling applications. Inside an LSTM unit, a vector called the “memory cell” memorizes the history. Another important vector, which works alongside the memory cell, represents the hidden state and is used to make a prediction at a specific step. The memory cell records the entire history, whereas the hidden state at a given time step generally needs to attend to only a small part of it. There is therefore an imbalance between the large amount of information carried by the memory cell and the small amount requested by the hidden state at a specific step. We propose to explicitly impose sparsity on the hidden states to adapt them to the information actually required. Extensive experiments show that this sparsity reduces computational complexity and improves the performance of LSTM networks. (The source code is available at https://github.com/feiyuhug/SHS_LSTM/tree/master.)

We acknowledge support by NSFC (61621136008) and German Research Foundation (DFG) under project CML (TRR 169).
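In a standard LSTM the hidden state is read out from the memory cell through the output gate, h_t = o_t ⊙ tanh(c_t), so every step exposes the full cell content even when only a few dimensions matter for the current prediction. The Python (PyTorch) sketch below illustrates one simple way to impose the kind of hidden-state sparsity described in the abstract: after each step, only the largest-magnitude entries of the hidden state are kept and the rest are zeroed. This is a minimal illustration, not the authors' exact formulation; the class name SparseHiddenLSTMCell and the keep_ratio parameter are hypothetical.

import torch
import torch.nn as nn

class SparseHiddenLSTMCell(nn.Module):
    """LSTM cell whose hidden state is sparsified after every step.

    Illustrative sketch only: keeps the `keep_ratio` fraction of hidden
    units with the largest magnitude and zeroes out the rest.
    """

    def __init__(self, input_size, hidden_size, keep_ratio=0.2):
        super().__init__()
        self.cell = nn.LSTMCell(input_size, hidden_size)
        self.k = max(1, int(keep_ratio * hidden_size))

    def forward(self, x, state):
        h, c = self.cell(x, state)                    # standard LSTM update
        # keep the k largest-magnitude entries of h for each sample
        topk = h.abs().topk(self.k, dim=-1).indices
        mask = torch.zeros_like(h).scatter_(-1, topk, 1.0)
        return h * mask, c                            # sparse hidden state, dense memory cell

if __name__ == "__main__":
    cell = SparseHiddenLSTMCell(input_size=8, hidden_size=16, keep_ratio=0.25)
    x = torch.randn(4, 8)                             # batch of 4 inputs
    state = (torch.zeros(4, 16), torch.zeros(4, 16))  # initial (h, c)
    h, c = cell(x, state)
    print((h != 0).float().sum(dim=-1))               # at most 4 active units per sample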

Author information

Correspondence to Xiaolin Hu.

Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Yu, N., Weber, C., Hu, X. (2019). Learning Sparse Hidden States in Long Short-Term Memory. In: Tetko, I., Kůrková, V., Karpov, P., Theis, F. (eds.) Artificial Neural Networks and Machine Learning – ICANN 2019: Deep Learning. ICANN 2019. Lecture Notes in Computer Science, vol. 11728. Springer, Cham. https://doi.org/10.1007/978-3-030-30484-3_24

  • DOI: https://doi.org/10.1007/978-3-030-30484-3_24

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-30483-6

  • Online ISBN: 978-3-030-30484-3

  • eBook Packages: Computer Science; Computer Science (R0)
