Network of Recurrent Neural Networks: Design for Emergence

  • Conference paper
Neural Information Processing (ICONIP 2018)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 11302)

Abstract

Emergence plays an important role in Recurrent Neural Networks (RNNs). To design for emergence, we approach the design of recurrent neural structures both qualitatively and quantitatively from the perspective of systems theory. On the qualitative side, we borrow two methodologies from systems theory, aggregation and specialization, to design a novel neural structure, which we name the “Network Of Recurrent neural networks” (NOR). In a NOR, RNNs are treated as high-level neurons and used to build high-level layers. Experiments on three predictive tasks show that, with the same number of parameters, the implemented NOR models outperform conventional RNN structures (e.g., vanilla RNN, LSTM, and GRU). More importantly, on the quantitative side, we introduce an information-theoretical framework that quantifies the information dynamics in recurrent neural structures. The evaluation results show that several NOR models achieve emergent information-processing capabilities similar to or better than those of LSTM.
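
The paper's exact NOR wiring is not given in this abstract, so the following is a minimal sketch, assuming one plausible reading of "RNNs as high-level neurons": each neuron of a layer is itself a small GRU cell whose hidden state is read out to a single value per time step, and the layer output is the concatenation of those values. The class NORLayer, the per-neuron linear readouts, and the tanh squashing are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class NORLayer(nn.Module):
    """A layer whose 'neurons' are themselves small recurrent cells."""

    def __init__(self, input_size, n_neurons, neuron_hidden=8):
        super().__init__()
        # One small GRU cell per high-level "neuron" (specialization).
        self.cells = nn.ModuleList(
            [nn.GRUCell(input_size, neuron_hidden) for _ in range(n_neurons)]
        )
        # Each cell's hidden state is read out to one value per step.
        self.readouts = nn.ModuleList(
            [nn.Linear(neuron_hidden, 1) for _ in range(n_neurons)]
        )

    def forward(self, x):
        # x: (seq_len, batch, input_size) -> (seq_len, batch, n_neurons)
        states = [x.new_zeros(x.size(1), c.hidden_size) for c in self.cells]
        outputs = []
        for t in range(x.size(0)):
            step = []
            for i, cell in enumerate(self.cells):
                states[i] = cell(x[t], states[i])
                step.append(torch.tanh(self.readouts[i](states[i])))
            # Aggregation: the layer output collects all neuron outputs.
            outputs.append(torch.cat(step, dim=-1))
        return torch.stack(outputs)

Stacking two such layers, or feeding the final layer output to a task head, would mirror how ordinary recurrent layers are composed; the abstract does not say which arrangement the authors used.

The abstract likewise names an information-theoretical framework without giving its measures. Frameworks of this kind in the information-dynamics literature commonly estimate quantities such as active information storage from a unit's activation time series; the sketch below, assuming equal-width binning of activations and a history length of one, is one minimal plug-in estimator and is not the paper's evaluation code.

import numpy as np
from collections import Counter

def active_information_storage(series, bins=4):
    """Plug-in estimate of I(x_{t-1}; x_t) in bits for a scalar series,
    after equal-width binning (history length of one assumed)."""
    edges = np.histogram_bin_edges(series, bins=bins)
    s = np.digitize(series, edges[1:-1])  # bin indices 0..bins-1
    past, present = s[:-1], s[1:]
    n = len(present)
    joint = Counter(zip(past, present))
    p_past, p_present = Counter(past), Counter(present)
    ais = 0.0
    for (a, b), c in joint.items():
        # p(a,b) * log2( p(a,b) / (p(a) * p(b)) ), with counts over n pairs.
        ais += (c / n) * np.log2(n * c / (p_past[a] * p_present[b]))
    return ais

Applied per high-level neuron and averaged, such an estimate would support the "similar or better ... than LSTM" comparison the abstract reports, under an equal parameter budget.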

Author information

Correspondence to Yi Zeng.

Copyright information

© 2018 Springer Nature Switzerland AG

About this paper

Cite this paper

Wang, C., Zeng, Y. (2018). Network of Recurrent Neural Networks: Design for Emergence. In: Cheng, L., Leung, A., Ozawa, S. (eds.) Neural Information Processing. ICONIP 2018. Lecture Notes in Computer Science, vol. 11302. Springer, Cham. https://doi.org/10.1007/978-3-030-04179-3_8

  • DOI: https://doi.org/10.1007/978-3-030-04179-3_8

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-04178-6

  • Online ISBN: 978-3-030-04179-3

  • eBook Packages: Computer Science, Computer Science (R0)
