Abstract
This chapter covers the use of contextual information across text. To understand textual information in any form (speech, text, or print) and in any language, we try to capture and relate the present and past contexts and to draw something meaningful from them. This is because the structure of text creates links within a sentence and across sentences, much like thoughts, which persist over time.
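The idea of carrying past context forward through a sequence is exactly what a recurrent neural network does: at each step, a hidden state is updated from the current input and the previous state. The sketch below is a minimal, illustrative unrolled RNN forward pass in NumPy; all names, sizes, and weight initializations are assumptions for demonstration, not code from the chapter.

```python
import numpy as np

# Minimal sketch of a recurrent cell unrolled over a sequence.
# Sizes and weights are illustrative assumptions.
np.random.seed(0)

input_size, hidden_size, seq_len = 4, 3, 5
W_xh = np.random.randn(hidden_size, input_size) * 0.1   # input-to-hidden weights
W_hh = np.random.randn(hidden_size, hidden_size) * 0.1  # hidden-to-hidden weights
b_h = np.zeros(hidden_size)

def rnn_forward(xs):
    """Run the cell over a sequence; the hidden state h carries past context."""
    h = np.zeros(hidden_size)
    states = []
    for x in xs:  # one step per token / time step
        # The new state mixes the current input with the accumulated history,
        # which is how earlier context influences later outputs.
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)
        states.append(h)
    return np.stack(states)

sequence = np.random.randn(seq_len, input_size)
hidden_states = rnn_forward(sequence)
print(hidden_states.shape)  # one hidden-state vector per time step: (5, 3)
```

Because the same weights `W_xh` and `W_hh` are reused at every step, the network can, in principle, relate information from any earlier position in the sequence to the current one.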
Copyright information
© 2018 Palash Goyal, Sumit Pandey, Karan Jain
Cite this chapter
Goyal, P., Pandey, S., Jain, K. (2018). Unfolding Recurrent Neural Networks. In: Deep Learning for Natural Language Processing. Apress, Berkeley, CA. https://doi.org/10.1007/978-1-4842-3685-7_3
Publisher Name: Apress, Berkeley, CA
Print ISBN: 978-1-4842-3684-0
Online ISBN: 978-1-4842-3685-7