Abstract
This chapter revisits language processing, this time equipped with deep learning. Recurrent neural networks and autoencoders are prerequisites for this chapter, but the exposition uses them mainly in a conceptual rather than a computational sense. The idea of word embeddings is explored, and the main deep learning method for representing text, the neural word embedding, is described through the famous Word2vec algorithm in both its Skip-gram and CBOW variants. A CBOW Word2vec architecture is examined in detail and presented in Python code. The code presupposes a text preprocessed into a list of words, a step written and explained in the previous chapters, and it uses PCA to reduce the dimensionality of the word vectors so that they can easily be displayed. The chapter concludes with word analogies, simple vector calculations that form the basis of analogical reasoning: a reasoning calculus that is neural all the way through, with no symbolic manipulation involved.
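To make the closing idea concrete, here is a minimal sketch of analogy arithmetic over word vectors. The embeddings below are toy values invented purely for illustration; they are not the chapter's code or trained Word2vec output.

```python
import numpy as np

# Toy 3-dimensional embeddings, made up purely for illustration;
# real Word2vec vectors typically have 100-300 dimensions.
emb = {
    "king":  np.array([0.8, 0.9, 0.1]),
    "queen": np.array([0.8, 0.1, 0.9]),
    "man":   np.array([0.2, 0.9, 0.1]),
    "woman": np.array([0.2, 0.1, 0.9]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# The classic analogy: king - man + woman should land near queen.
target = emb["king"] - emb["man"] + emb["woman"]
best = max((w for w in emb if w not in {"king", "man", "woman"}),
           key=lambda w: cosine(emb[w], target))
print(best)  # queen
```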
Notes
1. If the context window were 2, it would take four words: two before the target word and two after (a minimal sketch of this windowing follows these notes).
2. If we were to save to and load from an H5 file, we would be saving and loading all the weights into a new network of the same configuration, possibly fine-tuning them, and then extracting just the weight matrix with the same code we used here (a sketch of this round trip also follows these notes).
3. More precisely: to transform the matrix into a decorrelated matrix whose columns are arranged in order of descending variance, and then keep only the first two columns (the last sketch below illustrates this).
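A minimal sketch of the windowing described in note 1, assuming the text has already been tokenized into a list of words; the sample sentence and variable names are illustrative, not the chapter's.

```python
# Extract (context, target) pairs with a context window of 2.
text = ["the", "quick", "brown", "fox", "jumps", "over", "the", "dog"]
window = 2

pairs = []
for i, target in enumerate(text):
    # Two words before and two words after the target word.
    context = text[max(0, i - window):i] + text[i + 1:i + 1 + window]
    pairs.append((context, target))

print(pairs[2])  # (['the', 'quick', 'fox', 'jumps'], 'brown')
```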
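A sketch of the H5 round trip from note 2. It assumes a Keras model of a CBOW-like shape; the layer sizes, layer choices, and file name here are hypothetical, not taken from the book.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

vocab_size, emb_dim = 5000, 100  # hypothetical sizes

def build_model():
    # One-hot input projected to the embedding dimension, then softmax
    # over the vocabulary; a CBOW-like shape for illustration only.
    model = Sequential([
        Dense(emb_dim, input_shape=(vocab_size,), activation="linear"),
        Dense(vocab_size, activation="softmax"),
    ])
    model.compile(loss="categorical_crossentropy", optimizer="adam")
    return model

model = build_model()
model.save_weights("cbow_weights.h5")        # save all weights to H5

new_model = build_model()                    # same configuration
new_model.load_weights("cbow_weights.h5")    # load, optionally fine-tune
embedding_matrix = new_model.layers[0].get_weights()[0]  # just the weight matrix
```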
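And a sketch of the PCA step from note 3, using plain NumPy: center the matrix, obtain the principal directions via SVD (ordered by descending variance), and keep the first two coordinates. The random embedding matrix here stands in for the real learned one.

```python
import numpy as np

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(50, 100))      # 50 word vectors, 100 dims

centered = embeddings - embeddings.mean(axis=0)
# SVD gives the principal directions, ordered by descending variance.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
coords_2d = centered @ vt[:2].T              # keep the first two columns
print(coords_2d.shape)                       # (50, 2)
```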
Copyright information
© 2018 Springer International Publishing AG
About this chapter
Cite this chapter
Skansi, S. (2018). Neural Language Models. In: Introduction to Deep Learning. Undergraduate Topics in Computer Science. Springer, Cham. https://doi.org/10.1007/978-3-319-73004-2_9
DOI: https://doi.org/10.1007/978-3-319-73004-2_9
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-73003-5
Online ISBN: 978-3-319-73004-2
eBook Packages: Computer Science, Computer Science (R0)