Capturing Lexical, Grammatical, and Semantic Information with Vecsigrafo
Embedding algorithms work by optimizing the distance between a word and its context(s), producing an embedding space that encodes their distributional representations. In addition to single words or word pieces, other features resulting from a deeper analysis of the text can be used to enrich such representations with additional information. These features are influenced by the tokenization strategy used to chunk the text and can include not only lexical and part-of-speech information but also annotations about the disambiguated sense of a word according to a structured knowledge graph. In this chapter we analyze the impact that explicitly adding lexical, grammatical, and semantic information during the training of Vecsigrafo has on the resulting representations, and whether this can enhance their downstream performance. To illustrate this analysis we focus on corpora from the scientific domain, where rich multi-word expressions are frequent and therefore require advanced tokenization strategies.
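As a rough illustration of the kind of tokenization enrichment discussed above, the following sketch merges known multi-word expressions into single tokens and attaches a sense identifier to each token. All names here (the MWE list, the sense IDs, the functions `tokenize_with_mwes` and `enrich`) are hypothetical and do not come from the chapter or the actual Vecsigrafo pipeline; they only show the general idea of producing sense-annotated tokens instead of plain surface words.

```python
# Hypothetical sketch: enriching a token stream with multi-word expressions
# and semantic (sense) annotations. The MWE table and sense IDs below are
# illustrative toy values, not part of the Vecsigrafo implementation.

MWES = {("knowledge", "graph"): "knowledge_graph",
        ("word", "embedding"): "word_embedding"}

SENSES = {"knowledge_graph": "sense#kg01",   # toy knowledge-graph concept IDs
          "word_embedding": "sense#we01"}

def tokenize_with_mwes(words):
    """Greedy left-to-right chunking that merges known multi-word expressions."""
    tokens, i = [], 0
    while i < len(words):
        if i + 1 < len(words) and (words[i], words[i + 1]) in MWES:
            tokens.append(MWES[(words[i], words[i + 1])])
            i += 2
        else:
            tokens.append(words[i])
            i += 1
    return tokens

def enrich(tokens):
    """Pair each token with a sense ID; a real pipeline would also add lemma and POS."""
    return [(t, SENSES.get(t, "sense#unk")) for t in tokens]

words = "a knowledge graph improves a word embedding".split()
tokens = tokenize_with_mwes(words)
pairs = enrich(tokens)
# tokens -> ['a', 'knowledge_graph', 'improves', 'a', 'word_embedding']
```

The resulting (surface, sense) pairs could then be fed to any embedding trainer, so that co-occurrence statistics are collected over disambiguated concepts rather than ambiguous surface forms.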