A Targeted Retraining Scheme of Unsupervised Word Embeddings for Specific Supervised Tasks

  • Conference paper

Advances in Knowledge Discovery and Data Mining (PAKDD 2017)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 10235)

Abstract

This paper proposes a simple retraining scheme that purposefully adjusts unsupervised word embeddings for specific supervised tasks, such as sentence classification. Unlike current methods, which fine-tune word embeddings on the training set through the supervised learning procedure, our method treats task labels as implicit context information when retraining the embeddings, so that every word required by the intended task obtains a task-specific representation. Moreover, because the retraining is independent of the supervised learning process, it carries less risk of over-fitting. We validate the method on various sentence classification tasks; the accuracy improvements are remarkable when only a scarce training set is available.
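
To make the label-as-context idea concrete, the following is a minimal sketch in Python using gensim's word2vec implementation (the toolkit pointed to in note 2 below). The toy corpus, the pseudo-word label format, and the hyperparameters are illustrative assumptions rather than the authors' actual procedure, and a faithful reproduction would initialise the model from unsupervised pretrained vectors before retraining:

    # Hypothetical sketch: inject each sentence's class label as a pseudo-word
    # so that retraining treats the label as shared context for that class.
    from gensim.models import Word2Vec

    # Toy labelled sentences for a sentence classification task (assumed data).
    labelled = [
        (["this", "movie", "was", "great"], "POS"),
        (["a", "dull", "and", "boring", "film"], "NEG"),
    ]

    # Append the label token to every sentence of its class; with a window of 5,
    # the label falls inside the context of most words in these short sentences.
    augmented = [tokens + ["__LABEL_%s__" % y] for tokens, y in labelled]

    # Train embeddings on the augmented corpus; in the paper's setting this model
    # would start from pretrained unsupervised vectors rather than from scratch.
    model = Word2Vec(sentences=augmented, vector_size=50, window=5,
                     min_count=1, epochs=50)

    # Words of the same class are pulled toward their shared label context,
    # yielding task-specific representations.
    print(model.wv.most_similar("great", topn=3))

Because this retraining step only reshapes the embedding space and never touches the downstream classifier's parameters, it stays independent of the supervised learning process, which is the property the abstract credits for the reduced over-fitting risk.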

Notes

  1. Words that exist in the vocabulary but appear only in the prediction corpus.

  2. http://radimrehurek.com/gensim/models/word2vec.html.

  3. https://github.com/yoonkim/CNN_sentence.

  4. https://code.google.com/archive/p/word2vec/.

  5. http://nlp.stanford.edu/projects/glove/.

Acknowledgments

This work was supported by the 111 Project of China under Grant no. B08004, the National Natural Science Foundation of China (61273217, 61300080), and the Ph.D. Programs Foundation of the Ministry of Education of China (20130005110004).

Author information

Correspondence to Weiran Xu.

Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Qin, P., Xu, W., Guo, J. (2017). A Targeted Retraining Scheme of Unsupervised Word Embeddings for Specific Supervised Tasks. In: Kim, J., Shim, K., Cao, L., Lee, JG., Lin, X., Moon, YS. (eds) Advances in Knowledge Discovery and Data Mining. PAKDD 2017. Lecture Notes in Computer Science (LNAI), vol 10235. Springer, Cham. https://doi.org/10.1007/978-3-319-57529-2_1

  • DOI: https://doi.org/10.1007/978-3-319-57529-2_1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-57528-5

  • Online ISBN: 978-3-319-57529-2

  • eBook Packages: Computer Science, Computer Science (R0)
