
Scientific Keyphrase Extraction: Extracting Candidates with Semi-supervised Data Augmentation

  • Qianying Liu
  • Daisuke Kawahara
  • Sujian Li
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11221)

Abstract

Keyphrase extraction provides an effective way of organizing scientific documents. For this task, neural methods usually suffer from performance instability due to data scarcity. In this paper, we adopt a two-step pipeline consisting of candidate extraction and keyphrase ranking, where candidate extraction is key to the overall performance. In the candidate extraction step, to overcome the low recall of traditional rule-based methods, we propose a novel semi-supervised data augmentation method in which a neural tagging model and a discriminative classifier boost each other and select more confident phrases as candidates. With more reasonable candidates, keyphrases are identified with improved recall. Experiments on SemEval 2017 Task 10 show that our model achieves competitive results.
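
To make the two-step pipeline concrete, the minimal Python sketch below illustrates the semi-supervised augmentation loop described in the abstract: a tagging model and a discriminative classifier are retrained in turns, and only phrases that both models find confident are added back to the candidate pool. This is an illustrative sketch, not the authors' implementation; the class names (TaggingModel, DiscriminativeClassifier), the toy confidence scores, and the agreement threshold are assumptions introduced here, whereas the actual system would use a neural sequence tagger and a trained phrase classifier.

# Hypothetical sketch of the semi-supervised candidate augmentation loop.
# All class names, scoring functions, and thresholds are illustrative
# placeholders, not the paper's actual models.

class TaggingModel:
    """Stand-in for a neural sequence tagger over candidate phrases."""
    def fit(self, phrases, labels):
        # Remember the phrases labeled as keyphrases.
        self.keyphrases = {p for p, y in zip(phrases, labels) if y == 1}
    def confidence(self, phrase):
        # Toy confidence: token overlap with known keyphrases.
        tokens = set(phrase.split())
        known = {t for p in self.keyphrases for t in p.split()}
        return len(tokens & known) / max(len(tokens), 1)

class DiscriminativeClassifier:
    """Stand-in for a phrase-level classifier over handcrafted features."""
    def fit(self, phrases, labels):
        # Average length of positive phrases as a single toy feature.
        positives = [len(p.split()) for p, y in zip(phrases, labels) if y == 1]
        self.avg_len = sum(positives) / max(len(positives), 1)
    def confidence(self, phrase):
        # Toy confidence: closeness to the average keyphrase length.
        return 1.0 / (1.0 + abs(len(phrase.split()) - self.avg_len))

def augment_candidates(labeled, unlabeled, rounds=3, threshold=0.5):
    """Alternately retrain both models; keep phrases both find confident."""
    tagger, classifier = TaggingModel(), DiscriminativeClassifier()
    candidates = list(labeled)
    for _ in range(rounds):
        phrases = [p for p, _ in candidates]
        labels = [y for _, y in candidates]
        tagger.fit(phrases, labels)
        classifier.fit(phrases, labels)
        # A phrase is promoted only if both models agree it is confident.
        newly_added = [p for p in unlabeled
                       if min(tagger.confidence(p), classifier.confidence(p)) >= threshold]
        for phrase in newly_added:
            unlabeled.remove(phrase)
            candidates.append((phrase, 1))
        if not newly_added:
            break
    return candidates

if __name__ == "__main__":
    labeled = [("keyphrase extraction", 1), ("neural network", 1), ("the paper", 0)]
    unlabeled = ["semi-supervised keyphrase extraction", "recurrent neural network", "last year"]
    for phrase, label in augment_candidates(labeled, unlabeled):
        print(label, phrase)

In the paper's setting, the candidates retained by such a loop would then be passed on to the keyphrase ranking step.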

Keywords

Keyphrase extraction · Neural networks · Semi-supervised learning

Notes

Acknowledgement

We thank the anonymous reviewers for their insightful comments on this paper. This work was partially supported by the National Natural Science Foundation of China (61572049 and 61273278).


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. School of Mathematical Science, Peking University, Beijing, China
  2. Key Laboratory of Computational Linguistics, MOE, Peking University, Beijing, China
  3. Graduate School of Informatics, Kyoto University, Kyoto, Japan
