Keyphrase Extraction Based on Optimized Random Walks on Multiple Word Relations

  • Wenyan Chen
  • Zheng Liu (corresponding author)
  • Wei Shi
  • Jeffrey Xu Yu
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10988)

Abstract

Extracting keyphrases from documents helps condense the information in a document and further assists information retrieval. In this paper, we construct a multi-relational graph that captures heterogeneous latent word relations (co-occurrence and semantic) in a document. We then optimize the random walks on this multi-relational graph to determine the importance of each node and generate keyphrases from the top-ranked words. Experimental results show that our method outperforms previous methods.
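To make the ranking step concrete, the sketch below is a minimal illustration, not the optimization proposed in the paper: it runs a PageRank-style random walk on a convex combination of a co-occurrence relation and a semantic relation between words. The mixing weight `lam`, the window size, and the `toy_sim` placeholder similarity are hypothetical choices introduced only for this example.

```python
import itertools
import numpy as np


def cooccurrence_matrix(tokens, vocab, window=2):
    """Symmetric co-occurrence counts within a sliding window."""
    idx = {w: i for i, w in enumerate(vocab)}
    A = np.zeros((len(vocab), len(vocab)))
    for i, w in enumerate(tokens):
        for j in range(i + 1, min(i + window + 1, len(tokens))):
            u, v = idx[w], idx[tokens[j]]
            if u != v:
                A[u, v] += 1.0
                A[v, u] += 1.0
    return A


def semantic_matrix(vocab, sim):
    """Pairwise similarities from a user-supplied sim(w1, w2) function."""
    S = np.zeros((len(vocab), len(vocab)))
    for (i, u), (j, v) in itertools.combinations(list(enumerate(vocab)), 2):
        S[i, j] = S[j, i] = sim(u, v)
    return S


def rank_words(tokens, sim, lam=0.5, damping=0.85, iters=100):
    """PageRank-style walk on lam * co-occurrence + (1 - lam) * semantic."""
    vocab = sorted(set(tokens))
    n = len(vocab)
    M = lam * cooccurrence_matrix(tokens, vocab) + (1.0 - lam) * semantic_matrix(vocab, sim)
    row_sums = M.sum(axis=1, keepdims=True)
    # Row-normalise the combined relation; rows with no edges fall back to a uniform jump.
    P = np.divide(M, row_sums, out=np.full_like(M, 1.0 / n), where=row_sums > 0)
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1.0 - damping) / n + damping * (P.T @ r)
    return dict(zip(vocab, r))


if __name__ == "__main__":
    # Placeholder semantic relation: words sharing a first letter count as related.
    toy_sim = lambda u, v: 1.0 if u[0] == v[0] else 0.0
    tokens = "random walks on word graphs rank words for keyphrase extraction".split()
    scores = rank_words(tokens, toy_sim)
    print(sorted(scores, key=scores.get, reverse=True)[:3])
```

In practice the semantic relation would come from a lexical resource or word embeddings, and candidate keyphrases would be formed by merging adjacent top-ranked words; both steps are omitted here for brevity.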

Keywords

Keyphrase extraction · Multi-relational graph · Optimized random walks

Notes

Acknowledgements

This work is supported in part by Jiangsu Provincial Natural Science Foundation of China under Grant BK20171447, Jiangsu Provincial University Natural Science Research of China under Grant 17KJB520024, and Nanjing University of Posts and Telecommunications under Grant No. NY215045.

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Wenyan Chen
    • 1
    • 2
  • Zheng Liu (corresponding author)
    • 1
    • 2
  • Wei Shi
    • 3
  • Jeffrey Xu Yu
    • 3
  1. Jiangsu Key Laboratory of Big Data Security and Intelligent Processing, Nanjing, China
  2. School of Computer Science, Nanjing University of Posts and Telecommunications, Nanjing, China
  3. The Chinese University of Hong Kong, Sha Tin, Hong Kong