Quality Classification of Scientific Publications Using Hybrid Summarization Model

  • Hafiz Ahmad Awais Chaudhary
  • Saeed-Ul Hassan
  • Naif Radi Aljohani
  • Ali Daud
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11279)

Abstract

In this paper (note that the dataset and code to reproduce the results can be accessed at the following URL: https://github.com/slab-itu/hsm), we assess the quality of scientific publications by measuring the relationship between full-text papers and their abstracts. A hybrid summarization model is proposed that combines text summarization and information retrieval (IR) techniques to classify scientific papers into different ranks based on the correctness of their abstracts. Using the proposed model, we study the relationship between a correctly written abstract (i.e., one in accordance with the full text) and the scholarly influence of scientific publications. The proposed supervised machine learning model is deployed on 460 full-text publications randomly downloaded from the Social Science Research Network (SSRN). In order to quantify the scholarly influence of publications, a composite score provided by SSRN is used that combines usage indicators with citation counts. This score is then used to label the publications as high or low rank. The results show that papers whose abstracts are in accordance with their full text also attain high scholarly rank, with an encouraging classification accuracy of 73.91%. Finally, an Area Under the Curve (AUC) of 0.701 for the receiver operating characteristic is achieved, which outperforms the traditional IR and summarization models with AUCs of 0.536 and 0.58, respectively. Overall, our findings suggest that a correctly written abstract, in accordance with its full text, has a high probability of attracting more social usage and citations, and vice versa.
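To make the classification pipeline concrete, the sketch below shows one possible way to implement the core idea. It is not the authors' released code (which is available at the GitHub URL above): the naive extractive summarizer, the TF-IDF cosine-similarity feature, and the logistic-regression classifier are illustrative stand-ins for the paper's hybrid summarization model. The sketch scores how well each abstract agrees with a summary of its full text, uses that score to predict the high/low rank label, and reports accuracy and ROC AUC, mirroring the evaluation described in the abstract. scikit-learn is assumed to be installed.

# Illustrative sketch only (assumes scikit-learn); the authors' actual code is at
# https://github.com/slab-itu/hsm. The summarizer, similarity feature, and classifier
# below are hypothetical stand-ins for the paper's hybrid summarization model.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.model_selection import train_test_split

def extractive_summary(full_text, n_sentences=10):
    """Naive extractive summary: keep the sentences with the highest mean TF-IDF weight."""
    sentences = [s.strip() for s in full_text.split(".") if s.strip()]
    if len(sentences) <= n_sentences:
        return full_text
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(sentences)
    scores = np.asarray(tfidf.mean(axis=1)).ravel()
    keep = sorted(np.argsort(scores)[-n_sentences:])  # keep top sentences in original order
    return ". ".join(sentences[i] for i in keep)

def abstract_fulltext_similarity(abstract, full_text):
    """Cosine similarity between an abstract and an extractive summary of its full text."""
    vectors = TfidfVectorizer(stop_words="english").fit_transform(
        [abstract, extractive_summary(full_text)])
    return float(cosine_similarity(vectors[0], vectors[1])[0, 0])

def evaluate(papers, labels):
    """papers: list of (abstract, full_text) pairs; labels: 1 = high rank, 0 = low rank
    (in the paper, labels come from the SSRN composite usage/citation score)."""
    X = np.array([[abstract_fulltext_similarity(a, f)] for a, f in papers])
    y = np.array(labels)
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, random_state=0, stratify=y)
    clf = LogisticRegression().fit(X_tr, y_tr)
    acc = accuracy_score(y_te, clf.predict(X_te))
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    return acc, auc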

Keywords

Hybrid summarization model · Classification of scientific publications · Information retrieval · Summarization · Social Science Research Network


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. Information Technology University, Lahore, Pakistan
  2. King Abdulaziz University, Jeddah, Saudi Arabia
