Indicating Studies’ Quality Based on Open Data in Digital Libraries
Researchers publish papers to report their research results and, thus, contribute to a steadily growing corpus of knowledge. To avoid unintentionally repeating existing research and studies, researchers need to be aware of this corpus. For this purpose, they crawl digital libraries and conduct systematic literature reviews to summarize existing knowledge. However, such approaches face several issues: not all documents are available to every researcher, relevant results may be missed due to ranking algorithms, and manually assessing the quality of a document requires time and effort. In this paper, we provide an overview of the information that different digital libraries in computer science make publicly available. Based on these results, we derive a taxonomy that describes the connections between this information and discuss its suitability for quality assessments. Overall, we observe that bibliographic data and simple citation counts are available in almost all libraries, with some libraries providing rather unique information. Some of this information may be used to improve automated quality assessment, but only with limitations.
Keywords: Citation counts · Quality assessment · Literature analysis · Digital libraries
This research is supported by the DAAD STIBET Matching Funds grant.