Mapping the evaluation results between quantitative metrics and meta-synthesis from experts’ judgements: evidence from the Supply Chain Management and Logistics journals ranking

  • Lili Yuan
  • Jianping Li
  • Ruoyun Li
  • Xiaoli Lu
  • Dengsheng Wu

Abstract

Meta-synthesis from experts’ judgements and quantitative metrics are the two main forms of evaluation, but each has limitations. This paper constructs a framework for mapping the evaluation results between quantitative metrics and experts’ judgements so that these limitations can be mitigated. In this way, the weights of the metrics in the quantitative evaluation are obtained objectively, and the validity of the results can be verified. The weighted average percentile (WAP) method is employed to aggregate different experts’ judgements into standard WAP scores. The Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) is used to map the quantitative results onto the experts’ judgements, with the WAP scores set equal to the closeness coefficients generated by TOPSIS. Because the closeness coefficients depend on the weights of the quantitative metrics, the mapping procedure is transformed into an optimization problem, and a genetic algorithm is introduced to search for the best weights. An academic journal ranking in the field of Supply Chain Management and Logistics (SCML) is used to test the validity of the mapping results. Four prominent ranking lists, from the Association of Business Schools, the Australian Business Deans Council, the German Academic Association for Business Research, and the Comité National de la Recherche Scientifique, were selected to represent different experts’ judgements. Twelve indices, including the impact factor (IF), Eigenfactor Score (ES), H-index, SCImago Journal Rank, and Source Normalized Impact per Paper (SNIP), were chosen for the quantitative evaluation. The results show that the mapping possesses high validity, with a relative error between the experts’ judgements and the quantitative metrics of 43.4%, and the corresponding best weights are determined at the same time. Several interesting findings follow. First, the H-index, Impact Per Publication (IPP), and SNIP play dominant roles in evaluating the quality of SCML journals.
Second, all the metrics are positively correlated, although the strength of the correlation varies. For example, ES and NE are perfectly positively correlated with each other, yet they have the lowest correlations with the other metrics, whereas metrics such as IF, IFWJ, 5-year IF, and IPP are highly correlated. Third, some highly correlated metrics may nevertheless perform differently in quality evaluation, such as IPP and 5-year IF. Therefore, when mapping quantitative metrics to experts’ judgements, academic fields should be treated distinctly.
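The mapping pipeline described above can be sketched in a few lines: compute TOPSIS closeness coefficients for a journals-by-metrics matrix under candidate weights, measure the relative error against the WAP scores, and search the weight simplex for the error-minimizing weights. The sketch below is illustrative only; it uses a crude random search as a stand-in for the paper's genetic algorithm, assumes all metrics are benefit criteria, and the function names and data are hypothetical.

```python
import numpy as np

def topsis_closeness(X, w):
    """Closeness coefficients for a journals-by-metrics matrix X
    under metric weights w (all metrics treated as benefit criteria)."""
    V = w * X / np.linalg.norm(X, axis=0)        # vector-normalize, then weight
    ideal, anti = V.max(axis=0), V.min(axis=0)   # ideal / anti-ideal solutions
    d_pos = np.linalg.norm(V - ideal, axis=1)    # distance to the ideal
    d_neg = np.linalg.norm(V - anti, axis=1)     # distance to the anti-ideal
    return d_neg / (d_pos + d_neg)

def mapping_error(X, w, wap):
    """Relative error between TOPSIS closeness coefficients and WAP scores."""
    return np.abs(topsis_closeness(X, w) - wap).sum() / np.abs(wap).sum()

def search_weights(X, wap, iters=2000, seed=0):
    """Random search over the weight simplex (a simple stand-in for the GA)."""
    rng = np.random.default_rng(seed)
    best_w = np.full(X.shape[1], 1.0 / X.shape[1])   # start from uniform weights
    best_err = mapping_error(X, best_w, wap)
    for _ in range(iters):
        w = rng.random(X.shape[1])
        w /= w.sum()                                  # keep weights summing to 1
        err = mapping_error(X, w, wap)
        if err < best_err:
            best_w, best_err = w, err
    return best_w, best_err
```

A GA would replace the random draws with selection, crossover, and mutation over a population of weight vectors, but the objective, i.e. the relative error between closeness coefficients and WAP scores, is the same.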

Keywords

Journal ranking · Quantitative metrics · Experts’ judgement · TOPSIS · Supply Chain Management and Logistics

Notes

Acknowledgements

This research has been supported by grants from the National Natural Science Foundation of China (71874180, 71425002, 71840024) and the Key Research Program of Frontier Sciences of the Chinese Academy of Sciences (QYZDB-SSW-SYS036).

Compliance with ethical standards

Conflicts of interest

Neither the entire manuscript nor any part of its content has been published or accepted elsewhere, and it has not been submitted to any other journal. The authors report no conflicts of interest. The authors alone are responsible for the content and writing of the paper.

Copyright information

© Springer-Verlag GmbH Germany, part of Springer Nature 2019

Authors and Affiliations

  1. Institutes of Science and Development, Chinese Academy of Sciences, Beijing, China
  2. University of Chinese Academy of Sciences, Beijing, China
  3. Department of Management Science, National Natural Science Foundation of China, Beijing, China
  4. National Geological Library of China, Beijing, China