
Measuring the match between evaluators and evaluees: cognitive distances between panel members and research groups at the journal level


Abstract

When research groups are evaluated by an expert panel, it is an open question how to determine the match between the panel and the research groups. In this paper, we outline two quantitative approaches that determine the cognitive distance between evaluators and evaluees, based on the journals in which they have published. We use example data from four research evaluations carried out between 2009 and 2014 at the University of Antwerp.

While the barycenter approach is based on a journal map, the similarity-adapted publication vector (SAPV) approach is based on the full journal similarity matrix. Both approaches determine an entity’s profile based on the journals in which it has published. Subsequently, we determine the Euclidean distance between the barycenter or SAPV profiles of two entities as an indicator of the cognitive distance between them. Using a bootstrapping approach, we determine confidence intervals for these distances. As such, the present article constitutes a refinement of a previous proposal that operates on the level of Web of Science subject categories.
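To make the two approaches concrete, the sketch below implements both profiles, the Euclidean distance, and the bootstrap confidence interval in Python with NumPy (the stack cited in note 2 below). This is a minimal illustration, not the paper's exact specification: the journal map coordinates, the similarity matrix, the unit-length SAPV normalisation, and all toy data (coords, sim, panel_pubs, group_pubs) are hypothetical assumptions.

```python
import numpy as np

def barycenter(coords, counts):
    """Weighted barycenter of a publication profile on a 2-D journal map."""
    return counts @ coords / counts.sum()

def sapv(sim, counts):
    """Similarity-adapted publication vector: spread the per-journal counts
    over related journals via the journal-journal similarity matrix.
    The unit-length normalisation is an assumption of this sketch."""
    v = sim @ counts
    return v / np.linalg.norm(v)

def bootstrap_distance_ci(pubs_a, pubs_b, profile, n_journals,
                          n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the Euclidean distance
    between the profiles of two entities.

    pubs_a, pubs_b: 1-D arrays of journal indices, one entry per publication.
    profile: function mapping a per-journal count vector to a profile vector.
    """
    rng = np.random.default_rng(seed)
    dists = np.empty(n_boot)
    for i in range(n_boot):
        # Resample each entity's publication list with replacement.
        ra = rng.choice(pubs_a, size=pubs_a.size, replace=True)
        rb = rng.choice(pubs_b, size=pubs_b.size, replace=True)
        ca = np.bincount(ra, minlength=n_journals)
        cb = np.bincount(rb, minlength=n_journals)
        dists[i] = np.linalg.norm(profile(ca) - profile(cb))
    return np.percentile(dists, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# Hypothetical toy data: five journals on a 2-D map plus a similarity matrix.
coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
sim = np.eye(5) + 0.1                      # symmetric stand-in similarity
panel_pubs = np.array([0, 0, 1, 4])        # one journal index per publication
group_pubs = np.array([2, 3, 3, 4, 4])

lo, hi = bootstrap_distance_ci(panel_pubs, group_pubs,
                               lambda c: barycenter(coords, c), n_journals=5)
print(f"barycenter distance, 95% CI: [{lo:.3f}, {hi:.3f}]")

lo, hi = bootstrap_distance_ci(panel_pubs, group_pubs,
                               lambda c: sapv(sim, c), n_journals=5)
print(f"SAPV distance, 95% CI: [{lo:.3f}, {hi:.3f}]")
```

Passing a different profile function switches between the barycenter and SAPV variants without changing the resampling logic, which mirrors how the two approaches share the same distance and bootstrap machinery.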


Notes

  1. The Science and Social Science Editions 2011 contain 8281 and 2943 journals, respectively. Of these journals, 549 are contained in both databases.

  2. http://www.numpy.org/ and http://scipy.org.


Acknowledgments

The authors thank Ronald Rousseau for stimulating and insightful suggestions related to the topic of the paper, and Thomson Reuters for making the journal citation data available. This investigation has been made possible in part by the financial support of the Flemish government to ECOOM. The opinions expressed in the paper are the authors' and not necessarily those of the government. We thank the reviewers for their constructive remarks.

Author information

Corresponding author

Correspondence to A. I. M. Jakaria Rahman.

Electronic supplementary material

Supplementary material 1 (PDF 2496 kb)

About this article

Cite this article

Rahman, A.I.M.J., Guns, R., Leydesdorff, L. et al. Measuring the match between evaluators and evaluees: cognitive distances between panel members and research groups at the journal level. Scientometrics 109, 1639–1663 (2016). https://doi.org/10.1007/s11192-016-2132-x
