What about excellence in teaching? A benevolent ranking of universities

  • Published in: Scientometrics

Abstract

Existing university rankings apply fixed, exogenous weights derived from a theoretical framework or from stakeholder or expert opinions. Fixed weights cannot satisfy all requirements of a ‘good ranking’ as set out in the Berlin Principles. Moreover, as the strengths of universities differ, the weights in a ranking should differ as well. This paper proposes a fully nonparametric methodology to rank universities, in line with the Berlin Principles. It assigns to each university the weights that maximize (minimize) the impact of the criteria on which the university performs relatively well (poorly). The method accounts for background characteristics of universities and evaluates which characteristics affect the ranking; in particular, it accounts for the level of tuition fees, an English-speaking environment, size, and research or teaching orientation. In general, medium-sized universities in English-speaking countries benefit from the benevolent ranking, whereas rankings with fixed weighting schemes reward large, research-oriented universities. Swiss and German universities in particular improve their position significantly in a more benevolent ranking.

Fig. 1

Notes

  1. While the term ‘university’ seems to neglect that (1) higher education may be supplied by institutions of other names (e.g., polytechnics) and (2) research may be supplied by institutions of other names (e.g., Max Planck or CNRS institutes in Germany and France, or Academy institutes in Eastern Europe), this paper uses the term ‘universities’ for all those institutions.

  2. Note that we do not argue for ‘compensating’ universities for variables they can influence, but only for variables that are largely exogenous (e.g., the result of government decisions). On the one hand, if university performance is considered an ‘absolute competition’, one can argue that there is no need to account for background characteristics; if so, the proposed model can easily be adapted to neglect exogenous conditions. On the other hand, if performance is a relative competition, the proposed model aims to provide a ranking that facilitates a fair comparison among universities (see De Witte and Rogge 2010, 2011 for an extensive discussion). We present both the ‘compensated’ and the ‘uncompensated’ ranking below.

  3. For an overview see www.shanghairanking.com/resources.html.

  4. http://www.topuniversities.com/world-university-rankings/english-language-research-still-big-advantage-global-rankings.

  5. At least in the short run, fees are rigid for all universities.

  6. Note that the THES has applied a different methodology since 2010.

  7. For more see http://www.topuniversities.com/university-rankings/world-university-rankings/methodology/classifications.

  8. For more see http://www.topuniversities.com/university-rankings/world-university-rankings/methodology/classifications.

  9. In particular, BOD formally corresponds to a DEA model where the input values are a vector of ones.

  10. Note that the proposed model can easily be extended with weight restrictions (e.g., Cherchye et al. 2007). We decided against this in order to maintain the endogenous character of the weights.
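The benefit-of-the-doubt (BoD) model referred to in notes 9 and 10 can be sketched as a small linear program: each university is scored with the weights most favourable to it, subject to no university scoring above one. The sketch below is an illustration only, not the authors' implementation; the function name `bod_scores` and the indicator matrix `Y` are hypothetical names introduced for this example.

```python
# Benefit-of-the-doubt composite indicator as a linear program.
# Formally this is a DEA model with all input values set to one
# (note 9): maximize w . Y[k] subject to Y w <= 1 and w >= 0.
import numpy as np
from scipy.optimize import linprog

def bod_scores(Y):
    """Y: (n_units, n_criteria) array of normalized performance indicators."""
    n_units, n_criteria = Y.shape
    scores = []
    for k in range(n_units):
        # maximize w . Y[k]  <=>  minimize -w . Y[k]
        res = linprog(c=-Y[k], A_ub=Y, b_ub=np.ones(n_units),
                      bounds=[(0, None)] * n_criteria)
        scores.append(-res.fun)
    return np.array(scores)

# Hypothetical data: three universities scored on two normalized
# criteria, say teaching (first column) and research (second column).
Y = np.array([[0.9, 0.3],
              [0.5, 0.5],
              [0.2, 1.0]])
print(bod_scores(Y))  # both specialized universities reach the maximal score of 1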

References

  • Altbach, P. (2006). The dilemmas of ranking. International Higher Education, 42, 2–3.

  • Billaut, J.-C., Bouyssou, D., & Vincke, P. (2010). Should you believe in the Shanghai ranking? Scientometrics, 84(1), 237–263.

  • Boulton, G. (2010). University rankings: Diversity, excellence and the European initiative. League of European Research Universities, Advice Paper No. 3, June 2010.

  • Bowman, N. A., & Bastedo, M. N. (2010). Anchoring effects in world university rankings: Exploring biases in reputation scores. Higher Education, 61(4), 431–444.

  • Cazals, C., Florens, J. P., & Simar, L. (2002). Nonparametric frontier estimation: A robust approach. Journal of Econometrics, 106(1), 1–25.

  • Charnes, A., Cooper, W. W., & Rhodes, E. (1978). Measuring the efficiency of decision making units. European Journal of Operational Research, 2, 429–444.

  • Cheng, Y., & Liu, N. (2008). Examining major rankings according to the Berlin principles. Higher Education in Europe, 33(2–3), 201–208.

  • Cherchye, L., Moesen, W., Rogge, N., & Van Puyenbroeck, T. (2007). An introduction to ‘benefit of the doubt’ composite indicators. Social Indicators Research, 82, 111–145.

  • CHERI. (2008). Counting what is measured or measuring what counts? League tables and their impact on higher education institutions in England. Report to HEFCE by the Centre for Higher Education Research and Information (CHERI), Open University, and Hobsons Research, Issues Paper, April 2008/14.

  • Clarke, M. (2007). The impact of higher education rankings on student access, choice, and opportunity. Higher Education in Europe, 32(1), 59–70.

  • Coelli, M. (2009). Tuition fees and equality of university enrolment. Canadian Journal of Economics, 42(3), 1072–1099.

  • Daraio, C., & Simar, L. (2005). Introducing environmental variables in nonparametric frontier models: A probabilistic approach. Journal of Productivity Analysis, 24(1), 93–121.

  • De Witte, K., & Kortelainen, M. (2013). What explains performance of students in a heterogeneous environment? Conditional efficiency estimation with continuous and discrete environmental variables. Applied Economics, 45(17), 2401–2412.

  • De Witte, K., & Rogge, N. (2010). To publish or not to publish? On the aggregation and drivers of research performance. Scientometrics, 85(3), 657–680.

  • De Witte, K., & Rogge, N. (2011). Accounting for exogenous influences in performance evaluations of teachers. Economics of Education Review, 30(4), 641–653.

  • Eccles, C. (2002). The use of university rankings in the United Kingdom. Higher Education in Europe, 27(4), 423–432.

  • Enserink, M. (2007). Who ranks the university rankers? Science, 317, 1026–1028.

  • Farrell, M. J. (1957). The measurement of productive efficiency. Journal of the Royal Statistical Society, 120(3), 253–290.

  • Federkeil, G. (2008). Rankings and quality assurance in higher education. Higher Education in Europe, 33(2–3), 219–223.

  • Griffith, A., & Rask, K. (2007). The influence of the U.S. News and World Report collegiate rankings on the matriculation decision of high-ability students: 1995–2004. Economics of Education Review, 26(2), 1–12.

  • Harvey, L. (2008a). Rankings of higher education institutions: A critical review, editorial. Quality in Higher Education, 14(3), 187–207.

  • Harvey, L. (2008b). Assaying improvement. Paper presented at the 30th EAIR Forum, Copenhagen, Denmark, 24–27 August.

  • Hazelkorn, E. (2007). The impact of league tables and ranking systems on higher education decision making. Higher Education Management and Policy, 19(2).

  • Institute for Higher Education Policy (IHEP). (2007). College and university ranking systems. Global perspectives and American challenges. Washington, DC: Institute for Higher Education Policy.

  • International Ranking Expert Group (IREG). (2006). Berlin principles on ranking of higher education institutions. www.che.de/downloads/Berlin_Principles_IREG_534.pdf.

  • JRC/OECD. (2008). Handbook on constructing composite indicators. Methodology and user guide, OECD Publishing, ISBN 978-92-64-04345-9.

  • Kälvemark, T. (2007). University ranking systems: A critique. Technical report, Irish Universities Quality Board.

  • Marcucci, P. N., & Johnstone, D. B. (2007). Tuition fee policies in a comparative perspective: Theoretical and political rationales. Journal of Higher Education Policy and Management, 29(1), 25–40.

  • Marginson, S. (2007). Global university rankings: Implications in general and for Australia. Journal of Higher Education Policy and Management, 29(2), 131–142.

  • Marginson, S., & van der Wende, M. C. (2007). To rank or to be ranked: The impact of global rankings in higher education. Journal for Studies in International Education, 11(3–4), 306–329.

  • Melyn, W., & Moesen, W. (1991). Towards a synthetic indicator of macroeconomic performance: Unequal weighting when limited information is available. Public Economics Research Paper 17, CES, KU Leuven.

  • Merisotis, J. P. (2002). On the ranking of higher education institutions. Higher Education in Europe, 27(4), 361–363.

  • Michaelis, P. (2004). Education, research and the impact of tuition fees: A simple model. Jahrbuch für Wirtschaftswissenschaften/Review of Economics, University of Augsburg, Department of Economics, Paper 265.

  • Racine, J. S., Hart, J., & Li, Q. (2006). Testing the significance of categorical predictor variables in nonparametric regression models. Econometric Reviews, 25(4), 523–544.

  • Rauhvargers, A. (2011). Global university rankings and their impact. Brussels: European University Association.

  • Sadlak, J., Merisotis, J., & Li, N. C. (2008). University rankings: Seeking prestige, raising visibility and embedding quality—the editors’ views. Higher Education in Europe, 33(2–3), 195–199.

  • Sanoff, A. (1998). Rankings are here to stay: Colleges can improve them. Chronicle of Higher Education, 45(2), 96–100.

  • Sowter, B. (2008). The Times Higher Education Supplement and Quacquarelli Symonds (THES-QS) world university rankings: New developments in ranking methodology. Higher Education in Europe, 33(2–3), 345–347.

  • Stella, A., & Woodhouse, D. (2006). Ranking of higher education institutions. Occasional Publications Series No. 6. Melbourne: AUQA.

  • Taylor, P., & Braddock, R. (2007). International university ranking systems and the idea of university excellence. Journal of Higher Education Policy and Management, 29(3), 245–260.

  • THES. (2005). World university rankings. Times Higher Education Supplement.

  • Tofallis, C. (2012). A different approach to university rankings. Higher Education, 63(1), 1–18.

  • Tulkens, H. (2007). Ranking universities: How to take better account of diversity. CORE Discussion Paper 2007/42.

  • van der Wende, M. (2008). Rankings and classifications in higher education: A European perspective. Higher Education, 23, 49–71.

  • van der Wende, M., & Westerheijden, D. (2009). Rankings and classifications: The need for a multidimensional approach. Higher Education Management and Policy, 22(3), 71–86.

  • van Raan, A. F. J. (2005). Fatal attraction: Conceptual and methodological problems in the ranking of universities by bibliometric methods. Scientometrics, 62(1), 133–143.

  • van Vught, F. (2007). Diversity and differentiation in higher education systems. Presentation at the conference ‘Higher Education in the 21st Century: Diversity of Missions’, Dublin, 26 June 2007.

  • van Vught, F. A., & Westerheijden, D. F. (2010). Multidimensional ranking: a new transparency tool for higher education and research. The Netherlands: Center for Higher Education Policy Studies (CHEPS), University of Twente.

  • Weber, L. (2006). University autonomy: A necessary but not sufficient condition for excellence. Paper presented at the IAU/IAUP Presidents’ Symposium, Chiang Mai, Thailand, 8–9 December.

  • Webster, J. (2001). A principal component analysis of the U.S. News & World Report tier rankings of colleges and universities. Economics of Education Review, 20, 235–244.

  • Zha, Q. (2009). Diversification or homogenization in higher education: A global allomorphism perspective. Higher Education in Europe, 34(3–4), 459–479.

Author information

Corresponding author

Correspondence to Kristof De Witte.

About this article

Cite this article

De Witte, K., Hudrlikova, L. What about excellence in teaching? A benevolent ranking of universities. Scientometrics 96, 337–364 (2013). https://doi.org/10.1007/s11192-013-0971-2
