Introduction

A new perspective on the governance of higher education systems is emerging. Worldwide, relationships between governmental authorities and higher education institutions are changing, particularly because of the increasing importance of information about the learning outcomes and the research impacts produced in higher education. Reliable information on the benefits that the various higher education institutions (and their subunits) offer to their students, funders and society in general is key for their legitimacy, their funding and their competitiveness. Transparency about these benefits is an important ingredient in the governance framework in higher education because it contributes to the quality of decision-making and accountability. In turn, accountability is expected to lead to the (re-)establishment of “guarded trust” in higher education among societal stakeholders (Kohler 2009). However, information needs a succinct yet honest presentation; otherwise, it leads to information overload, especially for stakeholders who are not higher education experts. Designing instruments that fulfil these requirements is no easy task.

There are several reasons for the growing need for information. First, financial contributions made by students, taxpayers and others to higher education are rising. Second, the number and variety of higher education providers and of the (degree and non-degree) programmes they offer are increasing: public and private (not-for-profit and for-profit), traditional higher education institutions and new (e.g. online) providers, national and international offerings. This growing variety makes it increasingly difficult for (prospective) students to decide where and what to study. Likewise, governments wish to be assured that higher education providers in their jurisdiction continue to deliver the quality education and research services needed for their labour markets, businesses, communities, and so on. Third, today’s network society is increasingly characterized by mass individualization, meaning that a higher education institution’s clients (in particular, its students) demand services that are customized to their needs, plans and abilities. Clients, therefore, constantly seek to assess and evaluate the specifics of the services offered, searching for those products and providers that best meet their specific needs.

The result is an increasing demand for transparency. Among students, public authorities and the general public, the need for tools that allow better and broader use of information regarding the services and performances of higher education institutions is growing. Enhancing the transparency of the activities and outcomes of higher education institutions is becoming a central objective in rethinking governance in higher education.

For three decades, several tools have been (re-)designed to increase transparency about quality and relevance of higher education across its missions: education, research, knowledge transfer and community engagement. Some (e.g. accreditation) are policy tools put in place by public authorities, others originate from private initiatives (e.g. rankings produced by media organisations). The European Union, too, supports higher education reform through analysis and “evidence tools” or “transparency tools” (European Commission 2011, 2017). In this chapter, we discuss three higher education transparency tools: accreditation, rankings and performance contracts. We present these tools in the broader context of higher education governance and policy-making, and we analyse how they are reshaped to address the growing need for more transparency in higher education.

Information Asymmetry

The basic theoretical notion underlying the increasing interest in transparency in higher education stems from an (economic) understanding of higher education as an experience good. An experience good is a good or service whose quality can only be judged after consuming it. This contrasts with the textbook case of “search goods”, whose quality can be judged by consumers in advance. Experience goods are typically purchased based upon reputation and recommendation, since physical examination of the good is of little use in evaluating its quality. It might even be argued that higher education is a credence good: a product, such as doctors’ consultations and vitamins, whose utility consumers do not know even after consumption (Bonroy and Constantatos 2008; Dulleck and Kerschbamer 2006). The value of credence goods is largely a matter of trust. Moreover, the “production” of higher education takes place in the interaction between teacher (or, e.g., an online learning platform) and learner or student. Whether students after graduation really know how good teaching has been in enhancing their knowledge, skills and other competencies is subject to debate. In any case, we may safely assume that higher education clients cannot know its quality in advance (van Vught et al. 2012). That higher education is an experience or credence good underpins the importance of trust.

Looking at it from the perspective of the provider, academics (as teachers) may argue that they know better than any other stakeholder what it takes to deliver high-quality higher education; and surely, they have a case. At the same time, this view implicitly perpetuates—and justifies—information asymmetry between client and provider. According to the principal–agent theory, information asymmetry might tempt academics and higher education institutions not to maximise the quality of their education services. For instance, universities might—and do—exploit information asymmetries to cross-subsidize research activity using resources intended for teaching (James 1990), e.g. tuition fees.

In principal–agent theory, several means are considered to protect clients and society against abuse of information asymmetries. Broadly, these means fall into three categories: limiting the agents’ behaviour to what is desirable, for instance through regulation; contracts that guarantee that the expected quality in all its dimensions will be provided; and alleviating the information asymmetry itself (Winston 1999). All three categories can be found in higher education. Some of the policy tools in practice combine aspects of affecting behaviour and of increasing transparency.

Regulation of behaviour—by governments or by the providers themselves—may involve rules on service quality, standards for teaching, qualifications frameworks, quality assurance requirements, or conditions imposed on providers. Alternatively, incentives may be devised to reward desirable behaviour and sanction undesirable behaviour; performance contracts agreed between principal and agent belong to this category. Finally, policy may aim to alleviate the information asymmetry itself by focusing on the provision of information, i.e. on transparency tools. In the absence of objective information about the quality of higher education, proxies must be used. Signalling or labelling is a common proxy; the experience of current or previous clients is another. Accreditation, quality assessment, student guides and listings of recognized providers are some obvious examples in the area of higher education consumer protection. Tools such as monitoring, screening, signalling and selection may be initiated by the government, but may also be implemented by agencies acting independently of the government or created by the providers themselves.

The emergence of new or redesigned approaches that focus higher education providers on producing value for society signals a new approach to the governance of higher education. To better understand the role and functioning of these tools, we first turn to the emergence of networked governance, a recent perspective on higher education governance.

Networked Governance

Many governments, because of the increasing complexity of higher education systems and their expanding array of functions, are neither capable nor willing to exert centralized control over higher education. They acknowledge, moreover, that local diversities exist among higher education institutions and realise that these providers must have regard for the needs of their own stakeholders and local clienteles, in contexts ranging from rural areas to metropolises and with varying connections to the globalised knowledge economy. Accordingly, governments are seeking new governance approaches that allow higher education institutions to refine and adapt national policies to reflect those differences of locality, mission, etc. Some governments seek to empower students and external stakeholders to exert more influence over higher education institutions, while other governments continue to rely on more top-down regulation. Yet other authorities look for smart governance approaches that combine vertical steering (traditional public administration) with elements of market-type mechanisms (new public management).

The concept of networked governance (Stoker 2006) responds to this diversity of needs and approaches: it combines a “state supervisory government” model—promising increased autonomy for higher education institutions—with a new focus on (local) clients. In this emerging governance approach, higher education institutions negotiate the services they will provide with their local network (including students, other local stakeholders, government authorities, and so on). At the same time, all higher education institutions constitute a network in which they act partly autonomously, partly collectively and partly in response to the coordinating centralised “broker”, i.e. the governmental authority (Jones et al. 1997; Provan and Kenis 2007). Networked governance emerged out of the New Public Management (NPM) paradigm of the 1980s and 1990s. It widened the perspective from NPM’s focus on efficiency and effectiveness to include public values such as social equity, societal impact (relevance, producing value from knowledge) and addressing the diverse needs of the large variety of clienteles. Networked governance also relies on negotiation, collaboration and partnerships, and much less on NPM’s uniform, one-size-fits-all, centralised approach. The focus lies on the co-creation of education and research by higher education institutions together with their relevant stakeholders, while keeping an eye on clients’ individual needs and solutions (Benington and Moore 2011; Stoker 2006).

Government remains a key actor in this governance model. The “supervisory government” wants to be assured that national interests are served and that clients’ (in particular students’) interests are protected. This implies some limitations on the autonomy of higher education institutions, as well as renewed demands for accountability. Government also demands transparency, as it is a precondition for accountability and allows negotiation and the building of public trust in higher education.

Accreditation

We begin our discussion of transparency tools with the oldest tool of this kind in higher education. Currently, accreditation is probably the most common form of external quality assurance in higher education. From our transparency perspective, accreditation is an effort to create and disseminate information on the quality of higher education. The distinguishing characteristic of accreditation is that external quality assessment leads to a summary judgment (pass/fail, or graded) that has consequences for the official status of the institution or programme. Often, accreditation is a condition for the recognition of degrees and for public funding. Accreditation is the simplest and, therefore, prima facie the most transparent form that quality assurance can take. However, transparency is an additional aim of quality assurance—its primary aim is to assure that quality standards are met.

When accreditation and other forms of external quality assurance were introduced in governance relations in Western higher education systems (that is: since the 1950s in the USA and around 1985 in Europe), their focus was on what higher education institutions were offering, measured by input indicators such as numbers and qualifications of teaching staff, size of libraries, or staff–student ratios. Study programme managers had to describe the curriculum and—in modern parlance—intended learning outcomes. Such input indicators could relatively easily be collected from existing administrative sources. However, the relevance of input indicators for making the quality of the teaching and learning experience (i.e. the teaching and learning process) more transparent, or for exposing the quality of outputs (e.g. degree completions) and outcomes (e.g. graduate employment, or continuation to advanced study) was questioned. Subsequently, various adaptations to accreditation have been introduced.

In Europe as well as in the USA, and in line with New Public Management, governments increasingly wanted to know about outputs and outcomes, stressing value for money and the wish to protect consumers’ (students’) rights to good education. Increasingly, therefore, accreditation standards began to include measures of institutional educational performance, such as drop-out or time-to-degree indicators. From the mid-1980s onwards, this movement led in the USA to the coupling of accreditation with student assessment (Lubinescu et al. 2001), while parallel developments ensued in Europe, especially since the articulation of the European Standards and Guidelines for Quality Assurance (European Association for Quality Assurance in Higher Education 2005; European Association for Quality Assurance in Higher Education et al. 2015). From a governmental, accountability perspective, the focus was mostly on graduation rates (or their complement, drop-out rates), and in the USA also on student loan defaults (since graduates who cannot pay back their federal loans pose a financial risk to the government).

More recently, after many years of debate about the conservatism and limited pertinence of accreditation in the USA, and following incremental policy changes, the so-called Bennet–Rubio Bill was proposed in 2015 (and reintroduced in 2017) to focus accreditation on outcomes-based quality reviews, with an emphasis on demonstrating—presumably also to the public—measures of student learning, completion and return on investment.

In several European countries (e.g. Sweden and the Netherlands), the focus of accreditation has recently shifted towards achieved learning outcomes. The degree to which study programmes succeed in making students learn what the curriculum intends to teach is assumed to present a more transparent, more pertinent and more locally differentiated picture of quality. However, prospective students derive little information from the accreditation status of a study programme, as it is a binary piece of information. Additionally, some academics regard this approach as an infringement of their academic freedom rather than as aiding quality enhancement. The emphasis on achieved learning outcomes redirects accreditation more towards the diversified information needs of students, i.e. towards higher education’s public value, and intends to enhance transparency. The additional effort needed to assess achieved learning outcomes may indeed produce better and more useful information, i.e. higher levels of transparency. However, this is only the case if the assessment of learning outcomes at the programme level is comparative in nature, preferably on an international scale, and the results are made public. Today’s global order in higher education is leading to huge information asymmetry challenges, which necessitate an international, comparative assessment of students’ learning outcomes based on valid and reliable learning metrics (Van Damme 2015).

The recent move in several European countries, including Germany, towards institution-level accreditation reduces transparency for clients and again increases the information asymmetry in favour of higher education providers, unless other arrangements ensure the publication of programme-level quality information.

Admittedly, whether students are interested in measures of achieved learning is another matter. Even if students behave as rationally as policy would have it, they would not only be interested in outcomes in the distant (uncertain) future but also in characteristics of the educational process and its context. In other words, there are good reasons for students’ interest in matters of education delivery, methods and technologies of teaching, intensity of teaching, teaching staff quality, number and accessibility of education facilities, availability of educational support and so on. Students (and others) will most likely also be interested in current students’ satisfaction with such factors, allowing them to benchmark satisfaction scores across different institutions and thus to make proxy assessments of course quality. However, in accreditation systems, such information is often hard to find. Unlocking this information is one of the challenges in further redesigning accreditation mechanisms into stronger transparency tools. Over the past two decades, various semi-public and private information websites have been developed to do just this, e.g. the “Die Zeit” ranking in Germany or Studychoice123 in the Netherlands. The UK’s recent Teaching Excellence Framework (TEF) leads to similar information. The German and Dutch approaches rely on detailed, multi-dimensional information, while the UK approach is to simplify all the information into three ratings (bronze, silver or gold provision). There is a trade-off between prima facie transparency for the masses (UK) and in-depth information for an interested audience (Germany and the Netherlands).

Meanwhile, allowing cross-institutional comparisons based on student satisfaction scores and student outcomes is also one of the objectives potentially addressed by university rankings.

Rankings

Whereas quality assurance and accreditation were introduced as transparency instruments mainly on the initiative of governments (Brennan and Shah 2000), university rankings have appeared mostly through private (media) initiatives. Rankings emerged in reaction to the binary (pass/fail recognition) information resulting from accreditation. They intend to address a need for more fine-grained distinctions in a context where many institutions and programmes pass the basic accreditation threshold.

Rankings, in this way, may assist students in making choices. They can be helpful to potential customers of higher education institutions as well as to policymakers and politicians. In addition, they offer snapshot pictures of the performance of higher education institutions. Such prima facie understandable league tables appear attractive to the general public.

It is widely recognized that current global rankings such as the Times Higher Education, QS or Shanghai rankings, however controversial, are here to stay, and that global university league tables in particular have a considerable impact on decision-makers worldwide, including those in higher education institutions (Hazelkorn 2011). Rankings reflect the increased international competition among universities and countries for talent and resources; simultaneously, they reinforce that competition. On the positive side, they urge decision-makers to think bigger and set the bar higher, especially in the research universities that feature heavily in the current global league tables. Yet, major concerns persist about the rankings’ methodological underpinnings and their drive towards stratification rather than diversification.

The rankings that first appeared in the USA and later elsewhere in the world have received much criticism (Dill 2009; Hazelkorn 2011). We distinguish the following sets of problems surrounding the familiar global rankings (Federkeil et al. 2012). First, traditional university rankings do not distinguish their various users’ different information needs but provide a single, fixed ranking for all. Second, they ignore intra-institutional diversity, presenting higher education institutions as a whole, while research and education are “produced” in faculties, hospitals, laboratories, etc., each of which may exhibit quite different qualities. Third, rankings tend to use available information on a narrow set of dimensions only, overemphasizing research. This suggests to lay users that more, and more frequently cited, research publications reflect better education. Fourth, the bibliometric databases used for the underlying information on research output and impact on peer researchers (mostly Web of Science and Scopus) mostly contain journal articles, while journal articles are a type of scientific communication that is relevant for many natural science and medicine disciplines, but less so for areas like engineering, the humanities and the social sciences. Moreover, the journals covered in these databases are mostly English-language journals, largely disregarding other languages. Fifth, the diverse types of information and indicators that underlie rankings are weighted by the ranking producers and lumped into a single composite value for each university. This is done without any explicit—let alone empirically corroborated—theory on the relative importance and priorities of the indicators. Changing the ranking methodology—not uncommon in some rankings—produces different scores for higher education institutions even though their actual performance does not change. Sixth, the composite indicator value is converted into a position in a league table, suggesting that #1 is better than #2 and that #41 is better than #42; thus, “random fluctuations may be misinterpreted as real differences” (Müller-Böling and Federkeil 2007).
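To make the fifth problem concrete, the following minimal sketch (in Python, with entirely hypothetical institutions, indicator scores and weights) shows how one and the same set of underlying scores can yield opposite league-table orders once the ranking producer changes the weighting scheme:

```python
# Hypothetical indicator scores (0-100) for three fictional institutions.
universities = {
    "Univ A": {"citations": 95, "reputation": 60, "staff_ratio": 50},
    "Univ B": {"citations": 70, "reputation": 80, "staff_ratio": 70},
    "Univ C": {"citations": 55, "reputation": 65, "staff_ratio": 95},
}

def league_table(weights):
    """Rank institutions by a weighted sum of their indicator scores."""
    composite = {
        name: sum(weights[ind] * score for ind, score in scores.items())
        for name, scores in universities.items()
    }
    return sorted(composite, key=composite.get, reverse=True)

# Two equally arbitrary weighting schemes, as a ranking producer might
# adopt in successive editions:
research_heavy = {"citations": 0.6, "reputation": 0.2, "staff_ratio": 0.2}
teaching_heavy = {"citations": 0.2, "reputation": 0.2, "staff_ratio": 0.6}

print(league_table(research_heavy))  # ['Univ A', 'Univ B', 'Univ C']
print(league_table(teaching_heavy))  # ['Univ C', 'Univ B', 'Univ A']
```

The institutions’ “performance” is identical in both runs; only the weights differ, yet the league table is completely reversed.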

Given these criticisms, some analysts (including this chapter’s authors) have endeavoured to construct alternative rankings. In recent years—partly due to these efforts—not only have innovative rankings appeared, but the methodology of the traditional global rankings has also improved: information on individual areas (fields, disciplines) has been added to the global rankings and the dimensions of the data included have been broadened.

In particular, U-Multirank (van Vught and Ziegele 2012) has addressed the shortcomings of the traditional global rankings. As a transparency tool, this ranking is very much in line with a more networked governance approach. First, U-Multirank takes a multi-dimensional view of university performance: when comparing higher education institutions, it informs about the separate activities the institution engages in: teaching and learning, research, knowledge transfer, international orientation and regional engagement. Second, U-Multirank invites its users to compare institutions with similar profiles, thus enabling comparison on equal terms rather than “comparing apples with oranges”. It then allows users to choose from a menu of performance indicators, without combining indicators into a weighted score or a numbered league table position, giving users the chance to create rankings relevant to their information needs. Third, U-Multirank assigns scores on individual indicators using five broad performance groups (“very good” to “weak”) to compensate for imperfect comparability of information internationally. Finally, U-Multirank complements institutional information pertinent to the whole institution with a large set of subject (field-based) performance profiles, focusing on particular academic disciplines or groups of programmes, using indicators specifically relevant to the separate subjects (e.g. laboratories in experimental sciences, internships in professional areas). Whereas transparency on individual fields is particularly important to, e.g., students looking for an institution that offers the subject they want to study, other users (such as university presidents, researchers, policymakers, businesses and alumni) may be interested in information about the performance of institutions as a whole.
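The following sketch illustrates this design (hypothetical institutions and scores; the five group labels follow U-Multirank’s published “very good” to “weak” scale, but the simple fixed thresholds and indicator names below are our own illustrative assumptions, not U-Multirank’s actual, distribution-based grouping method). Each indicator is reported separately, and the user, not the ranking producer, selects which indicators matter:

```python
# Five broad performance groups instead of a composite score.
GROUPS = ["weak", "below average", "average", "good", "very good"]

def performance_group(score, lo=0.0, hi=100.0):
    """Map a 0-100 indicator score to one of five performance groups."""
    band = (score - lo) / (hi - lo) * len(GROUPS)
    return GROUPS[min(int(band), len(GROUPS) - 1)]

institutions = {  # hypothetical indicator scores per dimension
    "Univ A": {"teaching": 82, "research": 95, "regional_engagement": 35},
    "Univ B": {"teaching": 74, "research": 55, "regional_engagement": 88},
}

def profile(name, chosen_indicators):
    """Report performance groups only for the indicators a user selected."""
    scores = institutions[name]
    return {ind: performance_group(scores[ind]) for ind in chosen_indicators}

# A prospective student interested in teaching and regional engagement:
print(profile("Univ A", ["teaching", "regional_engagement"]))
# {'teaching': 'very good', 'regional_engagement': 'below average'}
print(profile("Univ B", ["teaching", "regional_engagement"]))
# {'teaching': 'good', 'regional_engagement': 'very good'}
```

No weighted total is computed and no single “winner” emerges; which institution looks better depends on the indicators the user chooses, which is precisely the multi-dimensional, user-driven point.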

The basic characteristics of U-Multirank empower stakeholders to compensate for their asymmetrical information position vis-à-vis higher education providers. In that sense, it embodies principles of the networked governance model.

Performance Contracts

Performance contracts are agreements between individual higher education institutions and their government(s) or funding authorities that tie (part of) the institution’s public funding to its ambitions in terms of performance. Performance contracts allow higher education institutions to receive funding in return for their commitment to fulfil several objectives, as measured by specific target indicators agreed upon between the relevant governmental authority and the institution (Salmi 2009).

Delivering on the performance contract leads to a financial reward for the institution, thus encouraging it to improve its performance and to be forward-looking. Usually, such contracts invite higher education institutions to elaborate their strategic plans, outlining their vision of the future and the specific actions directed to reaching their strategic objectives. Performance contracts allow institutions to select and negotiate their goals with an eye upon their individual context, strengths and key stakeholders. Thus, the primary aim of performance contracts is to reward the desired behaviour, increasing mission diversity in the higher education system and increasing performance in terms of quality and relevance. Secondarily, largely through their use of indicators, they also seek to increase transparency for the various clients of the institution.
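The funding logic behind such a contract can be sketched as follows (a deliberately simplified, hypothetical illustration: the indicator names, target values and equal-share payout rule are our own assumptions and do not reproduce any particular country’s scheme):

```python
# Hypothetical contract: agreed targets and achieved values per indicator.
contract = {
    "degree_completion_rate":         {"target": 0.70, "achieved": 0.73},
    "student_satisfaction":           {"target": 0.75, "achieved": 0.71},
    "knowledge_transfer_income_meur": {"target": 12.0, "achieved": 13.5},
}

def conditional_funding(contract, conditional_budget):
    """Pay an equal share of the conditional budget for each target met."""
    share = conditional_budget / len(contract)
    met = [ind for ind, v in contract.items() if v["achieved"] >= v["target"]]
    return share * len(met), met

payout, met = conditional_funding(contract, conditional_budget=6.0)  # M EUR
print(f"Targets met: {met}; conditional funding awarded: {payout:.1f} M EUR")
# Targets met: ['degree_completion_rate', 'knowledge_transfer_income_meur'];
# conditional funding awarded: 4.0 M EUR
```

Real schemes differ in the size of the conditional budget share, in whether payouts are graduated rather than all-or-nothing per indicator, and in how targets are (re)negotiated; the sketch only shows the basic principal–agent incentive at work.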

Performance contracts—under several names and in various forms—have been implemented in many countries, such as Australia, Austria, some Canadian provinces, Denmark, Finland, Germany, Hong Kong, Ireland, Japan, the Netherlands, Scotland, and some states of the USA (de Boer et al. 2015; Jongbloed and Vossensteyn 2016). So far, in practice, most performance agreements have stressed the accountability and performance dimensions and have not yet played a major role in increasing transparency. However, in some countries, e.g. the Netherlands, Ireland and Finland, the contracts did have a transparency impact and successfully drew public attention to the goals that higher education institutions were expected to meet in return for the public funds they received. In the Netherlands, the contracts caused institutions to publish information about their efforts and successes in areas like improving students’ degree completion (Reviewcommissie Hoger Onderwijs en Onderzoek 2017). Transparency also improved in other areas, because the contracts included performance in research and knowledge transfer, as well as how institutions related to their stakeholders or clients. While the second generation of performance contracts in the Netherlands is under debate at the time of writing (2017), it will probably include an increased role for negotiations between higher education institutions and their local or regional stakeholders, thus further empowering those stakeholders while reducing national, homogenising tendencies.

Performance contracts represent the culmination of a negotiation process between university leaders and (governmental) stakeholders to ensure the convergence of strategic institutional goals with national (including regional) policy objectives. As such, performance contracts are an interactive instrument of the networked governance model. In addition, they stimulate higher education institutions to reach out to their own specific clients and stakeholders, thus offering an effective basis for enhanced transparency.

Conclusion

In this chapter, we presented three recently (re-)designed transparency tools for higher education—developed to empower clients and key stakeholders, to strengthen the provision of higher education and to better communicate the various dimensions of quality, performance and public value to external stakeholders. These tools fit into a more interactive, networked type of governance for higher education. This paradigm explicitly acknowledges the diverse information needs of a wider variety of client groups than just the central government. The networked governance view suggests a combination of horizontal and vertical steering approaches (Jongbloed 2007), limiting to some extent the providers’ autonomy by stressing higher education’s contribution to public values, but without reverting to the top-down hierarchical steering of traditional public administration and management models. It recognises that higher education institutions act in a multi-centric network and that they have their own steering capacity in a collective setting. Yet, the government has a special role in protecting and supporting students and other stakeholders against rent-seeking behaviour and other perverse effects. The orientation of the networked governance paradigm towards creating public value acknowledges, and tries to rectify, information asymmetries between higher education providers, on the one hand, and students, government and other clients and stakeholders, on the other, by encouraging transparency. Sharing information, among other things through ICT tools such as ranking websites, is a key characteristic of networked governance. Information sharing increases trust, which enables stakeholders to behave more effectively and efficiently in the network (Schwaninger et al. 2017). Establishing more direct, “horizontal” relationships of information sharing between higher education institutions and their regional stakeholders, rather than channelling accountability only “vertically” through government, strengthens this approach and is intended to create more “face-to-face” relationships; this, too, should support the re-establishment of public trust in higher education.

Our conclusions regarding the three transparency tools are as follows. Accreditation remains a crude transparency instrument, providing little information value to clients beyond the basic, though crucial, protection against substandard provision. The refinement that stresses public value-oriented ideas, namely focusing accreditation on achieved learning outcomes, would make accreditation more directly relevant to (prospective) students, but cannot overcome this basic crudeness. Moreover, designing such apparently more relevant accreditation schemes remains a challenge, given academics’ resistance to their intrusiveness and the effort needed to design and incorporate sensible indicators of learning outcomes.

Regarding rankings, we have argued that some recent initiatives—in particular U-Multirank—have been designed to overcome the drawbacks of traditional global university rankings. Multi-dimensional, user-driven rankings have the potential to function as rich transparency tools: client-driven and diversity-oriented instruments. However, such a transparency tool is only as useful as the information it offers to users. Specifically, the geographical scope of the institutions in U-Multirank must be extended, and its underlying data on the higher education institutions’ value added in terms of education performance (e.g. learning outcomes, societal engagement of higher education institutions) need further elaboration. This requires close collaboration between higher education researchers, evaluation organisations and rankers, on the one hand, and the institutional and external (e.g. national statistics offices) providers of data, on the other.

Performance contracts have the potential to contribute to interactive, networked coordination in higher education systems and to increased transparency at system and institutional levels. Their transparency function remains secondary to their performance-incentivising function. However, instead of just providing information, they may empower stakeholders to actually influence what higher education institutions do for them. If local stakeholders are given a role in the specification of the contracts (through “horizontal” arrangements), more attention to realising their values may ensue, and the links between higher education institutions and their regions may be strengthened.

This brings us to questions of scale for the various transparency tools. The preceding paragraphs intimated how performance contracts might be applied at national or regional levels. Given their connection to regional stakeholders’ interests or national political priorities, they do not lend themselves easily to European or international comparability and transparency.

Accreditation is usually defined within a jurisdiction, at least with regard to its consequences, such as recognition of degrees and allocation of public funding. As long as states remain the primary sources of legitimacy and funds, the jurisdiction will remain the primary level of consideration for accreditation, even if the operation of accreditation procedures might be outsourced to private-law organisations or to quality assurance agencies located in foreign countries. The ministers’ statement in the Yerevan communiqué explicitly aims to stimulate European openness in this respect: “to enable our higher education institutions to use a suitable EQAR registered agency for their external quality assurance process, respecting the national arrangements for the decision making on QA outcomes” (EHEA ministerial conference 2015).

At the same time, accreditation is eminently a tool for international transparency. In fact, achieving international visibility and recognition was a major motivation for introducing it in many West-European countries in the years following the Bologna Declaration (Schwarz and Westerheijden 2004). That it was a crude transparency tool and that it only provides superficial comparability internationally may not have been realised in policy circles at the time.

Finally, it was precisely to overcome the drawbacks of accreditation that international rankings emerged, as we detailed above. Rankings may be organised nationally, with national topic foci or national sets of institutions and programmes to compare, but the most debated ones are precisely the global university rankings. National rankings make sense, as the large majority of higher education decisions are made within a jurisdiction: on the whole, less than two percent of the world’s students are internationally mobile, and the large majority of research funding and commercial contracts with higher education institutions are also arranged within national frameworks. Yet, for many institutions’ prestige—the major currency in social interactions—the world scale is decisive, and global rankings play a major role.

Despite the challenges faced in further developing the networked governance perspective and its accompanying transparency instruments, we have indicated how redesign and redeployment of transparency tools show great potential in this perspective. Transparency lies at the heart of the dynamics in the networked governance of higher education systems. Therefore, working on further improving transparency tools is crucial for increasing the public value of higher education in the years to come.