Introduction

This chapter, in addition to forming the background for the coming chapters in this book, contextualises the Swedish national evaluation and quality assurance (EQA) systems and explicates their national reform contexts. Higher education is an area that European and global policy efforts have greatly influenced in recent years. The Bologna Declaration and the joint work on developing common indicators for assessing quality in higher education are far-reaching examples of such policy work. In parallel with other policy areas, contemporary education policy strongly promotes the idea of systematic EQA and is part of what Power (1997) has discussed as the “audit society”, Dahler-Larsen (2012) as the “evaluation society”, and Neave (1998) as the “evaluative state”. These European policy efforts have also influenced the Swedish national context. Thus, we recognise the importance of global and European influences (see, e.g. Ozga et al. 2011; Grek and Lindgren 2015), but in this chapter, we concentrate on exploring the relationship between national EQA systems and governing in the Swedish context. This account, we believe, can also provide insight into how changes in other countries’ EQA systems are part of the contemporary governing of higher education.

Over the last few decades in Swedish higher education, various national EQA systems have been decided on, developed, and put into effect. Over time, these systems have displayed diverse political purposes and directions and exhibited different designs. The ramifications of the policy context for the EQA systems, together with the designs of these systems, form part of the complex and comprehensive work of governing. Here, our objective is to provide a historical account and to describe and analyse the national EQA systems for higher education and their designs from 1995 up to the 2011–2014 system, as well as their relation to governing. We explore the political process leading up to the 2016 system in the chapter “Hayek and the Red Tape: The Politics of Evaluation and Quality Assurance Reform – From Shortcut Governing to Policy Rerouting” and scrutinise this system in detail in the chapter “Re-launching National Evaluation and Quality Assurance: Expectations and Preparations”.

In this chapter, the following questions directed our analysis and also organised our presentation of the national EQA systems:

  • What is being evaluated? Why? By whom? How? What are the consequences in terms of expectations?

  • What are the implications for higher education governing?

In the following, we present a theoretical frame for our understanding of the different national EQA systems that have evolved over the past 25 years. Thereafter, we describe the major reforms in Swedish higher education as a context for the design and use of the EQA systems. We divide this description into the periods during which the respective designs were in operation. Finally, we discuss the EQA systems in relation to governing by expectations and, inductively, in relation to Dahler-Larsen’s (2012) idea of evaluation machines (see next section).

A Theoretical Approach to Evolving National Evaluation Systems

As stated in the first chapter, we understand governing as a verb and emphasise the work carried out in different ways by different actors in diverse places and spaces and by various means to reach certain aims (Clarke 2015). Policy intentions and aims are expressed in several policy documents and reforms, and for EQA systems, such intentions are also embedded in their designs or “infrastructures of rules” (Fourcade 2010, pp. 571–572). The designs prescribe what is to be done (what is to be evaluated, how, by whom, and for what reasons [what is hoped to be achieved]), orient the attention of those taking part in these processes, and influence their behaviours, activities, and perceptions of their enterprises (Dahler-Larsen 2014; Segerholm 2001).

For the purpose of this chapter, we recognise the dynamic relationship underlying both institutional reproduction and change (Mahoney and Thelen 2010). Change does not necessarily need to encompass the whole institution but can target and influence parts of it. EQA systems may change gradually or more dramatically, and these dynamics hold implications not only for governing but also for our understanding of expectations of what should count as, for instance, “good quality” (Hopmann et al. 2007). Thelen (2003, p. 213) argues that models of path dependency, in which change occurs as rapid and drastic punctuations, need to be complemented by other tools that enable us to account for a more gradual and dynamic relationship, as these processes may be more incremental than usually proposed. Proceeding from such an understanding of stability versus change, we use a set of concepts that capture this dynamic in different ways.

The first of these is conversion, marking a change originating from within the institution itself when existing frameworks come to be enacted in various ways, resulting in the institution’s reorientation towards new goals or missions. When new rules and procedures originating from outside the institution are put into place, the concept of displacement applies; it refers both to more radical exogenous alterations and to slow, incremental processes of change. Institutional layering marks a gradual process of change involving revisions, additions, and modifications, in which new elements are added to and placed alongside old ones. Each revision may be small, but together they accumulate and result in fundamental change. Institutions also face the risk of drift, an erosion stemming from an incapacity to respond to the external context. Another such risk is exhaustion, a slow-moving breakdown resulting in self-destruction from within (Mahoney and Thelen 2010; Thelen 2000, 2003). We use these concepts as analytical descriptors of the evolving national EQA systems.

We relate to the idea of a governing – evaluation – knowledge nexus in that we use the notion of “evaluation machines” (Dahler-Larsen 2012, pp. 176–182) as a basis for discussing the expansion of the EQA systems over time. Evaluation systems may be conceived of as evaluation machines, since they are based on “distinctive epistemological perspectives”, “organisational responsibility”, “permanence”, and a “focus on the intended use of evaluations” (Dahler-Larsen 2012; Leeuw and Furubo 2008, pp. 159–160). We elaborate further on the evaluation machine metaphor in the discussion.

Our account is based on documentary materials such as government bills, official reports, and descriptions of the EQA systems by the two national agencies responsible for higher education during the periods covered. We also use secondary sources such as research reports, articles, and books that describe and analyse the different policy contexts and EQA systems in Swedish higher education.

Shifting Policy Contexts: Continuities and Shifts in Evaluation Designs

In this section, we present continuities and shifts in Sweden’s higher education policies, along with the variety of national EQA system designs that emerged from 1995 up to the system decided on in 2016. We regard descriptions of the major reforms as contextual information necessary for understanding the designs of the EQA systems. The periods extend slightly into each other, since the termination of one system overlaps with the introduction of the next. Before we go into this presentation, we offer a short background on the history of EQA in Sweden.

–1994

External evaluation is a rather new phenomenon within the landscape of Swedish higher education institutions (HEIs). From the founding of the first university, in Uppsala in 1477, up until the 1960s, institutionalised forms of external evaluation did not exist. According to Gröjer (2004; see also Neave 1998), external evaluation was a response to problems related to the expansion of higher education and to the transformation from an elite institution to a mass university. From 1950 to 1960, the higher education system expanded from 16,400 students to 37,000, and by 1970, the number had increased to 130,000 (Gröjer 2004, p. 50). This expansion involved not only new groups of students but also new HEIs, staff, programmes, etc., and external evaluation was perceived as a state instrument for retrieving knowledge that could be used to plan, steer, and improve the sector according to contemporary utopian and rationalistic ideas. Notably, the first national agency (Universitetskanslersämbetet, UKÄ), which was established in 1964 and was responsible for planning and sizing, focused its first evaluations on pedagogical issues such as improvement in teaching and examination.

The 1970s saw the emergence of the idea that evaluation could be used at a national system level in order to control whether national goals were attained. Not only were examination frequencies, i.e. output, used, but data on the inner workings of HEIs in terms of teaching, examination, students’ previous knowledge, study habits, teachers’ working conditions, etc. were also acknowledged. As Gröjer (2004) noted, “effective development could only be achieved if the entire education system was scanned and evaluated” (p. 64). Thus, evaluation originally served the purpose of making the inner world of higher education “visible” to the state. External experts and agency staff who used implicit professional standards and indicators conducted the evaluations, and methods were adapted contextually without any overall agency policy governing the assessments. Information from the evaluations did not follow any explicit official plan but was still distributed to those who were affected (Gröjer 2004). In 1977, a higher education reform informed by ideas on decentralisation and management by objectives called for further national evaluations. The new national agency (Universitets- och högskoleämbetet, UHÄ) engaged independent researchers and started building networks to facilitate exchanges of experiences and ideas. Conferences, seminars, and information were also used to develop evaluation practices.

According to Gröjer (2004), notions of inefficiency within the higher education sector in the 1980s made evaluations increasingly important. Components like site visits, peer reviews, and criteria for international comparisons were implemented, and methods were developed. However, HEIs still held responsibility for teaching quality and education results, whereas EQA served to control quality at the national level. At the end of the 1980s, the concept of “quality” began to manifest itself within the language of EQA, and the perspectives of new groups of actors, including students, student unions, and representatives from working life (such as potential employers), were introduced in the evaluations. The purpose of the evaluations was both to improve and to control; however, EQA was still based on implicit criteria and a group of directly involved experts who, based on their professional discretion, designed and implemented each evaluation (Gröjer 2004). Gröjer (2004) describes a continuous process of professionalisation of the evaluative activities during this period. In this process, specific knowledge was developed, for example during site visits, where actors had the opportunity to learn from each other. Increasingly, the national agency (UHÄ) also argued that the HEIs must use the evaluation results for the purpose of improvement.

1995–2001

The overall aims of the higher education reform of 1993 were to increase the freedom of HEIs, to establish incentives for quality development, and to improve efficiency in the use of resources in HEI activities. This reform also dramatically changed the preconditions for HEIs, since the entire state allocation system was altered to a governing system based on economic incentives and productivity (Bauer et al. 1999; Government Bill 1992/1993:1). A performance-based funding system (Herbst 2009) built on per-student state grants was introduced, with one sum for each registered student and a larger sum for each student passing the course requirements. This meant that the previous system of applying for state grants in relation to the number of students was abandoned, as were the applications for funds to establish professorships and senior lectureships, which, when approved, were granted by royal letters. Another novelty compared to the central planning of the 1960s was the local freedom of the HEIs to decide on the educational content of their courses and programmes. Specific national/state requirements were, however, still in operation for professional programmes (e.g. teacher education and programmes for physicians). Internal quality assurance (IQA) systems at the HEIs were also introduced as a mandatory requirement, along with a demand for obligatory course evaluations (Bauer et al. 1999; Government Bill 1992/1993:1).

The design of the national EQA system during this period was, it was argued, meant to stimulate internal quality work in order to uphold and enhance quality. A new national agency, the Swedish National Agency for Higher Education (SNAHE, Högskoleverket), was established with the commission to push and control the HEIs’ work with quality issues (Government Bill 1992/1993:1). In line with these motives, the design of the EQA system was directed at the HEIs’ internal quality assurance work, leaving it to the HEIs to decide how this was to be carried out. The SNAHE carried out two cycles of these types of evaluations (SNAHE 1998). Another part of the EQA system was directed towards accreditation for awarding magister degrees. Both these evaluation types were performed in a similar way: a so-called self-evaluation based on a particular national template, a peer review with a site visit carried out by external colleagues, and a public report. The process as a whole was administered by the SNAHE. All HEIs were evaluated in 3-year cycles. During this period, with the 1993 reform as a starting point, recurring external control of higher education through a national EQA system was introduced for the first time in Sweden.

2001–2007

No major national quality assurance reform was decided on during this period, but some substantial changes were nevertheless made to the design of the national EQA system. The system was now said to be a means to guarantee a minimum standard in the education provided, to enhance trust in HEIs, to increase student influence, and to provide students with information so that they could make informed choices (Government Bill 1999/2000:28).

The focus of the EQA system’s design shifted to quality in education, that is, to evaluation of the quality of academic subject courses and programmes (Government Bill 1999/2000:28; Franke and Nitzler 2008). Another part of the design was the inclusion of thematic evaluations, directed at, for example, student influence and diversity. Accreditation for awarding degrees and certificates and of so-called scientific areas (e.g. the right for a university college to establish PhD programmes and award doctoral degrees) was another part. All types of evaluations were carried out in line with a local evaluation model developed in the 1980s by Sigbrit Franke-Wikberg (1990), who at this time was the director general of the SNAHE. The model was adapted to serve a national perspective and consisted of a self-evaluation based on a national template. The template for the subject and programme evaluations asked the responsible department for information about three aspects: the preconditions for courses and programmes, the education processes, and the outcomes of the education processes (SNAHE 2001, 2003). The template emphasised preconditions such as the number of faculty with PhDs, their positions, and the number of enrolled students. The three aspects were then to be related to one another in an analysis and an assessment of the education the departments provided. A self-evaluation report was sent to the SNAHE, and the work with it was supposed to engage the whole faculty in an internal discussion of their work. A group of subject/programme experts carried out a peer review, and for the first time, students were part of this external evaluation group. The peer-review group made site visits and conducted interviews with department heads and managers, teachers/researchers, and students. The group produced a written public report in which the SNAHE included its decision on whether the education was assessed to be of sufficient quality. Hence, the reviewers had to provide a cut score that encapsulated their judgement and formed the basis for the SNAHE’s decision. Sanctions were also introduced with this EQA system, meaning that if the department/HEI did not improve, the right to award degrees or certificates could be revoked. A follow-up was therefore performed a year later in cases where quality had been judged inadequate. This happened very rarely (Wahlén 2012), but the entire basis for an HEI to provide a certain level of education (course or programme) could be in jeopardy, because the state grants regulated in the 1993 reform were (and still are in 2019) coupled to the right to award degrees. This EQA system was run in a 6-year cycle, and the intention was to include all academic subjects and programmes.

During this period, the HEIs had to evaluate their educational quality in line with the national template and had to submit to assessments conducted by an external group of “colleagues” that included students. To adapt to these new circumstances and support departments in their work with self-evaluations, many HEIs expanded their administrations with new functions, such as “quality officers” at the faculty level and deputy vice-chancellors with education quality as a particular responsibility at the central level (Segerholm and Åström 2007). The number of evaluations that the universities and departments had to engage in increased substantially with this rather extensive system, and there were signs that previously existing internal evaluation models gave way to the national model and its templates (Segerholm and Åström 2007). There were also signs of evaluation influence before an actual evaluation process had even started; for example, the attention of the HEIs was directed at what was asked for in the national template rather than at what had previously been prioritised locally (Segerholm and Åström 2007).

2007–2011

The major change in the preconditions for higher education during this period was the so-called Bologna reform of 2007. The entire structure of Swedish higher education was altered to include three levels (undergraduate, advanced, and graduate) rather than two (undergraduate and graduate). This system introduced a new order for degrees and certificates that required students to achieve learning outcomes to obtain a degree/certificate. Consequently, specified learning objectives for all individual courses were now also required. Another novelty was the establishment of the term “subject areas” (i.e. either an academic subject or a composite of related academic subjects), in contrast to the previous, stricter division into academic subjects (e.g. political science, sociology, and psychology). At the end of this period, the government decided on a new national agency that was to exclusively supervise and evaluate higher education: the Swedish Higher Education Authority (SHEA, Universitetskanslersämbetet).

The design of the national EQA system remained much the same, but with some stress on the relationship between the learning objectives (i.e. the requirements for passing an individual course) and the learning outcomes (i.e. the requirements for a degree/certificate) (SNAHE 2007). Evaluations of the IQA systems at the HEIs were reintroduced and followed the recommendations of the European Association for Quality Assurance in Higher Education (ENQA) (Ministry of Education 2009; SNAHE 2007). The subject and programme evaluations were to be proportionate, based on a simplified national self-evaluation template: those who seemed to live up to the quality requirements were to be evaluated less extensively (i.e. without site visits). The introduction of rewards (a distinction for an eminent educational environment) for departments that delivered “good quality” education was a new feature in this period (ibid.).

The design of the EQA system more or less emphasised the constant presence of external control but also directed some attention to the general idea of the relationship between (learning) objectives and outcomes. This system also represented a mix of sticks and carrots (i.e. threats to withdraw the right to award degrees, and quality rewards) as a way to stimulate, or force, the HEIs to adapt to the new conditions.

2011–2014

Swedish higher education in the last period covered in this chapter was influenced by all previous reforms, which were layered on top of each other, since none of them had been dramatically challenged or altered (Thelen 2003). There was a kind of incremental process towards a higher education system increasingly characterised by New Public Management (see Pollitt 1995; Pollitt and Bouckaert 2017). As an additional step in this direction, the government decided in 2010 on what has come to be called the “autonomy reform” (Government Bill 2009/2010:149). This reform concerned local freedom for the HEIs to organise internally, make decisions on types of positions and requirements for employment, and allocate resources internally at their own discretion. Just before this autonomy reform, the government, after a tense conflict with the SNAHE, decided to reform the design of the national EQA system (Government Bill 2009/2010:139). Its results-oriented design was thereafter severely critiqued (see, e.g. Kettis and Lindberg-Sand 2013); we will return to some of the reasons for this in the coming chapters. The SHEA’s membership in the ENQA was revoked because this system did not fully adhere to the ENQA statutes, and the system was finally terminated. The last evaluations in the system were carried out in spring 2014, and the final public reports were published in 2016.

When the design of this EQA system was decided upon, it was justified in the policy texts by the need to increase quality in higher education (Government Bill 2009/2010:139). Sweden, it was argued, also needed to strengthen its international position in the global economy and education market. A third motive was the need to clarify education quality in relation to the students and to society at large. As in the 2007–2011 system, it was quality in education, that is, in subject areas (the new term introduced earlier), that was to be evaluated. Also, as in previous systems, accreditation for the right to award degrees and certificates was part of the design.

The dramatic change concerned how quality in education was to be evaluated: from a model in which the relations between the preconditions for education, the process, and the outcomes/results formed the basis for assessing quality, to a new evaluation model decided by the government. (This decision was in itself quite unique, because the responsible national authority normally makes such detailed decisions.) This design was product oriented (Franke-Wikberg and Lundgren 1980, 1981; House 1978) and mainly directed at assessing student outcomes as measured through the indicator of students’ independent projects (in the social sciences often limited empirically based studies presented as a small thesis/report). As before, a mandatory self-evaluation in line with a national template was required, asking for student grades, the share that passed, etc. Quality assessment was delegated to an expert panel of peers, students, and representatives from areas external to the HEIs, such as private companies or the public sector. A sample of students’ independent projects for the bachelor, magister, and master degrees, the self-evaluation, and video interviews with department representatives, teachers, and students (instead of site visits) formed the basis for the assessments (SNAHE 2010, 2012, 2013). The external panel produced a public report that included the SHEA’s decision. If the SHEA decided that the quality was insufficient, a plan for improvement was requested and a follow-up conducted. Sanctions were the same as before, but the carrots now included resource allocation by state grants partly related to the assessment of quality (Government Bill 2009/2010:139). The political focus on “quality” during this period can be observed in the government bill (Government Bill 2009/2010:139) in which the design was proposed: the term “quality”, alone or in connection with other terms, appears on average eight times per page (Segerholm 2010), without being given any substantial meaning apart from student outcomes as measured by assessing students’ independent projects for the bachelor, magister, and master degrees.

As we will see in the chapter “Hayek and the Red Tape: The Politics of Evaluation and Quality Assurance Reform – From Shortcut Governing to Policy Rerouting”, the design of the national EQA system in this period marks a turn in evaluation ideology in two important ways. First, the system was deliberately based on ideas regarding autonomy, in the sense that it steered evaluation away from preconditions and education processes in an attempt not to interfere with the internal work of HEIs. Second, it involved a mode of governing by evaluation in which the government, by rather harsh means, forced a totally different model on the HEIs – a design based essentially on the relationship between the expected learning outcomes for a particular degree and student outcomes as measured by assessing students’ independent projects. The stress was clearly on what has been labelled evaluation of effects (results), quite independent of education and learning processes and preconditions. There are examples, however, of self-evaluation reports being given more weight in the evaluations. This slightly increased stress on self-evaluations occurred during the period, largely, as we understand it, in response to criticism from the HEIs. The emphasis on evaluation of effects is also visible in the composition of the external panels, in which representatives external to the HEIs (such as potential employers) were now for the first time acting as evaluators.

Discussion

In this chapter, we have described the major reforms in Swedish higher education and how the national EQA systems have been designed and used and have influenced HEIs. We have noted reproduction as well as change in the relatively short history of EQA in Sweden. Many design components of the systems (e.g. peer reviews, self-evaluations, site visits, public reports) were introduced in the 1980s and 1990s, as EQA was amplified on a broad national scale, and these were later complemented by additional components (e.g. thematic evaluations, accreditation) that were combined in different ways over time, producing new aggregate system designs. We thus identify an overall process of expansion and change in the comprehensiveness of the various EQA system designs over time. The designs developed from a rather limited scope as late as the 1990s, comprising evaluation of the HEIs’ IQA systems and accreditation, via a very comprehensive design including, at a minimum, quality evaluations, thematic evaluations, and accreditation, to an intermediate state in the 2011–2014 period. The result of these developments, in which the designs of the different EQA systems are layered over one another, mixing and blending new parts with old, is a reorientation (Mahoney and Thelen 2010).

Moreover, ways of organising designs by way of technologies like visibility, comparability, economic rewards, and sanctions that foster and trigger certain modes of behaviour have been added over time. Such technologies are productive from an organisational perspective in the sense that they make things happen, and they also have a radical impact on the ways individual components are employed. For example, if the quality of HEIs is to be comparable on the basis of public reports, these reports must be reliable and comparable to all others of a similar kind. A public report that is part of an EQA system and explicitly serves the purpose of comparability thus puts increasing demands on methods and ways of writing in terms of validity and reliability. Such issues related to changes in EQA designs, and the implications of such changes for practices and behaviours, will be explored further in the upcoming chapters of the book.

Governing by Changing Designs

The issue of change is interesting in different ways. First of all, changes in national EQA systems do not come about by themselves; each reform involves a political process and agency work to plan, design, and implement new system parts and ways of organising them. Change thus produces fundamental shifts in the work of the actors directly involved in evaluations. Our results therefore raise questions about whether the governing potential of the EQA systems in the Swedish case partly relies on the shifts themselves. By constantly changing the systems, expectations are also changed and form one important part of the work of governing. While change in EQA systems serves to produce change (i.e. improvement) within HEIs as new aspects of practices are evaluated over time, this change arguably also produces social acceleration (Rosa 2013), since each new national system brings about time-consuming efforts within the HEIs in terms of interpretation and translation (Ball et al. 2012) – efforts that are not only related to concrete practices of dealing with evaluations but also to core activities such as organisation, planning, teaching, and examination.

Overall, our account confirms Hopmann’s (2008) thesis that higher education over the last two decades has moved from being an internally managed “ill-defined problem” (evaluated by professionals themselves who needed leeway to define their own practice) to a “well-defined problem” that is managed and controlled by external (and internal) “expertise” by way of using indicators and standards. According to Hopmann (2008): “Expectation management changes [higher education] dramatically. The core focus shifts to more or less well-defined expectations of what has to be achieved by whom” (p. 424).

Although we recognise that European (and global) EQA policy interacts with Sweden’s national policy, the results also show that references regarding EQA design are formed rather endogenously through the social and institutional contexts in which the interactions are established. One observation is that the overall picture of the changes in the national EQA systems can, in Mahoney and Thelen’s (2010; Thelen 2000, 2003) terms, be identified as institutional layering. However, over time there has also been an ongoing process of displacement that has changed the entire direction of Swedish higher education through the different designs of the EQA systems and particularly the various kinds of evaluation models. Displacement here involves fundamental change through more active interventions in prior arrangements in terms of democratic ideals and the creation of new market-oriented alternatives in their place. Overall, this development implied a relative suspension of state ambitions to guarantee equivalence within the higher education sector. Instead, students were increasingly seen as consumers, and diversification in terms of education quality was seen as a problem that could be targeted through competition. This displacement is particularly visible in the 2011–2014 system, in which increased stress was put on providing information to students to facilitate informed choices, on informing society at large of the accomplishments of higher education in general (accountability), and on including representatives of potential employers. This system was imposed on the HEIs, and traditional ideas on evaluation were displaced in favour of new components and technologies associated with new behavioural logics – overall, an approach alien to the higher education sector at that time.

A second observation is that the change in evaluation models displays a successive advancement of comparability among HEIs. Comparability as a technology hence serves the purpose of establishing common standards and agreements and of organising the HEI sector within an international and national market. Taken together, external demands have increasingly, and bit by bit, resulted in a displacement whereby HEIs become more consumer oriented, a movement provoked by the evaluation exercises. In these processes, HEIs need to spell out precisely what it is the students need to learn and also the HEIs’ degree of success in that respect (a declaration in advance of the quality of the service the student is “buying”, similar to customer charters). Rider et al. (2013) describe the profound impacts of universities’ transformation from public and democratic institutions into marketised networks. These changes in the higher education system are similar to some of the changes observed in education more generally in Sweden and elsewhere (see, e.g. Ozga et al. 2011).

Governing by Expectations

When considering the expectations these different EQA systems may give rise to and the governing role they fill, we would like to emphasise the following:

After the 1993 reform, particularly from 1995 onwards, there has been a constant presence of some kind of national evaluation of higher education. This constant presence of external control also leads to the expectation that it will be there and will continue to be present. In turn, this is part of making higher education “auditable”, in Power’s (1996) terms, or evaluable (see also Sahlin-Andersson 1995).

In the later periods, when the designs of the EQA systems first included sanctions, and then rewards, the HEIs also developed expectations of such sticks and carrots. The possibility of having the right to award degrees and certificates revoked makes the HEIs expect such sanctions to be used (which they are, albeit rarely). The consequences of expectations of such high-stakes evaluations are well known in educational research, particularly concerning widespread testing, where phenomena such as teaching to the test and window dressing develop (see, e.g. Linn 2000). Compliance is another consequence, one that easily makes educational considerations give way to juridical or managerial ones in order to avoid criticism (Lindgren et al. 2012; Solbrekke and Englund 2011).

From the implementation of the Bologna system, with Sweden’s stress on a rationale based on objectives and results, manifested particularly in the design of the 2011–2014 EQA system and its emphasis on student outcomes and attainment, the state expects the HEIs to deliver students who produce independent projects that are assessed as good enough. Known consequences of this include, for example, changes in resource allocation so that supervisors in courses for independent projects get more time for supervision while teachers in other courses get less time for teaching (cf. Sørenssen and Mejlgaard 2014, pp. 26–27). Overall, such strategic responses to EQA raise critical questions. Do the designs of national EQA systems provoke the desire to improve or comply, while taking away or distorting the performance itself? Hence, as pointed out by Hopmann (2008): “only those results which can be ‘verified’ according to the stakes given and do not meet expectations become problematic, and only those outcomes which meet the pre-defined criteria are considered a success” (p. 424).

As noted above, accreditation has been part of all the EQA systems and has stayed much the same over the different periods – an important continuity to acknowledge despite the simultaneous and ongoing processes of change. The different national agencies (the SNAHE and the SHEA) have had more or less the same expectations for what the HEIs have to show in order to get permission to offer PhD programmes or to obtain the right to award degrees and certificates. This leads to stability in what the HEIs expect these accreditation processes to direct attention to, which is foremost certain preconditions, such as the share of teachers with a doctorate. One known consequence of this is that HEIs try more intensely to increase their share of faculty with a doctoral degree when they apply for the right to start a PhD programme. In general terms, the consequence of these reciprocal expectations is that the HEIs direct resources towards living up to the different requirements in the evaluations.

The successive additions to the external panels of students, and of future employers in the 2011–2014 system, teach the HEIs, and develop the expectation, that parts of society outside the higher education sector have legitimate interests in the scrutiny of higher education. Expectations are also raised that these actors have sufficient knowledge and competence to evaluate higher education. Extended influence by external stakeholders is by no means a uniquely Swedish higher education phenomenon, as Deem et al. (2007) and Magalhães et al. (2018) show in their studies of managerialism and higher education governing boards.

The state, on the other hand, expects the HEIs to accept all sorts of evaluators and to acknowledge that the expertise of the external panels is sufficient when it comes to higher education. A plausible consequence of these last two sets of expectations is a shift in the mindset of HEI managers and teachers/researchers towards being more receptive to external demands on the direction of their work, that is, towards making higher education and research better adapted to market needs. For example, this may mean increased efforts to produce more “useful” (applied) education and research. This is an example of what Dahler-Larsen (2012) labels constitutive effects, pointing to the potential of evaluations to influence not only behaviours but also our perceptions of the phenomenon/activity/programme being evaluated.

The final kind of expectation we bring forward is based on the descriptions of the different designs of the national EQA systems: the shifts in designs themselves make the HEIs expect changes. A consequence is that it has become necessary for the HEIs to always keep an eye on national policy developments and on what is required of them. They thus accept constant change, constant pressure, and constant control and must stay alert, thereby possibly avoiding the risk of drift (Mahoney and Thelen 2010). Depending on the shifting, but also stable, detailed requirements in the different designs of the EQA systems, governing by expectations concerns both what the HEIs expect the state (the national agencies) to do and what the state and the national agencies (through decisions and policy) expect to happen at the HEIs.

Building an Evaluation Machinery?

Overall, we contend that the historical process of establishing national EQA systems in Sweden resembles what Dahler-Larsen (2012) describes in terms of evaluation machines. In this book, we use the evaluation machine analogy in our explorations of evaluation as a practice in governing the Swedish higher education case. As shown in this chapter, the EQA systems change constantly, leading us to use the notion of an “evaluation machinery” to denote the assemblage of elements that we have identified during the period covered. We equate an evaluation machinery with Dahler-Larsen’s characterisation of evaluation machines as an ideal-typical concept that draws attention to developments within the audit society, where evaluation has become institutionalised and professionalised so that “arbitrariness and subjectivity” are eliminated (Dahler-Larsen 2012, p. 176). They are:

[m]andatory procedures for automated and detailed surveillance that give an overview of organizational activities by means of documentation and intense data concentration. (Dahler-Larsen 2012, p. 176)

Similar to evaluation machines, the Swedish national EQA machinery has become permanent and repetitive over time and functions as a producer of “streams of information” rather than occasional reports (Dahler-Larsen 2012, p. 177). It has become increasingly embedded in HEIs’ organisational procedures of verification and resource allocation; EQA is thus framed by ideas of “organizational responsibility” (Dahler-Larsen 2012, p. 177). As such, EQA has also become a prospective rather than just a summative form of evaluation. Broad ranges of activities related to EQA are “planned in advance so they can be intentionally linked to decision and implementation process” (Dahler-Larsen 2012, p. 177). EQA is hence increasingly reciprocal and has become a natural condition for HEIs. Over time, EQA has come to be based on “distinctive epistemological perspectives” and increasingly “relies on a number of tools or scripts such as definitions, indicators, handbooks, procedures, guidelines, etc., to support fairly standardized operationalisations” (Dahler-Larsen 2012, pp. 177–178). Finally, as an evaluation machinery, EQA covers “phenomena that have broad scope in time and space” (Dahler-Larsen 2012, p. 178). Higher education involves extensive and complex activities that are detailed “in a systematic and integrated way that permits comparison among areas of activities” (Dahler-Larsen 2012, p. 178).

Finally

The notion of an evaluation machinery harbours a range of aspects that will be explored further in the upcoming chapters of the book. This notion draws attention to the role of documentation, and of specific forms of documentation in terms of self-evaluations, that have become institutionalised over time. An evaluation machinery also requires distinctive roles and knowledge for its functioning. It must be designed, engineered, and operated (Dahler-Larsen 2012). For example, what are the implications of increasing demands on external assessment panels and site visits in terms of forms of knowledge, expertise, experience, and social competence?

The issue of constitutive effects (Dahler-Larsen 2012) will also be pursued as an important theme in the book. In some of the following chapters, we look more closely into this and explore some of the national EQA systems and processes described above, their consequences, and the ways they influence and govern higher education.

In the next chapter, however, we turn to the wider international context and situate the Swedish example within Europe in order to understand developments in higher education policy and EQA systems.