
1 Introduction

In response to laments about administrative burdens and ‘reform fatigue’, many university leaders have called for a prioritization of ‘internal quality assurance’ over ‘external quality control’. Since as early as 2003, the European University Association (EUA) has promoted “a coherent quality assurance (QA) policy for Europe, based on the belief: that institutional autonomy creates and requires responsibility, that universities are responsible for developing internal quality cultures and that progress at European level involving all stakeholders is a necessary next step” (EUA 2003, 9). Indeed, it would be strange if universities did not take this responsibility, “since quality management, at least theoretically, can have potential academic benefits” (Pratasavitskaya and Stensaker 2010, 3).

Underlying this idea is an implicit assumption that QA is in the best interest of universities because it fosters the development of procedures and mechanisms meant to ensure that “quality, however defined and measured, is delivered” to the stakeholders (Harvey and Green 1993, 19). By setting up QA processes, universities would show the larger public that quality in general and quality improvement in particular are an ongoing concern in the governance of higher education institutions. Moreover, individual academics would continuously try to improve their scientific work and teaching, in line with the needs of employers and students.

In this light, an interesting empirical question is why most universities—and indeed most professionals in higher education—do not internalize quality assurance. In fact, we find considerable resistance to this practice, both in the academic literature and in practice (Apple 2005; Ball 2003). In Romania, the topic has attracted scholarly debate, as universities are generally considered to fail to internalize quality assurance (Păunescu et al. 2012). The present paper asks why this is the case; in other words: why do Romanian universities not internalize quality assurance?

We address the question by drawing up five different hypotheses as to why quality assurance is not internalized in Romanian universities. The hypotheses are taken from the public policy literature as well as the literature on post-communist transitions. They are then tested on empirical data consisting of national policy documents on quality assurance and 187 semi-structured interviews with 327 people (managers, faculty members, administrators and students) at five universities. After an analysis of the evidence, we argue that there are top-down problems with the internalization of quality assurance, caused by ambiguous and inconsistent national regulations focused on multilayered evaluation procedures. At the same time, problems arise from the interpretation of quality assurance at lower levels of decision-making. These hypotheses are then used to construct a narrative of why Romanian universities fail to internalize quality assurance.

The paper proceeds as follows. It starts with a short background on the history of QA in Romanian higher education, with an emphasis on the difficulties encountered. In order to explain problems in the internalization of QA, we then provide some conceptual clarifications on the notion of ‘quality assurance’ and the differences between its ‘internal’ and ‘external’ variants. Next, we advance five hypotheses for the failure to internalize QA in Romanian higher education institutions. After outlining our research design, we present the analysis of our empirical data and discuss its implications.

2 Internal Quality Assurance in Romanian Universities—a Mere Formality?

In the Romanian higher education system, QA has existed as such since 2005, when the government passed an Emergency Ordinance to comply with the ‘European Standards and Guidelines on Quality Assurance in Higher Education’ (2005). Before this date, the idea of quality management was limited to the accreditation of higher education institutions, regulated since 1993 in order to tackle the mushrooming of the private sector—a common phenomenon in post-communist countries (Scott 2002). Throughout the 1990s, a National Council for Academic Evaluation and Accreditation (CNEAA) was appointed by the Ministry of Education to run the accreditation process, focusing on staffing, infrastructure, management and administrative capacities (Păunescu et al. 2012, 317). The 2005 legislation created a new autonomous public institution—the Romanian Agency for Quality Assurance in Higher Education (ARACIS)—which took over the accreditation process and was entrusted with responsibilities in the authorization of study programs and external quality assurance more broadly. The law explicitly distinguished internal from external quality assurance, and specific provisions focused on external evaluations—defined as “multi-criteria examinations of the extent to which a higher education institution fulfills the reference standards” (Emergency Ordinance 75/2005, Art 3[2]). Accordingly, quality was to be ‘assured’ through “a set of activities meant to develop the capacity of universities to elaborate, plan and implement study programs, thus gaining beneficiaries’ trust that the institution is maintaining quality standards” (ibid., Art 3[3]). More importantly, however, external quality assurance was linked with the accreditation of universities, upon which ARACIS was to decide.

Since universities depended on ARACIS for their legal survival, they formally complied with external requirements for quality assurance without necessarily developing systems of their own (Vlăsceanu et al. 2011, 25). As a result, universities failed to consider internal quality assurance (IQA) as a managerial instrument meant to enhance the quality of education (Păunescu et al. 2011, 30–31); instead, they viewed it as an auxiliary bureaucratic procedure mentioned in the organizational chart but separated from the daily activities of teaching and learning in the university. In the absence of a “local culture of quality” (Vlăsceanu et al. 2011, 26), IQA was just another ‘empty-shell’ institution imported into the Romanian higher education landscape, which came to be implemented without substantive effects.

But if QA were to have ‘substantive effects’ at the level of universities, what would they look like? The next section reflects on this issue from a theoretical perspective.

3 Conceptual Notes on ‘Quality Assurance’

From an analytical point of view, we suggest that the lack of substantive effects in the implementation of quality assurance stems from the fact that universities focus only on ‘compliance’ with the rules imposed by QA policy, without identifying with or believing in the ideas underlying it. More specifically, academics in Romanian universities—for whatever reasons—do not internalize the various policies and norms entailed in QA. Indeed, if all Romanian academics believed in the necessity of QA-related evaluation practices, we would probably not be discussing this particular policy problem. The present section expands on the issue of ‘internalization’ by explaining our understanding of ‘quality assurance’ and the normative connotations behind it.

What we mean by the concept of quality assurance is a variety of techniques for the evaluation of higher education and research with the purpose of improving their quality. These practices have in common that they place a normative emphasis on ‘continuous self-improvement’ and ‘stakeholder communication’, embedded in procedures that are subject to inspection by peers and/or professional evaluators. The concept thus includes, among others, institutional evaluations, the accreditation of study programs, and even league tables produced by governmental bodies. However, it probably does not cover managerial attitudes with a different normative emphasis (such as loyalty to superiors, or cut-throat competition with peers) or evaluations of specific professional ‘products’ rather than of the professional as such (e.g. peer review in academic publishing). Nevertheless, it is not so easy to draw clear boundaries around the technical and normative aspects of QA. There has been controversy over how the concept can best be adapted to higher education, culminating in a variety of different approaches and terms. We thus see a mushrooming of words like ‘audits’, ‘evaluations’, ‘reviews’ and ‘accreditations’ and a myriad of acronyms like ‘ESG’, ‘ISO’, ‘EFQM’, ‘PDCA-cycles’ or ‘TQM’—each denoting different techniques of ‘doing QA’ as well as different people involved in this practice.

While many debate the differences between these instruments, we think it is important to analyze the shared ways of thinking behind them, their common procedures and the interaction between their various forms. In this sense, we aim to analyze the phenomenon that has sometimes been referred to as an ‘audit culture’ (Shore and Wright 1999; Strathern 2000) or even an ‘audit society’ (Power 1997). Analyzing the shared way of thinking behind these instruments is important because some common conceptual distinctions may not be as clear-cut as they seem. The distinction between ‘Internal Quality Assurance’ and ‘External Quality Assurance’ is a chief example. The literature often makes this distinction, whereby:

Internal quality assurance refers to those policies and practices whereby academic institutions themselves monitor and improve the quality of their education provision, while external quality assurance refers to supra-institutional policies and practices whereby the quality of higher education institutions and programs is assured (Dill and Beerkens 2010, 4).

This distinction is relevant because one of the key reference documents, the ‘European Standards and Guidelines on Quality Assurance’, places the main responsibility for QA on the shoulders of ‘higher education institutions’ (ENQA 2005). Indeed, the main policy documents in Romanian higher education make the same distinction. But is it so easy to separate the ‘internal’ from the ‘external’? The professional scholar, student or departmental coordinator may consider both types of quality assurance as ‘external’. Inspectors with a mandate from the ‘state’ or from the ‘rector’ may be equally insensitive to departmental standards and practices. More importantly perhaps, both ‘external’ and ‘internal’ QA are the object of public policy. Indeed, it is the very purpose of much ‘external’ QA to analyze the functioning of the ‘internal’ QA system. In other words, it is important to question whether ‘internal’ and ‘external’ can be disentangled so easily.

A second—and related—conceptual distinction is often made between quality assurance for ‘accountability’ and quality assurance for ‘improvement’. While the former notion emphasizes the control aspect of QA, the latter emphasizes the reflexive aspect (Bovens 2010). While this may seem a useful conceptual line, the border is hard to draw in practice. Even the hardest forms of control are often justified through the language of improvement (Shore and Wright 1999). Therefore, the relevant question to ask is: ‘accountability’ and ‘improvement’ for whom? A specific change in teaching and learning methodology may be considered an improvement by the government and at the same time a regression by professionals, or vice versa.

In sum, then, the theoretical discussion on quality assurance requires us to unpack distinctions and analyze what they mean for those involved in its various practices. The following section will continue this discussion and propose various reasons why Romanian universities do not internalize quality assurance.

4 Hypotheses on the Failure to Internalize Quality Assurance

We present five possible hypotheses as to why quality assurance is not internalized in Romanian higher education. The hypotheses are derived from public debates on higher education as well as from public policy frameworks and political science literature applicable to higher education. They are best understood as complementary to each other, even if there may be some apparent contradictions between them. The following table gives a schematic overview of our hypotheses. Although we probably cannot disprove any of them conclusively, we consider the likelihood of each hypothesis to be reduced if we find no empirical evidence to support it. Each hypothesis is discussed in more detail below, with reference to the type of empirical material we expect to find (Table 1).

Table 1 A schematic overview of the hypotheses regarding the internalization of quality assurance in Romanian universities

4.1 The Problem of Academic ‘Complacency’

The most straightforward explanation for why quality assurance is not internalized is that actors in universities do not see its purpose, since they are content with what they are doing in terms of quality. The reasoning behind this argument exhibits a form of academic ‘complacency’: people believe that they are good at what they do, and as a result they do not think they need quality assurance (whether external or internal). For instance, ARACIS considers that one of the main weaknesses of QA in Romania is that “higher education institutions still remain too ‘self-laudatory’ instead of showing an understanding of the role of self-criticism concepts for QA and the quality enhancement activities” (ARACIS Self-Evaluation Report 2013, p. 46).

Hypothesis 1: Quality assurance is not internalized because of academic ‘complacency’.

If this hypothesis holds empirical value, we should find that people in universities often praise their own activities while seldom reflecting critically on themselves, their colleagues, or their university. We should also find that acknowledging weaknesses is perceived as wrong or even ‘unethical’, especially when it relates to the activities of others.

4.2 Top-Down Policy Failure

If the problem does not originate from complacency, then the failure to internalize quality assurance might originate from the policies themselves. Top-down approaches in implementation studies view the policy process as a linear model wherein policy-makers specify straightforward policy objectives which are then put into practice at lower levels (Palumbo and Calista 1990). The underlying assumption is that actors at the top can control what happens in the implementation chain (Elmore 1978; Mazmanian and Sabatier 1989). By implication, policy failure can occur when central-level guidelines are not clear and consistent enough for implementers to follow (Van Meter and Van Horn 1975).

Hypothesis 2: Quality assurance is not internalized because of ambiguous and inconsistent national regulations.

If this hypothesis is correct, we expect to find that people in universities regard national frameworks on quality assurance as overregulated, difficult to disentangle for the purposes of implementation, and changing too fast for them to adjust.

4.3 Bottom-up Policy Failure

A different, yet complementary perspective comes from the bottom-up approach in implementation studies, which argues that policy results are ultimately dependent on target populations and local deliverers (Berman 1978; Lipsky 1980; Matland 1995, 148–150). Accordingly, the success of a policy does not lie at the macro level with the framing of legal requirements (which of course provide certain structures of incentives), but at the micro level—where implementing actors need to be asked about their problems, goals and activities in order to identify relevant policies and ways to implement them (Hjern et al. 1978).

Hypothesis 3: Quality assurance is not internalized because it lacks support from people ‘on the ground’.

If this hypothesis is accurate, then we should find discordance between the narratives at central level and those of actors inside universities. Importantly, we should encounter actors in institutions who (at the very least) express skepticism about the content and necessity of QA-related evaluation practices, suggesting that national policies have little legitimacy or relevance on the ground.

4.4 Problems in Overcoming ‘Legacies from the Past’

A prominent narrative in the transition literature in political science is that of ‘communist legacies’, which generally prevent people from adapting to new approaches and mindsets (Kopstein 2003). In its more popularized form, this ‘legacy’ is a sort of vicious circle, with people distrusting each other while the state is unable or unwilling to engage with new institutional forms. In its more serious form, ‘legacy’ is taken as a sociological type of institutional ‘path-dependence’ (Thelen and Steinmo 1992; Mahoney 2000) which views the policy process as incremental and overall resistant to change (Hall and Taylor 1996, 941). Bruszt and Stark (1998), for instance, emphasize that the post-communist transition in Eastern Europe consists of institutional innovations, although these are both enabled and constrained by earlier political choices. In this sense, failure to absorb the new institutional set-up is a function of both past failures and faulty design.

Hypothesis 4: Quality assurance is not internalized because of institutional (communist) legacies from the past.

If such ‘path-dependence’ exists, we should find dominant institutional forms from the past that continue to influence actors today. In particular, we should find that academics refer to formal or informal institutions with a long history that are still prevalent in the university, and that these institutions and historical practices compete with or simply overshadow the implementation of QA.

4.5 Logic of the Market for Higher Education

In contrast to theoretical frameworks focused on path-dependence stands a well-known theory that emphasizes the logic of the market. Not only did post-socialist countries democratize, but some also imported a specific type of capitalism, namely neo-liberalism (Bohle and Greskovits 2012). Already in earlier discussions, some scholars highlighted that new market arrangements could erase both old structures and attempts at new institutional forms (Burawoy 2001). Although the market is itself a typical institution subject to path-dependence, we think it warrants a separate hypothesis. The difference is that the market is not so much a ‘past-dependence’ as a ‘future-dependence’, which influences operations in the present based on actors’ cost-benefit analyses (ibid.).

Consequently, the market may both inhibit and encourage quality assurance practices, depending on the individual preferences of actors (i.e. higher education institutions, students, professors, employers, etc.). On the one hand, the market may place little value on traditional academic standards of quality while rewarding only the formal qualifications of graduates, which can lead to the cheap milling of diplomas. On the other hand, since the concept of quality management was pioneered in industry, the market may encourage a constant concern with quality assurance. Since we are concerned with answering why QA is not internalized, we will only discuss the former interpretation of the argument.

Hypothesis 5: Quality assurance is not internalized because the market does not reward its operation.

If this hypothesis holds empirical value, we expect to find that members of the university community do not perceive the market to reward quality assurance. Moreover, the internalization of QA should be perceived by these same actors as ‘not worth the time and money’. Instead, their perception would be that the market rewards other types of activities, such as popular study programs with little substance.

Having outlined the possible explanations for the failure to internalize QA in Romanian universities, the next sections present the data and the main findings. Before that, some elements of research design are introduced.

5 Research Design

From a methodological standpoint, our research follows in the tradition of interpretive policy analysis, exploring both discourses and the effects of ideas on practices (Fischer and Forester 1993; Finlayson et al. 2004). Within this framework, the purpose was to understand how actors in universities engage with quality assurance in terms of the activities, effects and meanings associated with it (Milliken 1999). To this end, we examined three dimensions: (a) what is being done at the university/faculty level under the heading of ‘Quality Assurance’; (b) what these activities lead to; and (c) how actors relate to this process.

In order to investigate how people “make sense of their lived experiences” (Yanow 2007, 410) with quality assurance, we used two primary methods: interviews and document analysis. Five field visits were carried out between December 2012 and May 2013 to a representative sample of universities: the West University of Timisoara (UVT), the Babes Bolyai University in Cluj-Napoca (UBB), the Gheorghe Asachi Technical University (TUI) in Iasi, the Romanian American University (RAU) in Bucharest and the Lucian Blaga University (LBU) in Sibiu. During the visits (which followed a standard template), we conducted 187 semi-structured interviews with a cross-section of the university population, including management in rectorates and faculties, QA commissions and departments, individual professors, and students (327 people in total). All interviews were transcribed, allowing for a structured analysis of the transcripts. We then constructed a database on quality assurance in Romania, consisting of national-level policy documents together with documents originating from the universities (institutional reports on QA) and the interview data. Each document was analyzed using a common coding procedure in the software package ‘Atlas.TI’. Inter-coder reliability was ensured through a shared list of codes and mutual evaluations of coding practices. The coded material was then examined in light of the alternative hypotheses proposed. The findings are presented in the next section.

6 Findings

Table 2 below presents a concise summary of our findings. As shown in the table, hypotheses 2 and 3 were confirmed by the evidence gathered, while we did not find any support for hypotheses 1, 4 and 5.

Table 2 An overview of the hypotheses proposed and their empirical validation

The empirical evidence thus goes against some of the dominant explanations for why universities fail to internalize quality assurance. Although we cannot definitively reject hypotheses 1, 4 and 5, we have not found enough empirical material to support them. In fact, when it comes to hypothesis 1, we often encountered the opposite situation: rather than being complacent, most interviewees manifested insecurity about their professional status and awareness of the gaps in their ‘scientific’ work, coupled with an evident desire for self-improvement. In a similar vein, in relation to hypotheses 4 and 5, we discovered that quality assurance is not directly inhibited by ‘communist legacies’ but instead seems to be encouraged by market mechanisms. Indeed, the university that was most dependent on the market (the private one) had strongly internalized the improvement values associated with quality assurance.

Clearly, there is no single mechanism at play that prevents QA from being internalized in Romanian universities. The two hypotheses confirmed by empirical evidence are thus complementary rather than mutually exclusive; as such, we aim to construct a narrative in which hypotheses 2 and 3 together provide a ‘full story’ of the reasons why QA is not internalized. The heart of the problem, according to our data, lies in the nature of policy-making in Romanian higher education, which fails in both its top-down and bottom-up dimensions. On the one hand, policy failure originates from unclear and inconsistent legal provisions that result in a bewildering array of evaluation procedures and administrative structures considered burdensome by actors in universities. On the other hand, policy failure derives from the exclusion of lower-level actors from decision-making on QA policy; as a result, these actors feel no ownership over their IQA systems. On the contrary, they regard quality assurance as a tool of the government, imposed from above, with the purpose of controlling universities through various reporting mechanisms. As a form of passive dissent, they comply with QA requirements in a ritualistic manner, which is why the process fails to produce substantive quality enhancements.

Based on an initial analysis of the empirical evidence for each hypothesis in turn, we constructed a narrative as to why Romanian universities fail to internalize QA. In line with hypotheses 2 and 3, we present this ‘story’ below.

6.1 Top-Down Problems

The account starts at the macro level, with the design of national policies on quality assurance and higher education more broadly. Probably the most serious problem of the Romanian higher education system, as our data suggests, is the unstable policy environment: higher education policies change very frequently, and so do the procedures meant to ensure quality. Universities do not have a consistent set of rules to follow on QA and other activities in general, which creates confusion (since it is difficult to keep up to date with the latest legislative modifications) and prevents them from engaging in long-term planning. While the law on quality assurance has remained more or less in place since 2005, there have been many subsequent legal changes following the 2011 law on education, the classification exercise and associated legislation related to the evaluation of research centers (UEFISCDI 2010). Each of these changes has added to a build-up of frustration among many academics about quality assurance and its supposed remedies. As expressed by one professor:

Regulations are constantly changing and it is hard to follow up on them. Some of the regulations are not coherent. We are constantly on stand-by. This creates confusion and we cannot plan for the future. (Decision-Maker, Professor, Female, NS0302).

The back-and-forth with the national classification system, whose legal status remains unclear, was an oft-cited example of policy instability affecting the implementation chain. Specific to internal quality assurance, recent legislation obliged universities to separate QA commissions operating under the rector from curriculum and quality commissions at the Senate level, a requirement criticized by implementing actors as overlapping and counterproductive because the two bodies sometimes do the same thing (Decision-Maker, Professor, Male, AM0102).

Since neither the law nor the methodology specifies the boundaries of QA, people tend to understand it according to their own agenda. For instance, managers at faculty level would often link QA with the enforcement of sanctions on their employees. In the absence of flexible labor legislation, some university managers claimed that they would like to use staff evaluations for command-and-control purposes, e.g. to fire people (Decision-Maker, Professor, Male, AM0203). While QA may very well have the role of keeping track of professors’ teaching and research activities, it probably cannot substitute for legal requirements on proper academic conduct. IQA may be the wrong tool to prevent violations of professional standards such as academic corruption, unmotivated absence from classes or defiance of basic student rights. This is where labor and even criminal law are supposed to come into effect. As one interviewee put it:

We have moved from quality evaluation to quality control - this does not mean quality improvement exactly (Decision-Maker, Professor, Male, AM0202).

In addition, the legal framework on QA is not straightforward to implement. For example, the ARACIS methodology emphasizes the production of documents outlining procedures rather than substantive performance indicators on teaching and learning. As one interviewee noted:

Many of the things discussed on QA at ARACIS or the university level are empty of any content. For example, there is little in the way of ARACIS criteria that checks if teaching is suitable and relevant for the departments concerned. There is also little in the way of checking what actually happens in the classroom. It is important to check facts, not paper reports (Decision-Maker, Professor, Female, RS0802).

Although the legislation aims at the enhancement of quality by reference to numerous ‘standards’ and ‘procedures’, it is far from clear what these are supposed to achieve in terms of teaching and learning outcomes. Moreover, since universities rarely have staff trained in QA who can understand and apply the technical language coming from the national level, it is hard for them to identify with QA activities.

Another macro-level problem concerns the requirement to establish several administrative structures layered on top of each other. Typically, an institution has at university level a QA department (DMC) and a QA commission (University-CEAC), both operating under the supervision of a vice-rector responsible for quality management. These structures are complemented by a department of scientific research (under the supervision of another vice-rector) and a Senate commission on curriculum and quality. At faculty level, there are quality assurance and evaluation commissions (Faculty-CEAC), usually headed by the dean or by a vice-dean tasked with quality management. Within departments, specific people are sometimes appointed as responsible for QA, but usually these tasks fall to department heads. While all these structures are theoretically part of an integrated system, the relationship between them does not seem entirely clear to many interviewees. Usually, the Quality Assurance Department is the most active structure at university level, but the degree to which QA procedures are organized and followed up at faculty level depends largely on individual managerial initiative.

Owing to such complex institutional structures, there is a tendency to multiply procedures that are not always needed. Does the Senate really have to be involved in evaluating programs before ARACIS visits? Do faculties and departments really have to operationalize the strategic plan each year, and produce a report on their activities? There is a lot of frustration about the level of bureaucratization involved in running the IQA system:

[We need] to stop working twice for the same thing. Why do I need to have a faculty report and a QA report? Are they not the same thing? Why do we need two different reports and formats? (Decision-Maker, Professor, Male, RS0503).

Time management needs to become better. We are wasting a lot of time on useless things (Decision-Maker, Lecturer, Male, KG0705).

The QA process is characterized by huge quantities of bureaucratic requirements. We are lucky that the Vice-Dean for Quality Management takes care of these documents (Decision-Maker, Professor, Male, RS0604).

So far, the narrative presented reflects the top-down aspects of the problems in internalizing quality assurance. But our data shows that even if the national regulations had been perfectly clear and consistent, they might not have been applied on the ground. There are significant bottom-up elements to consider, and they are presented next.

6.2 Bottom-up Problems

Most significantly, our data suggests that members of the university community do not feel ownership over their IQA systems. Since there are direct links between external evaluations and the legal survival of universities, respondents seem to understand IQA as preparation for external inspection rather than internal reflection on teaching and learning:

The QA system was only created in response to the law and ARACIS requirements - there is no point to hide this fact (Decision-Maker, Associate Professor, Male, AM1201).

We are forced by all these different institutions, ARACIS, EUA, to do such evaluations (Decision-Maker, Professor, Male, AM0202).

This understanding highlights that IQA is implemented mainly to comply with the law and governmental regulations rather than to actually improve institutional quality. In this sense, QA is viewed as something imposed from the outside, through procedures meant to artificially create a ‘quality culture’. But since the focus is on reporting (externally), the IQA system is regarded as a tool of government designed to control universities by invoking the argument of accountability, which is perceived, especially by university and faculty management, as infringing upon university autonomy. Further down the implementation chain, it is no wonder that people react strategically:

We were even told from the university level: you do what you think is best, and don’t take the self-evaluation too seriously (Associate Professor, Male, AM0502).

Accordingly, people passively try to subvert this tool of government by carrying it out in a ritualistic fashion while hiding what they are really doing. Instead of open contestation, there is a sort of resignation and task avoidance, which is why QA cannot become internalized. For instance, most respondents believe that evaluation criteria are imposed from above by policy-makers with little experience in running a university:

The system is designed by bureaucrats who have never been in a university. Now this system meets the everyday reality of people who try to cope (Lecturer, Male, NS0902).

Universities need to be autonomous. (…) they need to be free to set their own path to excellence rather than being constrained by excessive regulation from the central level (Decision-Maker, Associate Professor, Female, RS0105).

Many problems derive from this. One interviewee referred to the difficulty of complying with the recently imposed research standards, given both the lack of resources (e.g. access to international databases) and the lack of expertise to conduct research at a European level. The unintended effect was that research quality probably decreased:

[Research indicators] have asked us to become ‘writing machines’. Books are written like this (snaps fingers) without reflecting on what should be written (Administrator, Assistant Professor, Female, KG0905).

I take information from students’ diploma projects. I give them some research to do, and I maybe get some papers from the research. It is maybe not so good, but both the student and I gain from this. (Associate Professor, Male, KG0503)

Simultaneously, assessment procedures do not account for differences between disciplines and fields of research. For example, in technical fields manuals are in great demand because of the fast-changing nature of the disciplines; however, their production is not counted as research (Decision-Maker, Associate Professor, Male, AM1003).

Moreover, QA procedures are often perceived as disconnected from the actual problems and goals universities have:

QA is not related to the improvement of quality: there has never been a bottom-up debate on what it should entail (Postgraduate Student, Male, AM0701).

For the average academic, QA has little utility in generating any type of change unless there is a personal desire for self-improvement. Without a connection between QA procedures and quality improvement, many academics see the QA process as purposeless, taking valuable time away from their teaching and research activities:

I was tormented years in a row by all this paperwork [for ARACIS evaluations]; when should you have time for research when you have all these additional tasks? (Lecturer, Male, AM1301).

In the language of the bottom-up implementation literature, this discussion can be summarized by saying that local implementers (individual academics) do not see IQA as responding to their institutional needs and goals, to their understandings of quality, or to how quality should be achieved. Although there are individual exceptions, IQA thus fails to produce the quality improvements stated as its objective.

7 Conclusion and Discussion

Despite being wrapped in a technical, enhancement-driven discourse, the discussion on the internalization of quality assurance is in fact as multifaceted as it is politically sensitive. This paper has demonstrated that there is no straightforward way to understand why actors in universities fail to routinize QA practices in their activities and subsequently use them to generate quality improvements, since the mechanisms at play are manifold. The analysis of the Romanian case has shown that the problems revolve around the process of policy-making, with underlying causes at both the macro level (top-down failure) and the micro level (bottom-up failure). Indeed, the inconsistency and ambiguity of national regulations, which are not linked to teaching and learning in any substantive way, lead actors in universities to feel burdened by QA and confused as to how they should implement and make use of its activities. Moreover, there is some discordance between central-level narratives focused on quality enhancement and accountability and the narratives of actors in universities, who generally feel no ownership over their IQA systems and fail to see the purpose of the multiple evaluation procedures. Therefore, actors on the ground reject QA practices as unnecessary and as infringing upon university autonomy, which is why they subsequently perform them in a superficial manner, as a form of passive dissent. In the end, there can be no talk of improving QA processes in the Romanian higher education system without the direct involvement and support of the people for whom they are effectively designed.

In light of the conceptual clarifications presented earlier in this paper, our findings may appear less surprising. Undeniably, it is difficult to separate the ‘internal’ from the ‘external’ when it comes to quality assurance. Academics are inclined to perceive all evaluations as ‘external’, regardless of whether they are conducted by governmental agencies, international bodies or their own institutions. At the same time, professional evaluators may miss the specificities of individual departments and disciplines, touching upon the sensitivities of local actors, who thus become less willing to move beyond ritualistic compliance with QA requirements. The issue hence returns to the second conceptual distinction mentioned above: for whom is QA supposed to produce ‘accountability’ and ‘improvement’? For individual academics in the universities that we visited, the answer is ‘not for us’. On the contrary, the government is seen as the main beneficiary of all evaluation procedures, followed perhaps, to a lesser extent, by the university management. As long as they do not see it as being in their best interest, actors in universities have no motivation to internalize quality assurance, which as a result fails to deliver on its promised quality enhancement objective.