Performance Management

  • Jan van Helden
  • Christoph Reichard
Living reference work entry
DOI: https://doi.org/10.1007/978-3-319-31816-5_2315-1

Keywords

Public sector; Performance indicator; Performance management; Organizational performance; Swimming pool

Definitions

Performance is a complex concept (Van Dooren et al. 2010, pp. 16–20; van Helden and Reichard 2013, p. 11): it comprises several measurable aspects concerning the functioning of an organization, and particularly what this organization achieves. In the public sector, performance is related to public policies and aims at realizing societally relevant goals. Performance measurement is defined as the measurement of performance indicators considered as relevant and useful by decision-makers in public sector organizations, for a broad variety of purposes, including the reporting of these indicators. Performance management is defined as the way in which the information of measured performance indicators is used for those purposes, such as planning and control, accountability, and learning. The targeting of performance indicators and the analysis of variances between targeted and realized values of performance indicators are the main elements of performance management.

Introduction

Performance management has always been important to some extent in the public sector, because rather than financial results, the accomplishment of societally relevant goals is what counts. Performance information about those goals can help monitor whether the underlying actions are appropriate. However, since public administration came under the influence of new public management (NPM) over the last three decades, public sector organizations have moved toward result orientation, and performance management has therefore become increasingly important (Hood 1995). Politicians are now interested in showing their voters the impacts of their policy making, for example, whether promised highway connections were realized and related traffic flows improved. Public sector managers now feel more accountable for the services they deliver, including the societal effects of these services, e.g., the number of theater tickets sold and the cultural value of the theater events.

This entry introduces and illustrates the main concepts of public sector performance management, including the way in which performance is measured with performance indicators and how performance information can be used by public sector politicians and managers.

The remainder of this entry proceeds as follows. The next section introduces the basic logics and mechanisms of public sector performance management and identifies the types of performance indicators of a public sector organization. The following section discusses the life cycle of a performance management system, which comprises the stages of design, implementation, operation, and use, followed by some applications of performance management in public sector organizations. The final section provides some reflections and conclusions. Various aspects of performance management are illustrated by examples from a variety of public sector organizations and – in separate text blocks – from the city of Groningen, the Netherlands.

The Basic Logics and Mechanisms of Public Sector Performance Management

A performance management system (PMS) can be established for different purposes. First, there are various internal purposes within a public sector organization (PSO), e.g., strategic and operative planning, steering, monitoring, controlling, and learning. Second, a PMS can be used for rendering accountability to external stakeholders and for providing data for external (policy) evaluation. Performance management (PM) in public administration is the result of a shift of emphasis from steering with inputs (e.g., via budgets) and based on formal, bureaucratic procedures toward a much broader spectrum of performance covering inputs and activities as well as outputs and outcomes. Politicians and managers are expected to focus their steering on goals and on politically intended outcomes. This comes along with a cultural change – from a bureaucratic to a managerial culture. PM is central to this new logic: it provides the data necessary to plan, steer, and control a PSO toward achieving a certain performance.

Like every organization, a PSO can be seen as a system in which inputs (resources) are transformed through throughputs (activities) into outputs (services) that ultimately generate outcomes (effects). Performance indicators (PIs) can be identified for inputs, throughputs, outputs, and outcomes. Particularly for measuring outcomes, several PIs will be needed to provide a realistic picture of the different features and dimensions of an outcome.

Additionally, this conceptualization enables an assessment of the organization’s efficiency and effectiveness, as displayed in Fig. 1. Efficiency relates the inputs to the outputs and establishes whether the outputs are produced at a sufficiently low cost level. Effectiveness links the outcomes to the outputs, enabling the organization to find out whether the outputs have led to the desired effects (Johnsen 2005, p. 11; Jackson 2011, p. 16; Van Dooren et al. 2010, p. 18). Other aspects of public performance are responsiveness to citizens and legitimacy toward various stakeholders, although these aspects are not linked to the model in Fig. 1.
Fig. 1

The transformation process of a public sector organization

The organization often defines objectives with regard to its transformation process. An objective is translated into measurable aspects through performance indicators (PIs), and subsequently targeted (ex ante) and realized (ex post) figures about these PIs are compared, both for accountability reasons and for considering corrective actions. An example can illustrate this.

Suppose a municipality observes that children living in low-income families show, on average, a low participation rate in sports and cultural events. It wishes to enhance the opportunities of this target group for taking part in these types of events. This objective is informed by an investigation indicating that participation of children in societal activities increases the likelihood of better school performance and of getting a job in the future. Suppose further that the target group of children between 6 and 15 years in low-income families numbers 20,000. Currently, only 15 % of this target group are active in the domain of sports and cultural events, and the municipality wishes to raise this share to at least 60 %. Participation in sports and cultural events has to be made measurable through PIs. Possibly relevant PIs are the numbers of children attending sports or cultural events on a regular basis of at least 30 times a year (PI sports and PI culture). The municipality further decides that taking part in either sports or cultural events is sufficient. In consultation with sports clubs and cultural organizations (museums, libraries, theater schools, etc.), the supply of relevant services is increased, for which the municipality provides subsidies amounting to 400,000 euros per year. After these additional opportunities are made available, supported by a promotion campaign, the municipality measures the actual figures of the PIs, which are PI sports = 9,500 and PI culture = 1,500; in total 11,000 children participate, which is close to but below the target value of 60 % of 20,000 = 12,000. Now, corrective actions can be considered, ranging from intensifying promotion campaigns to finding more attractive sports and cultural events, or ultimately accepting that a 55 % participation rate is not so bad.
This example shows that translating a public sector goal into appropriate PIs and related targets is a delicate process, in which diverging choices have to be made: what is meant by participation in society by children? What is an acceptable rate of participation? How many resources are needed for a supply that can achieve a certain target participation rate? In addition, the example shows that diverging types of PIs are relevant, ranging from resources (inputs) to cultural and sports supplies (throughputs) and from participation rates (outputs) to long-term effects on school performance and job chances (outcomes), although the latter are not explicitly measured.
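The arithmetic of this participation example can be sketched as follows (a minimal illustration using the figures given above; the variable names are ours):

```python
# Worked arithmetic for the municipal participation example.
target_group = 20_000    # children aged 6-15 in low-income families
target_rate = 0.60       # desired participation rate
pi_sports = 9_500        # measured: children attending sports events >= 30 times a year
pi_culture = 1_500       # measured: children attending cultural events >= 30 times a year

participants = pi_sports + pi_culture           # either type of event counts
target_value = int(target_group * target_rate)  # 12,000 children
actual_rate = participants / target_group       # realized participation rate
shortfall = target_value - participants         # gap that corrective actions should close

print(f"actual rate: {actual_rate:.0%}, shortfall: {shortfall}")
```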

Table 1 gives two further illustrations of performance indicators for different types of services. The first illustration about a municipal swimming pool is quite straightforward as it regards a process with standardized activities leading to clearly identifiable outputs and effects. The second illustration about a reintegration program for unemployed people concerns a relatively complex governmental service, which is characterized by heterogeneity of activities and ambiguous outputs and outcomes. Performance measurement is evidently more difficult in the second than in the first illustration. This can be explained further as follows.
Table 1

Different types of performance indicators with two illustrations (numbers or amounts of euros are defined per year)

Types of performance indicators | Definition | First illustration: municipal swimming pool | Second illustration: reintegration of unemployed persons on the labor market
Inputs | Resources in money or physical terms | 250,000 euros | 1.4 million euros
Throughputs | Activities | 800 h operation of swimming pool per year | Different types of activities, including training sessions and internships
Outputs | Offered services | Provision of swimming facilities to 35,000 paying customers | 180 participants
Outcomes | Effects | 28,000 paying customers with a satisfaction rate higher than 7.0 (on a ten-point scale) | 45 participants successfully entering the labor market
Efficiency | Cost per unit of service | 250,000/35,000 = 7.14 euros per ticket | 1.4 million/180 = 7,778 euros per participant
Effectiveness | Effect per unit of service | 28,000/35,000 = 80 % are satisfied given the 7.0 target | 45/180 = 25 % are successful

Activities of a swimming pool, the first illustration in Table 1, are standardized: during opening hours tickets have to be sold, the swimming pool has to be cleaned, and there is some need for keeping an eye on the swimmers. An opening hour is therefore an adequate proxy for swimming pool activities. The output is the number of paying customers. The outcome of a swimming pool may be its contribution to a healthier lifestyle, but this connection is not easily identifiable. A more pragmatic outcome can be the extent to which customers express their satisfaction in a questionnaire about the swimming pool. Efficiency is then a ratio of the resources over the output, which turns out to be 7.14 euros per paying customer; this figure can be compared with the tariff – for example, 5 euros – indicating that the difference between the cost and tariff level needs subsidizing by the municipality. The cost and tariff level can also be compared with those of previous years and of neighboring municipalities, in order to assess the relative attractiveness of this service. The percentage of customers with a sufficiently high satisfaction score in relation to all customers indicates the effectiveness of this municipal service.

Reintegration of unemployed persons on the labor market, the second illustration in Table 1, can comprise a set of quite diverging activities, including consultation sessions with employment specialists, contacts with local employers, training sessions with various contents (e.g., presentation in job interviews and improving job-related skills), as well as internships with potential employers. The output can be defined as the number of participants trained and advised in the program. The outcome is the success rate of the program, i.e., how many participants actually get a job within a certain period of time. Efficiency is here the cost per participant, and effectiveness the percentage of participants getting a job after a certain period. This effectiveness measure is rather ambitious, because many other factors influence whether jobless persons get a job, such as general labor market conditions and the job-related skills participants already have.
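The efficiency and effectiveness figures of Table 1 can be recomputed with the generic ratios from Fig. 1 (a sketch; the function names are ours, not the entry's):

```python
def efficiency(inputs_eur: float, outputs: float) -> float:
    """Cost per unit of service (inputs over outputs)."""
    return inputs_eur / outputs

def effectiveness(outcomes: float, outputs: float) -> float:
    """Effect per unit of service (outcomes over outputs)."""
    return outcomes / outputs

# First illustration: municipal swimming pool
pool_cost_per_ticket = efficiency(250_000, 35_000)    # about 7.14 euros per ticket
pool_satisfied_share = effectiveness(28_000, 35_000)  # 0.80, i.e., 80 % satisfied

# Second illustration: reintegration of unemployed persons
program_cost_per_participant = efficiency(1_400_000, 180)  # about 7,778 euros
program_success_share = effectiveness(45, 180)             # 0.25, i.e., 25 % successful
```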

The transformation system, as shown in Fig. 1 and illustrated in Table 1, makes it possible to typify the specific character of the public sector. While the ultimate outcome of private sector organizations is financial success, mostly profitability, in the public sector financials are mostly not goals but constraints (e.g., keeping within budgetary limits or breaking even). In addition, PSOs have to serve a clearly defined public interest, i.e., they need to accomplish societally relevant goals. That is why outcomes are the final and most important link in the chain of the transformation process in the public sector (van Helden and Reichard 2016). However, defining outcomes, and thus effectiveness, is the most complex part of the PMS. In general, the more an outcome indicator is related to societally relevant goals and to public policies, the more difficult it is to make the connection between this outcome indicator and the underlying set of activities and outputs. This is due to the influence of other factors on the outcome, which are beyond the control of the PSO. Further, outcomes are often difficult to measure, as they cannot be quantified, or only through certain proxies (e.g., a healthier lifestyle, a stronger cultural involvement). That is why outcomes are often defined quite pragmatically and closely connected to the transformation process of the public sector organization, especially through quality-of-service indicators. However, a trade-off exists between the relevance and simplicity of outcome indicators: the more relevant an outcome is, the more difficult it is to measure adequately, and easy-to-measure outcome indicators are often only crude proxies of what really matters.

One of the types of indicators derived from the transformation process is the quantity of service units, i.e., the output. In a public sector context, it is often desirable to indicate to what extent a potential target group is covered by a certain service. For example, how many poor citizens, as defined in a specific way, receive low-income support from a municipality as a percentage of the total number of poor citizens? Or what is the relative number of drug addicts in a certain region taking part in a governmental rehabilitation program? This relative reach of a target group is called equity.

A distinction is sometimes made between single and composite performance indicators. A single indicator measures one aspect of a transformation process, whereas a composite indicator combines several of those aspects. An illustration: if equity, efficiency, and effectiveness are all measured on the same scale, ranging from 1 (poor) to 10 (excellent), a composite performance indicator can be the average of the three scores. So, if equity, efficiency, and effectiveness have scores of 6, 8, and 5, respectively, the composite score is 6.3 (= (6 + 8 + 5)/3). Composite indicators can be attractive if stakeholders want a comprehensive and concise impression of the performance of a public sector organization. However, by averaging scores of different indicators, simplicity is served but some possibly important information is lost.
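The averaging described above can be sketched as follows (scores as in the example; equal weights are assumed):

```python
# Composite indicator as the unweighted mean of single-indicator scores,
# all measured on the same 1 (poor) to 10 (excellent) scale.
scores = {"equity": 6, "efficiency": 8, "effectiveness": 5}
composite = sum(scores.values()) / len(scores)  # (6 + 8 + 5) / 3
print(round(composite, 1))
```

A weighted average would be a natural refinement when stakeholders consider one aspect, say effectiveness, more important than the others.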

Illustration: Performance Indicators for Delivering Social Benefits in Groningen City

The activities of one of the divisions of the municipal organization in Groningen, the Netherlands (about 200,000 inhabitants), regard income assistance to unemployed people, i.e., people who do not have work-related income. The main set of activities is concerned with providing social benefits, which is a mandatory municipal task constrained by central government regulation, but the division is also responsible for other tasks, especially providing specific benefits, delivering financial support for social activities, and giving assistance for relieving financial debts. One specific department within the division is responsible for delivering social benefits.

In addition to indicators concerning the level of receivables and the recovery of unduly granted benefits, this department uses the following two key performance indicators:
  1. The period within which an application for a social benefit has to be decided: the target level is 8 weeks.

  2. The cost level of activities relating to decision-making about an application for a social benefit: the target for this cost level is that each employee can handle one application in about one working day.

The first indicator and its target level are determined by central government regulation. The second indicator and its target level are based on a caseload analysis of these activities: the average number of applications per week is about 100, and the number of employees responsible for interviewing applicants, checking (among other things) their financial background, and preparing a decision about an application is 19; this workforce is sufficient given the workload.
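A back-of-the-envelope version of this caseload analysis, using the figures from the illustration (the five-day working week is our assumption):

```python
applications_per_week = 100    # average inflow of benefit applications
employees = 19                 # staff handling applications
apps_per_employee_per_day = 1  # target: one application in about one working day
working_days_per_week = 5      # assumed five-day working week

weekly_capacity = employees * apps_per_employee_per_day * working_days_per_week  # 95
utilization = applications_per_week / weekly_capacity  # about 1.05: roughly sufficient
```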

The first indicator is an example of a quality indicator: applicants know in advance that they get a decision on their application within a reasonable time. The second indicator is an efficiency indicator. Efficiency is not measured as the total costs in euros over the total output (applications), but as the required number of employees handling applications over the total output in a certain period. The workforce is thus seen as the main cost factor.

In the longer term, particularly within a period of 2–3 years, Groningen wishes to accomplish a significant cost saving in its process of delivering various types of social benefits. A distinction is made between a front office (conducting contacts with clients and handling the specific elements of their applications) and a back office (responsible for processing the applications, including decisions about the handling of payments). A concentration of the back offices of 13 Dutch cities at one location with strong IT support is intended, while the front offices remain with these cities separately. This will lead to a lower cost level for the second performance indicator.

The Life Cycle of Public Sector Performance Management

Performance management (PM) in the public sector often evolves according to a certain life cycle, which comprises four stages (Fig. 2): design, implementation, operation, and use of a PMS (van Helden et al. 2012; van Helden and Reichard 2013). A PMS is embedded in its organizational and task-specific context and is expected to provide certain performance information which is used for various purposes. First, a PMS has to be "constructed" according to the specific information needs it is expected to satisfy. Thereafter, the PMS needs to be incorporated into the PSO ("implemented") and prepared for its operation. "Operation" is then the regular functioning of the PMS, i.e., the collection, generation, and supply of performance data for the specific purpose of the PMS. And "use" means the utilization of performance information by the various addressees of the PMS. The operation and use of a PMS finally result in certain effects, primarily in a (hopefully) improved performance of the organization. Design and use are the most important stages of the PMS: the former sets the course for the whole PMS; the latter is the major purpose of the whole exercise: developing performance information and using it for better decisions or for external accountability and control.
Fig. 2

The life cycle of performance management systems

Finally, a PMS has certain impacts on its organizational environment. Obviously, the main question is whether it leads to better performance. The assessment or evaluation of the PMS is concerned with learning how a PMS can be updated and revised in the light of experience. Additionally, it is relevant to assess whether a PMS contributes to a better functioning of the public sector organization, in terms of, for example, efficiency and quality of service delivery. Fig. 2 shows the four stages of the PMS and how it connects with its environment.

Below, the stages of the PMS life cycle are elaborated further.

PMS Design

In the design stage, the general structure and mechanisms of the PMS are determined, e.g., the extent to which performance information should be regularly collected. In this stage, the main questions about the content of the PMS are answered (Van Dooren et al. 2010, pp. 54–75):
  • Which PIs are to be selected, and what are the targets for the PIs?

  • What degree of detail of PIs should be achieved?

  • With which methods should certain inputs, outputs, and effects be measured?

  • In which rhythm should data be collected, analyzed, and reported?

  • Who are the main recipients of performance data and which type of data do they need?

  • How can the quality of the PMS regularly be assured?

Often the selection of PIs is driven by the goals or strategy of the organization. Design can also result in a multidimensional complex structure of a PMS, such as the balanced scorecard or quality measurement models like the concept of the European Foundation for Quality Management (EFQM) or the Common Assessment Framework (CAF).

Targeting, as part of the design stage, means that for each PI the desirable value has to be established, for example, a 7.5 on a 1–10 scale for a quality indicator or an amount of euros per customer for a cost indicator (e.g., 8 euros per swimming pool visitor). Targets can have different sources. Benchmarking among similar suppliers can inform targets, particularly by indicating that a PSO has to perform at least at an average level or at the level of the best 25 % of the benchmark. An alternative option is that targets, especially cost targets, are based on a technical analysis of the underlying processes. A pragmatic but quite crude form of targeting is requiring that performance be improved by a certain percentage, for example, 2 % each year.
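Benchmark-based targeting as described above could be sketched like this, setting a cost target at the boundary of the best 25 % of peers (the peer figures are invented for illustration):

```python
import statistics

# Hypothetical cost per swimming pool visitor among comparable municipalities.
peer_costs = [6.5, 7.1, 7.4, 7.8, 8.2, 8.9, 9.5, 10.0]

# The first quartile marks the boundary of the best (cheapest) 25 % of the benchmark.
q1, median, q3 = statistics.quantiles(peer_costs, n=4)
target_cost = q1  # a PSO aiming at the top quartile targets this cost level or lower
```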

PMS Implementation

Implementation is the stage of incorporating the PMS into a PSO (Moynihan 2008, pp. 75–94). The system has to be established in the units of an organization, IT applications have to be developed, and procedures and regulations have to be put into effect. Additional issues are piloting of performance indicators and training of employees aimed at understanding and using performance indicators.

PMS Operation

In this stage, the PMS is set into operation (Van Dooren et al. 2010, pp. 96–115). It supplies the expected performance data to the various users of such data inside and outside the respective PSO (e.g., via a periodical reporting concept).

PMS Use

What ultimately matters most is that the performance data provided by the system is used. The use can be intensive or extensive, functional or symbolic, rigid or flexible, all dependent upon certain contextual circumstances. The use of performance information can be seen as the demand side of a PMS which depends on the attractiveness and appropriateness of performance information supply (Brun and Siegel 2006) but also on various individual and contextual characteristics of the users. Who the users of performance information are largely depends on the specific purpose and function of a PMS. In many cases, managers of sector departments of a PSO will be the main users. Additionally, elected or appointed politicians may be a target group, e.g., aldermen or councilors of a municipality. Other external groups of recipients may also be relevant, e.g., supervisory authorities or courts of auditors.

The data provided by a PMS is used for different purposes. PSOs use performance information to a significant extent for external control and accountability, e.g., to inform oversight authorities, the legislature, the media, or citizens (Moynihan and Hawes 2012). Additionally, performance information is frequently used for internal planning, steering, and control, particularly in organizations following an NPM-based reform philosophy (Taylor 2011). Furthermore, a PMS is used for benchmarking exercises, e.g., among different municipalities (Ammons and Rivenbark 2008).

The use can be different according to various dimensions. Here the following types of performance information use are discussed: intensity of use, functional versus symbolic use, rigid versus flexible use, and coercive versus enabling use.

Compared with actors in private organizations, PSOs are expected to use performance information less intensively. Empirical evidence, however, indicates that the differences between the two sectors are less pronounced (van Helden and Reichard 2016). One actor group is of specific interest: politicians, e.g., members of parliament, local councilors, and aldermen. Here, the empirical picture is diverse: sometimes politicians are reluctant users (Grossi et al. 2016); in other cases they seem to be quite interested in such data (Askim 2007). These diverging results are not surprising, as the intensity of performance information use depends on a series of influencing factors, e.g., tasks, organizational structures, culture and tradition, and individual features of the actors.

Functional performance information use is identical to rational use, which implies that information is used for achieving organizational goals. An example of functional use is the search for corrective actions aimed at improving organizational performance when actual performance lags behind targeted performance. Symbolic use, by contrast, means that goals other than better organizational performance are at stake, for instance, giving an impression of modernity to certain external stakeholders. Symbolic use thus does not lead to rational actions, but it can strengthen a PSO's legitimacy (Modell 2009).

Performance information is used in a rigid way when rules about its use are strict and consistently applied. For example, rules may prescribe that performing more than 10 % above a certain target leads to a specific bonus, while performing more than 10 % below this target induces a specific sanction. If these rules are applied without consideration of contextual circumstances – e.g., was it easy or difficult to achieve the target given the economic conditions? – then performance information use is rigid. However, when the granting of a bonus or sanction is part of a dialogue between an employee and his or her boss, in which the realized performances play a role but also information about the context in which these performances were realized, then performance information is used in a flexible way (ter Bogt 2004).
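The rigid bonus/sanction rule described above can be sketched as follows (the function and the ±10 % band are illustrative; a flexible use would instead feed the deviation into a dialogue about context):

```python
def rigid_appraisal(actual: float, target: float, band: float = 0.10) -> str:
    """Rigid use: a fixed +/-10 % band around the target, applied without
    any consideration of contextual circumstances."""
    deviation = (actual - target) / target
    if deviation > band:
        return "bonus"
    if deviation < -band:
        return "sanction"
    return "no action"
```

For instance, `rigid_appraisal(115, 100)` yields a bonus and `rigid_appraisal(85, 100)` a sanction, regardless of whether economic conditions made the target easy or hard to achieve.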

Internal use of performance information mostly aims to stimulate employees to act in such a way that organizational objectives are accomplished. This type of performance information use is embedded in a command-and-control device, which can be denoted as coercive use. However, performance information can also facilitate lower-level managers and employees in doing their jobs in a better way. This is called an enabling use of performance information (Wouters and Wilderom 2008).

A simple recipe for the most desirable type of performance information use is impossible, but some suggestions can be given. The intensity of use often depends on two factors: the regular frequency of reporting, sometimes prescribed by regulations, and whether the organization is confronted with certain problems – problems often induce a more intensive information use. In general, functional use is preferred over symbolic use, and functional performance information use benefits from the involvement of decision-makers in the design of the PMS; the more it becomes their own information system, the more they will be inclined to use it according to its purposes. This is also a plea for co-creation and thus for an enabling use of performance information. However, coercive use remains important. This type of use may focus on some key PIs for which minimum target levels apply, such as budgetary constraints and minimum service standards, while more detailed and process-related indicators can be used in an enabling way. This also relates to the distinction between flexible and rigid use. In general, flexible use is to be preferred, but a rigid use of key overall PIs can often not be avoided (Van Dooren and Van de Walle 2008).

Illustration: The Use of Performance Information in the Income Assistance Division in Groningen City

How is performance information used within the Income Assistance Division of Groningen city (see also the first illustration above)?

Two interrelated planning and control cycles are distinguished. The first is the cycle of the division as a whole, connected with the executive and more particularly the member of the executive for social affairs. The second cycle operates within the division, between the manager in chief and his department heads.

The first cycle starts with a divisional annual budget, and in the course of the year two or three interim reports are prepared. The budget and reports contain financial information, especially planned resources for the different types of divisional tasks, complemented with information about various types of performance indicators, mainly output and outcome indicators. A dashboard system is used in the interim reports for signaling actions as a consequence of performances seriously lagging behind targets (a yellow signal is then applied, while a green signal indicates unproblematic execution of activities).
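The dashboard signaling in the interim reports could be sketched as follows (the 10 % tolerance is our assumption; the entry only mentions performances "seriously lagging behind targets"):

```python
def dashboard_signal(actual: float, target: float, tolerance: float = 0.10) -> str:
    """Yellow when performance lags seriously behind target, green otherwise."""
    if actual < target * (1 - tolerance):
        return "yellow"  # triggers a discussion of corrective actions
    return "green"       # unproblematic execution of activities
```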

The second cycle is characterized by a larger extent of detail and a higher reporting frequency. In this cycle two types of consultation are distinguished. One is a consultation every 2 weeks among the department managers and the staff members for finance and HRM. The other takes place between the manager in chief and his department heads, mostly in a monthly management team meeting and bilaterally. In both types of consultation, monitoring of budget execution is the main issue, and both operate as mutual checks and balances.

If during one of the above consultation meetings a substantial gap between planned and actual performances is registered – for example, a higher number of employees needed for handling a given number of applications than targeted – corrective actions are discussed. Such discussions start with a consultation between the manager in chief and the responsible department manager, but – if desired – they can be extended to a team-wise analysis of the problem at hand.

Managing a division is not limited to performance management. The manager in chief stresses the importance of other issues, especially how IT applications can improve processes, which measures are appropriate for making processes "lean and mean," and how employees can become better equipped for doing multiple tasks.

The illustration about Groningen city above highlights several theoretical constructs about performance information use. Information about performance indicators is used in a flexible way: in case of negative variances between actual and planned performances, responsible managers are invited to explain possible reasons, and the organization helps in reflecting on corrective actions. In addition, managing by numbers is seen as important, but proper actions – making processes more efficient – are seen as more important than meeting the targets by "ticking the boxes."

PMS Impact on Performance

Obviously, the ultimate proof of the functioning of a PMS is whether organizational performance improves. The performance of a PSO depends, however, on a number of variables: as explained, on the inputs and activities of the transformation processes, but also on various task-specific and contextual variables. Isolating the influence of the functioning of an existing PMS, and particularly of the use of performance information, is therefore not easy. Research on this issue presents inconsistent findings. On the one hand, several studies confirm that the implementation and operation of a PMS have a positive impact on performance (e.g., Boyne and Chen 2007; Meier and O’Toole 2009), particularly if performance information is actively used (Kroll 2015). Other studies either find no clear link between PI use and performance or even signal a negative effect on performance. Hood (2012) is particularly critical about possible distortions of outputs and outcomes if PM relies in a one-sided manner on quantitative targets and rankings.

An illustration of the effects of performance information use on organizational performance can be derived from a study about German museums (Kroll 2015). A survey based on interviews with chief administration officers of public museums in Germany in 2011 showed the following evidence: the use of an existing PMS for managerial purposes alone does not have an impact on the performance of museums. A positive effect can only be observed if performance information is used in a situation where the museum follows a prospective strategy, i.e., where the museum aims to develop better services for its customers. Furthermore, the survey revealed that poorly performing museums are more tightly supervised by their oversight bodies than well-performing museums. Oversight bodies of poorly performing museums use the reported performance information more intensively than those of well-performing museums.

Application of Performance Management in PSOs

As discussed above, performance information can be applied in a PSO both for internal purposes (e.g., planning and control) and for external accountability purposes. It is becoming increasingly important in budgeting reforms, as most of them tend to include performance information in budgets. Another application concerns contracts or service agreements drawn up between a PSO and external providers or, within a PSO, between a provider and a recipient of a service. Obviously, performance information also plays a dominant role in the evaluation of policy programs, where intended and unintended effects of policy interventions are to be measured. Finally, performance information is relevant in human resource management, particularly for the assessment of employees’ performance and for performance-related pay (where established).

A PMS is not a single "machinery" installed somewhere in a PSO to provide performance information wherever and whenever needed. Rather, it reflects a management style that emphasizes the use of performance information with the ultimate aim of improving organizational performance and that results in the establishment of several PMSs in the abovementioned application fields. Public sector PM is therefore interrelated with several other management functions (see, e.g., Schedler and Proeller 2010):
  • With the various concepts and tools of financial management (simply because financial performance is part of overall performance)

  • With strategic planning and management

  • With operative management, e.g., steering and controlling of public service provision

  • With human resource management

  • With quality assurance and policy evaluation

Conclusions and Reflections

PM is in place in the public sector of many countries. Governments at all levels, but predominantly at the local level, have implemented PMSs and have been operating them for several years. Thus, performance data is frequently available in PSOs, both for decision-making and for external accountability. There is some evidence that the provided data is used and leads to some performance improvements (see above).

This development evolved in a rather long process. Following Bouckaert and Halligan (2008, pp. 69–127), at least three stages of development can be distinguished (the authors additionally mention a fourth stage, performance governance):
  • Performance administration (a rather bureaucratic procedure of data collection and reporting)

  • Managements of performances, which are disconnected from each other and do not follow a common logic

  • Performance management, which can be seen as a coherent and consistent integrated system for a whole PSO

PM is not only a practical concept; it has also become an issue in academic teaching programs and professional training. Furthermore, for about 20 years, PM has been an increasingly attractive research topic, in both the private and the public sector (van Helden and Reichard 2016). Research themes have to some extent followed the life cycle of PM: at first, researchers dealt with the design and implementation of a PMS, e.g., by studying how performance indicators mirror certain objectives; more recently, many researchers have addressed the issue of performance information use (see Section “The Lifecycle of Public Sector Performance Management”). Research on the effects of a PMS on organizational performance is still at an early stage; so far, most research has concentrated on the functioning of the PMS itself.

The general advantage for a PSO of having an implemented PMS that is also in use is the continuous availability of performance information for the various purposes. Decisions can be based on quantitative figures derived from measuring PIs related to the different stages of the transformation process of a PSO. This certainly distinguishes the present situation from previous times without such systematic data provision.

However, the potential benefits of performance measurement and management are restricted by some critical issues. First, we are often confronted with severe measurement problems in the public sector. Measuring outputs, and particularly outcomes, is in many cases almost impossible. Approximation procedures, e.g., concentrating on a more easily measurable indicator, are often invalid. Furthermore, the attribution of certain effects to an output is in many cases problematic, as there are no clear cause-effect relationships. In addition, some effects occur with substantial delays relative to the actions concerned, and various contextual and other external variables influence certain performance figures. If it is not possible to establish appropriate and fair measurement practices within a PMS, the whole system may get into trouble. This is often the case with concepts of performance-related pay in PSOs, where employees perceive the measurement methods and criteria as unfair and show decreasing job satisfaction.

The logic of PM may result in a fixation of decision-makers on measurable aspects of performance, in line with the motto "you can only manage what you can measure." Such a management attitude may have the unintended consequence that managers ignore nonmeasurable albeit relevant issues ("what you cannot measure does not matter"). This may lead to one-sided or wrong decisions as well as to negative side effects. When employees are controlled very tightly on one or only a few indicators, they may be inclined to disregard unmeasured but still relevant aspects of their work (de Bruijn 2007). Suppose, for example, that a PSO uses the cost of cleaning a square meter of office floor space as a key performance indicator for internal cleaning services and that the target level is 4.50 euros per square meter per month. Some guidelines are issued about the intensity of cleaning activities in various compartments of the office, but what ultimately counts most is remaining under the target level. If bonuses are connected to achieving this target, the cost PI attracts even more attention. However, in addition to an efficient use of resources, other aspects are important in providing cleaning services, such as achieving a certain quality level and cleaning intensity, dependent upon the various types of office space. If these service aspects are only described in general terms, whereas the cost target is expressed quantitatively in monetary terms and is also crucial in the control, the service aspects will be ignored to some extent or will at least receive less attention.
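The arithmetic behind the cleaning example above is trivially simple, which is exactly what makes the cost PI so dominant in the control. A minimal sketch, in which the 4.50 euro target comes from the text but the monthly cost and floor area figures are invented for illustration:

```python
# Illustrative computation of the cleaning-cost indicator from the example.
# The target of 4.50 euros per square meter per month is taken from the text;
# the cost and floor-area figures below are hypothetical.

TARGET_COST_PER_M2 = 4.50  # euros per square meter per month

def cost_per_m2(monthly_cost, floor_area_m2):
    """Cost PI: total monthly cleaning cost divided by floor area."""
    return monthly_cost / floor_area_m2

monthly_cost = 10_320.0  # euros per month (hypothetical)
floor_area = 2_400.0     # square meters (hypothetical)

pi = cost_per_m2(monthly_cost, floor_area)
print(f"cost PI: {pi:.2f} euros/m2, target met: {pi <= TARGET_COST_PER_M2}")
```

Note that the computation says nothing about cleaning quality or intensity: the qualitative service aspects discussed above simply do not appear in the formula, which is the structural reason they tend to be crowded out.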

How can these negative side effects of a PMS be avoided? First, the PMS design should cover all important aspects of an organizational unit, preferably specified in similar, for example quantitative, terms. Second, using performance information in a flexible rather than a tight way is desirable. This implies that information about differences between targeted and actual performance is on the agenda of consultations between a manager and his/her employees, but that these consultations also give room for explaining these differences, including attention to unmeasured aspects of service delivery, such as being helpful to colleagues and responding to unexpected demands for particular services. Third, connecting severe control consequences, in the form of sanctions and rewards, to underperformance or overperformance should be avoided.

In a nutshell, PM certainly has several advantages: it increases the transparency of an organization’s processes and results, it provides important data for decision-making as well as for rendering accountability, and it promotes learning and contributes to improved services. On the other hand, PM may lead to various perverse effects (de Bruijn 2007, pp. 17–33): it may stimulate a tunnel vision (see the above illustration), and it tends to hinder innovation and increase bureaucracy, because it encourages managers and employees to focus on the PIs in the PMS while disregarding other, less measurable aspects of a properly functioning PSO. Evidently, there is a tension between the dangers and the advantages of tight quantitative performance controls. For control to be effective in the sense of stimulating desirable behavior, PIs must be related to goals and give clear guidance for certain types of actions. But too tight a use of a limited number of PIs also brings about negative side effects. A balanced design and use of a PMS is therefore best, but finding this balance is a delicate practice.

References

  1. Ammons DN, Rivenbark WC (2008) Factors influencing the use of performance data to improve municipal services: evidence from the North Carolina benchmarking project. Public Adm Rev 68(2):304–318
  2. Askim J (2007) How do politicians use performance information? An analysis of the Norwegian local government experience. Int Rev Adm Sci 73(3):453–472
  3. Bouckaert G, Halligan J (2008) Managing performance: international comparisons. Routledge, London/New York
  4. Boyne GA, Chen AA (2007) Performance targets and public service improvement. J Public Adm Res Theory 17(3):455–477
  5. Brun ME, Siegel JP (2006) What does appropriate performance reporting for political decision makers require? Empirical evidence from Switzerland. Int J Product Perform Manag 55(6):480–497
  6. de Bruijn H (2007) Managing performance in the public sector, 2nd edn. Routledge, London/New York
  7. Grossi G, Reichard C, Ruggiero P (2016) Appropriateness and use of performance information in the budgeting process: some experiences from German and Italian municipalities. Public Perform Manag Rev 39:581–606
  8. Hood C (1995) The ‘new public management’ in the 1980s: variations on a theme. AOS 20(2/3):93–109
  9. Hood C (2012) Public management by numbers as a performance-enhancing drug: two hypotheses. Public Adm Rev 72(S1):585–592
  10. Jackson PM (2011) Governance by numbers: what have we learned over the past 30 years? Public Money Manag 31(1):13–26
  11. Johnsen A (2005) What does 25 years of experience tell us about the state of performance measurement in public policy and management? Public Money Manag 25(1):9–17
  12. Kroll A (2015) Drivers of performance information use: systematic literature review and directions for future research. Public Perform Manag Rev 38(3):459–486
  13. Meier KJ, O’Toole LJ (2009) The proverbs of new public management: lessons from an evidence-based research agenda. Am Rev Public Adm 39(1):4–22
  14. Modell S (2009) Institutional research on performance measurement and management in the public sector accounting literature: a review and assessment. FAM 25(3):277–303
  15. Moynihan DP (2008) The dynamics of performance management. Georgetown University Press, Washington, DC
  16. Moynihan DP, Hawes DP (2012) Responsiveness to reform values: the influence of the environment on performance information use. Public Adm Rev 72(S1):95–105
  17. Schedler K, Proeller I (2010) Outcome-oriented public management. IAP, Charlotte
  18. Taylor J (2011) Factors influencing the use of performance information for decision making in Australian state agencies. Public Adm 89(4):1316–1334
  19. ter Bogt HJ (2004) Politicians in search of performance information? Survey research on Dutch aldermen’s use of performance information. FAM 20(3):221–252
  20. Van Dooren W, Bouckaert G, Halligan J (2010) Performance management in the public sector. Routledge, London/New York
  21. Van Dooren W, Van de Walle S (eds) (2008) Performance information in the public sector: how it is used. Palgrave, Houndmills/New York
  22. van Helden J, Reichard C (2016) Commonalities and differences in public and private sector performance management practices: a literature review. In: Epstein M, Verbeeten F, Widener S (eds) Performance management and management control: contemporary issues. Studies in managerial and financial accounting, vol 31. Emerald, Bingley, pp 309–351
  23. van Helden J, Reichard C (2013) A meta-review of public sector performance management research. Tékhne Rev Appl Manag Stud 11:10–20
  24. van Helden J, Johnsen A, Vakkuri J (2012) The life-cycle approach of performance management research: implications for public management and evaluation. Evaluation 18(2):159–175
  25. Wouters M, Wilderom C (2008) Developing performance-measurement systems as enabling formalization: a longitudinal field study of a logistics department. AOS 33(4):488–516

Useful Textbooks for Further Information About Public Performance Management

  1. Bouckaert G, Halligan J (2008) Managing performance: international comparisons. Routledge, London/New York
  2. de Bruijn H (2007) Managing performance in the public sector, 2nd edn. Routledge, London/New York
  3. Moynihan D (2008) The dynamics of performance management. Georgetown University Press, Washington, DC
  4. Van Dooren W, Bouckaert G, Halligan J (2010) Performance management in the public sector. Routledge, London/New York

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. University of Groningen, Groningen, The Netherlands
  2. University of Potsdam, Potsdam, Germany