Three Hundred Sixty-Degree Assessment

  • Chaitra M. Hardison
Living reference work entry
DOI: https://doi.org/10.1007/978-3-319-31816-5_2744-1

Keywords

Performance Appraisal · Direct Supervisor · Performance Appraisal System · Multisource Feedback · Developmental Purpose

Synonyms

Multi-rater feedback; Multisource feedback

Introduction

Put simply, 360-degree feedback is employee performance feedback collected from multiple sources, such as superiors, subordinates, peers, and customers. It serves as one way for employers to provide employees with a better understanding of how they are performing on the job. The basic premise behind 360-degree feedback is that by soliciting feedback from multiple sources, an employee receives a more complete picture of their performance. This assumes that direct supervisors are not able to view all aspects of an employee’s performance and that others who regularly interact with the employee can offer unique and valuable insights. For supervisory, management, or leadership positions, where interpersonal interactions with peers and subordinates are a central component, the views from below, for example, can be particularly relevant.

Three hundred sixty-degree assessments, having grown in popularity over the last few decades, are now in use in many organizations in one form or another. Matching that rise in popularity, there is now no shortage of 360-degree assessment tools being marketed to organizations, but not all are worth the investment. Although there is no single appropriate form for a 360-degree assessment to take, researchers have identified a series of important design considerations when developing 360-degree assessments. Careful attention to those design features is vital for ensuring the feedback is sound and effective. The following sections describe several of those design features. They also provide greater detail on typical features of 360-degree assessments, discuss some of the controversy surrounding the use of 360-degree assessments, and explore why 360-degree assessments are not always a worthwhile investment.

Defining 360-Degree Feedback

A defining feature of 360-degree feedback is that the feedback has been solicited from multiple points of view, not just the employee’s direct supervisor, as is typical of more traditional employee performance assessments. Although the term 360-degree feedback is often used interchangeably with multisource (or multi-rater) feedback, some (e.g., Foster and Law 2006) have argued that 360-degree feedback should be viewed as merely one type of multisource feedback. Under this stricter definition, multisource feedback that includes fewer sources (such as just peers and supervisors) or alternative sources (such as customers or clients) would not technically constitute 360-degree feedback. That is, 360-degree feedback would be limited to only feedback that is solicited from self, peers, superiors, and subordinates, as illustrated in Fig. 1. Nevertheless, many organizations and researchers continue to use the term 360-degree feedback to refer to all multisource feedback tools even when it strays from this strict definition.
Fig. 1 Traditional sources of 360-degree feedback

Design Considerations

There is no ideal 360-degree feedback design. Nevertheless, there are features of the design process that clearly distinguish well-designed 360-degree assessments from poorly designed ones. For example, well-designed 360-degree assessments are built with careful attention to the following:
  • Linking assessment content to a job analysis. The dimensions included in the 360-degree assessment should be determined, at least in part, by the dimensions required on the job. This is especially relevant when the feedback will be used for personnel actions.

  • Aligning it with the organization’s goals and values. For example, if the organization aims to use the tool to develop its leaders, the design features selected should support that use (e.g., keeping ratings anonymous, keeping feedback confidential, offering one-on-one coaching). Additionally, if the organization has a clearly specified set of values it expects leaders to follow, including items that evaluate a leader’s adherence to those values would make sense.

  • Establishing buy-in from leadership, raters, and ratees. If the culture and climate in an organization are not supportive of a 360-degree assessment, it will likely not prove useful.

  • Designing valid and reliable survey items. Psychometric measurement properties of the assessment should be considered and explored during the development process (a minimal reliability check is sketched after this list).

  • Pilot testing the instrument. A pilot test can help improve the tool and identify potential problems before it is officially launched. For example, it can be used to explore how much time raters spend completing it, user reactions to the tool, poorly worded or confusing items, gaps in content, or dimensions that are not applicable to the job in question.

  • Measuring effectiveness. As discussed further below, not all 360-degree assessments are successful at improving behavior. Including measures of the impact of 360-degree feedback on performance can be critical to determining whether the tool is a success.
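
To make the psychometric check above concrete, the following is a minimal sketch, in Python, of one widely used index of internal consistency, Cronbach’s alpha, computed from hypothetical pilot-test ratings of the items written for a single performance dimension. The data, the function name, and the rule of thumb in the comments are illustrative assumptions, not material from this entry.

import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha; rows are respondents, columns are items for one dimension."""
    n_items = item_scores.shape[1]
    sum_item_variances = item_scores.var(axis=0, ddof=1).sum()
    total_variance = item_scores.sum(axis=1).var(ddof=1)
    return (n_items / (n_items - 1)) * (1 - sum_item_variances / total_variance)

# Hypothetical pilot data: 6 raters x 4 items, each rated on a 1-5 scale.
pilot = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 2],
    [4, 4, 5, 4],
    [3, 4, 3, 3],
])

# Values above roughly 0.7-0.8 are conventionally read as acceptable.
print(f"Cronbach's alpha: {cronbach_alpha(pilot):.2f}")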

There are also numerous design decisions that need to be made along the way. In many cases, some approaches or design features are more strongly supported by experts than others, but the best choice will also often depend on the organization’s goals for the 360-degree assessment. The following are just a few examples of the types of design decisions that need to be made:
  • Purpose (developmental vs. evaluative). An organization should decide in advance whether the assessment is going to be used for development, personnel actions, or both. Experts are generally more likely to recommend using 360-degree assessments for development than for evaluation. However, it is possible to use 360-degree assessments for evaluative purposes. See below for more discussion on this.

  • Performance dimensions to be assessed. The dimensions of performance to be assessed in the 360-degree assessment should be clearly defined and items should be designed to tap those dimensions. The dimensions identified should be job relevant (i.e., they should apply to the job in question), observable by others, and (as noted above) tied to the organization’s goals for the assessment.

  • Number of items. The number of items should depend on the number of performance dimensions to be evaluated as well as the total time to complete the rating tool. There should be enough items such that each dimension is measured adequately, but the tool should not be so long that raters are unwilling to complete it thoughtfully.

  • Rater selection. Raters are commonly selected at random, nominated by an employee’s supervisor, or nominated by the ratee. When the results are used in personnel actions, random selection or selection by a supervisor can help ensure the fairness of the ratings. For developmental 360-degree assessments, however, allowing the supervisor or ratee to nominate raters can be useful, especially if they choose people who they believe will offer constructive or informative responses. Nonrandom selection does, however, make comparisons of average ratings across ratees less informative, as each ratee or supervisor may use different criteria to select the raters. Regardless, raters who have had sufficient opportunity to observe the individual’s performance in the domains of interest should be targeted.

  • Minimum number of raters. Three hundred sixty-degree assessments often require a minimum number of subordinate or peer raters before results will be disclosed to the ratee (a minimal sketch of such a rule follows this list). This serves two purposes. First, if anonymity is being promised to raters, then establishing a minimum number of raters can help protect that anonymity. Second, raters – even within the same rater type (e.g., multiple subordinates or multiple peers) – often disagree about a ratee’s performance. This can occur for any number of reasons (not having had an adequate chance to observe the ratee’s performance, observing different aspects of that performance, interpreting the rating scales differently, etc.). As a result, ratings from multiple raters are typically collected and averaged to achieve a more reliable and ultimately more valid impression of someone’s performance. Although establishing a minimum is recommended, there is no clear consensus on what that minimum number of raters should be.

  • Coaching. The goal of coaching in a 360-degree assessment is to help the employee use the feedback to create lasting improvements in their performance. Experts agree that one of the keys to ensuring that 360-degree assessments are effective at improving performance is to include the use of trained coaches, regardless of who provides the coaching. Some 360-degree feedback efforts offer one-on-one coaching with someone external to the organization, some provide in-house trained coaches, and some rely on supervisors to serve as the coaches.
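
The minimum-rater rule mentioned above can be made concrete with a short sketch. The following Python fragment, with assumed source names and an assumed threshold of three (the entry notes there is no consensus value), averages each rater group’s scores and withholds results from anonymous groups that fall below the minimum.

from statistics import mean

MIN_RATERS = 3  # assumed threshold; there is no clear consensus on this number
IDENTIFIED_SOURCES = {"self", "supervisor"}  # not anonymous, so no minimum applies

def aggregate(ratings_by_source):
    """Average each source's ratings; withhold small anonymous groups."""
    report = {}
    for source, ratings in ratings_by_source.items():
        if source in IDENTIFIED_SOURCES or len(ratings) >= MIN_RATERS:
            report[source] = f"{mean(ratings):.2f} (n={len(ratings)})"
        else:
            report[source] = "withheld to protect rater anonymity"
    return report

# Hypothetical ratings on a single dimension, grouped by source.
ratings = {
    "self": [4.0],
    "supervisor": [3.5],
    "peers": [4.0, 3.0, 4.5],    # meets the minimum, so the average is shown
    "subordinates": [2.5, 3.0],  # below the minimum, so the result is withheld
}
print(aggregate(ratings))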

Using 360-Degree Feedback for Development Versus Evaluation

Three hundred sixty-degree feedback was originally designed for use as a development tool; increasingly, however, organizations are using it for personnel actions (such as pay raises, promotion decisions, and annual performance reviews) as well. This runs counter to the recommendations of experts, who typically advise strongly against using 360-degree feedback for personnel actions and instead recommend that it continue to be used solely for developmental purposes.

The primary reason that experts advise against using 360-degree assessments for personnel actions is that it can undermine people’s willingness to provide honest feedback. If raters do not feel comfortable providing honest feedback about an employee, then the feedback may end up being useless. When performance ratings are provided from the top down, as in traditional performance appraisal systems, there is far less concern that honesty will be impacted. However, when peers and subordinates are asked to provide ratings they know will lead to high-stakes personnel actions (such as changes to the person’s pay or potential for promotion), they may be motivated to alter their ratings to varying degrees.

In some cases, the resulting ratings may be too lenient, and in other cases, they may be too harsh. For example, peers might inflate ratings of those they consider friends; on the other hand, they might rate employees that they view as competition more negatively. Ratings by subordinates face similar issues. For example, they may provide higher ratings out of fear of retaliation or to earn a supervisor’s favor. Alternatively, they may provide excessively harsh ratings to retaliate for a supervisor’s actions against them. Given these ulterior motives, when ratings from these sources are intended for use in high-stakes decisions, they are questionable at best.

In spite of these expert cautions, however, many organizations do use 360-degree feedback for more than just development. For example, a 2013 survey of organizations that use 360-degree assessments (see 3D Group 2013) suggests that while very few use them exclusively for personnel actions, about a third use them for both development and personnel actions. Such mixed use can be even more problematic than using 360-degree feedback for personnel actions alone.

For example, it can be especially problematic when an organization starts out offering 360-degree assessments solely for development but changes the purpose later. Employees often grow to like 360-degree assessments for development when they trust that the results will be used solely for that purpose. If, however, employees believe that a 360-degree feedback tool is being used for developmental purposes, only to discover later that the company intends to use it for more than that, they can feel that their trust has been violated. Once trust in the process is lost, raters may be less likely to provide honest ratings, or to provide ratings at all. Such loss of trust is one way that 360-degree assessments lose their value over time, and it can be very difficult to regain, even if the organization decides to return to a development-only model.

Arguments Against Using 360-Degree Feedback

Although their use is widespread, there are several legitimate reasons that an organization may opt not to pursue a 360-degree assessment. First, as noted previously, there are strong criticisms of the use of 360-degree feedback for personnel actions, some of which center on the fact that rater honesty can be compromised. As discussed above, when results of 360-degree assessments are high stakes (e.g., when they factor into annual performance reviews or are used for raises, bonuses, or promotions), raters may manipulate their ratings of others out of fear of retaliation or for personal gain. Supervisors can also attempt to even the score with subordinates who provide negative ratings in a variety of ways – demoting them, transferring them, giving them less desirable assignments, or even firing them.

Keeping ratings anonymous can help mitigate these issues; however, it cannot eliminate them. For example, some may still fear that their ratings will be identifiable by inference, even if rater identities are not disclosed, and toxic leaders may still take punitive action against suspected subordinates, regardless of whether or not they did, in fact, provide negative ratings. In addition, when ratings are anonymous, it can be difficult for an employer to successfully defend a personnel action, in the event that it is challenged.

But even for development, the value of 360-degree assessments is not always guaranteed. As a result, experts caution that they need to be designed and used thoughtfully. The biggest issue experts point to when cautioning against overuse of 360-degree assessments, or the use of poorly designed ones, is cost.

Using off-the-shelf tools can help defray some of those costs; however, they may not be aligned with an organization’s unique culture or goals, which could reduce the relevance and usefulness of the tool and undermine buy-in from participants. As a result, many organizations choose to build their own. But creating the tool – building the survey interface, deciding how to display the results, etc. – can be expensive.

There are also administration costs. Soliciting rater input, providing assistance with the online interface, monitoring the rating timeline, establishing policies to protect anonymity, training raters, fielding questions – all of these are tasks that must be managed as part of a 360-degree feedback effort.

Another cost consideration is coaching. One-on-one coaching for a few employees could be a reasonable expense; however, as the number of employees receiving 360-degree feedback increases, the cost of coaching increases as well. For that reason, many organizations forgo one-on-one coaching. Unfortunately, without such coaching, 360-degree feedback may be ineffective at encouraging behavior change, making it difficult to justify the other, potentially very large, costs associated with the tool.

But by far the largest cost incurred in administering 360-degree assessments is the time spent by the raters. Completing a 360-degree feedback form can be time intensive, particularly when it solicits written comments in addition to numerical ratings and when raters take the time to respond thoughtfully. Add to that the fact that multiple peers, supervisors, and subordinates are asked to contribute to each person’s 360-degree assessment, and the number of person-hours spent on ratings can add up quickly.

Effectiveness

Although many organizations have adopted 360-degree assessments, research suggests they may not always be effective at changing employee behavior (Bracken and Rose 2011). While experts recommend that organizations measure the effectiveness of their 360-degree feedback efforts, most organizations do not follow through on that recommendation. That is, 360-degree assessments are often designed and implemented without a plan for evaluating their success, so data are not typically available to explore this issue. Instead, most organizations focus their evaluation efforts exclusively on whether users (raters and/or ratees) are satisfied with the process. Research does show that employees typically respond positively to 360-degree assessments that are well designed, aligned with the organization’s goals, and administered for developmental purposes. They tend to respond more negatively when the assessments are used for personnel actions or are excessively time-consuming and burdensome for raters to complete. A simple pre/post comparison of ratings, as sketched below, is one way an organization could begin evaluating whether behavior actually changes.
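
As an illustration of what such an evaluation could look like, the following is a minimal sketch, using assumed data, of a paired pre/post comparison: each ratee’s average rating before a feedback-and-coaching cycle is compared with the same ratee’s average rating afterward. A rigorous evaluation would require a stronger design (e.g., a comparison group), which this fragment does not attempt.

from statistics import mean, stdev
from math import sqrt

# Hypothetical overall ratings for the same eight ratees, before and after feedback.
before = [3.1, 3.4, 2.9, 3.6, 3.2, 3.0, 3.5, 2.8]
after = [3.4, 3.6, 3.1, 3.6, 3.5, 3.3, 3.7, 3.0]

diffs = [a - b for a, b in zip(after, before)]
t = mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))  # paired t statistic, df = n - 1
print(f"mean change = {mean(diffs):.2f}, t({len(diffs) - 1}) = {t:.2f}")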

Conclusion

Three hundred sixty-degree assessment can be a useful form of employee developmental feedback; however, its effectiveness can hinge on how carefully the tool is designed and how well it is tested. The sections above included examples of several key design features to attend to and decisions to be made about them. Given that such design features can ultimately make or break a 360-degree feedback effort, involving experts – especially those knowledgeable about psychological measurement principles (i.e., validity and reliability) – in the design process is important for ensuring success. Implementation of 360-degree assessments for personnel actions is generally not recommended, at least not until an organization has thoroughly considered the consequences and challenges that come with such an implementation.

Given the expense associated with a well-designed instrument, organizations that do decide to implement 360-degree assessments should use them sparingly, especially if the effectiveness of the tool has not yet been demonstrated. This issue of cost is particularly relevant in public administration contexts, where taxpayer dollars are at stake and good stewardship of resources is expected. On the other hand, well-designed 360-degree feedback tools could, in fact, provide unique and valuable insights for developing our public servants. As such, the potential benefits of 360-degree feedback in public administration contexts should not be dismissed simply because it can be expensive. Instead, the cost of a 360-degree assessment should be one of many factors weighed in the decision to proceed.

References

  1. 3D Group (2013) Current practices in 360-degree feedback: a benchmark study of North American companies. 3D Group, Emeryville
  2. Bracken DW, Rose DS (2011) When does 360-degree feedback create behavior change? And how would we know it when it does? J Bus Psychol 26(2):183–192
  3. Bracken DW, Timmreck CW, Church AH (eds) (2001) The handbook of multisource feedback. Jossey-Bass, San Francisco
  4. Foster CA, Law MR (2006) How many perspectives provide a compass? Differentiating 360-degree and multi-source feedback. Int J Sel Assess 14(3):288–291

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Pardee RAND Graduate School, RAND Corporation, Santa Monica, USA