Risk perceptions are beliefs about potential harm or the possibility of a loss: subjective judgments that people make about the characteristics and severity of a risk.
The degree of risk associated with a given behavior is generally taken to reflect both the likelihood of harmful effects and, should they occur, their consequences. Perceiving risk therefore involves evaluating both the probability and the consequences of an uncertain outcome. Perceived risk has three dimensions: perceived likelihood (the probability that one will be harmed by the hazard), perceived susceptibility (an individual’s constitutional vulnerability to a hazard), and perceived severity (the extent of harm the hazard would cause). Risk perceptions are central to many health behavior theories. Models developed specifically to predict health behavior, such as the health belief model (Rosenstock 1966), protection motivation theory (Rogers 1975), and the self-regulation model (Leventhal et al. 1980), all contain constructs that explicitly focus on risk perceptions. Other models, such as the theory of reasoned action (Fishbein and Ajzen 1975), the theory of planned behavior (Ajzen 1985), and social cognitive theory (Bandura 1977), include perceptions of risk indirectly via other constructs.
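The probability-times-consequences structure described above can be sketched in a few lines. This is an illustrative sketch only, not a construct from any of the cited models: the 1–5 rating scale, the multiplicative combination rule, and the decision to set susceptibility aside are all assumptions made for the example.

```python
# Hypothetical sketch: combine two risk-perception ratings into a single
# score. The 1-5 scale and the multiplicative rule are assumptions for
# illustration only; they are not taken from the models cited above.

def perceived_risk_score(likelihood: int, severity: int) -> int:
    """Perceived likelihood x perceived severity, each rated 1 (low) to 5 (high)."""
    for rating in (likelihood, severity):
        if not 1 <= rating <= 5:
            raise ValueError("ratings must be on the 1-5 scale")
    return likelihood * severity

# A hazard seen as unlikely but very severe scores lower than one seen
# as fairly likely and moderately severe.
print(perceived_risk_score(1, 5))  # -> 5
print(perceived_risk_score(4, 3))  # -> 12
```

A multiplicative rule is only one way to combine the dimensions; additive or weighted combinations appear in the literature as well, and actual instruments measure each dimension with multi-item scales.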
Biases and Heuristics in Risk Assessment
Estimating risk is a complex process that depends on the context in which the risk information is presented, on how the risk is described, and on personal and cultural characteristics. Tversky and Kahneman (1973) proposed that when faced with the difficult task of judging risk, people use a limited number of simplifying strategies called heuristics. These heuristics are useful mental shortcuts for most of us most of the time, but in some situations they produce systematically inaccurate judgments, at which point they become cognitive biases. Three heuristics in particular can bias risk perceptions: the availability heuristic, the representativeness heuristic, and the anchoring and adjustment heuristic.
The availability heuristic leads people to predict the frequency or likelihood of an event, or a proportion within a population, based on how easily an example can be brought to mind (Tversky and Kahneman 1973). We make decisions based on the knowledge that is readily available in our minds rather than by examining all the alternatives. Most of the time our brains apply this heuristic without our realizing it, and in many cases the resulting judgments are accurate. As with any shortcut, however, it can lead us astray: some events are easier to recall than others not because they are more common, but because they stand out in our minds.
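The mechanism can be made concrete with a toy model in which judged frequency tracks ease of recall rather than actual frequency. All of the counts and the "memorability" weights below are invented for illustration; the model itself is an assumption, not a published formalization.

```python
# Hypothetical sketch of the availability heuristic: judged frequency is
# modeled as actual occurrences weighted by how memorable each instance
# is, so vivid but rare events are inflated. All numbers are invented.

def judged_frequency(occurrences: int, memorability: float) -> float:
    """Ease of recall (occurrences x memorability) stands in for judged frequency."""
    return occurrences * memorability

# A common but rarely reported hazard vs. a rare but highly memorable one.
common_quiet = judged_frequency(occurrences=1000, memorability=0.01)  # -> 10.0
rare_vivid = judged_frequency(occurrences=2, memorability=20.0)       # -> 40.0

# The rare, vivid hazard is judged more frequent than the common, quiet
# one, even though it is 500 times less common in this toy example.
print(rare_vivid > common_quiet)  # -> True
```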
The representativeness heuristic leads individuals to assess the likelihood of an event based on how closely it resembles a familiar prototype or a previous similar event (Gilovich et al. 2002). Relying on past experience can be beneficial and allows quick conclusions to be reached, but the cost of speed is often accuracy: the fact that a mental representation exists in memory and can be matched to a new situation has no bearing on how likely that representation is to occur in reality.
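One reason resemblance misleads is that it ignores base rates. A short Bayes' rule calculation shows how a description that strongly resembles a rare category can still be less probable than a weak match to a common one. The scenario and every number below are invented for illustration.

```python
# Hypothetical two-category Bayes' rule example: resemblance ("match")
# alone overstates the probability of the rare category because it
# ignores the base rates (priors). All numbers are invented.

def posterior(prior: float, match: float,
              other_prior: float, other_match: float) -> float:
    """P(category | description) for two mutually exclusive categories."""
    numerator = prior * match
    return numerator / (numerator + other_prior * other_match)

# The description strongly resembles the rare category (match 0.9 vs 0.2),
# but the common category has a 0.95 base rate.
p_rare = posterior(prior=0.05, match=0.9, other_prior=0.95, other_match=0.2)
print(round(p_rare, 2))  # -> 0.19, despite the strong resemblance
```

Judging by representativeness alone would put this probability near 0.9; the base rates pull it below 0.2.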
The anchoring and adjustment heuristic leads people to start from one piece of known information, the anchor, and then adjust from it to estimate an unknown risk (Epley and Gilovich 2006). The adjustment, however, is usually insufficient, so the final judgment remains biased toward the anchor.
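Insufficient adjustment can be sketched as moving only a fraction of the way from the anchor toward the true value. The 0.6 adjustment fraction and the fatality figures below are illustrative assumptions, not empirical constants from the cited work.

```python
# Minimal sketch of anchoring and adjustment: the estimate starts at the
# anchor and moves only part of the way (adjustment < 1) toward the true
# value, so it stays biased toward the anchor. The 0.6 fraction and all
# figures are illustrative assumptions.

def anchored_estimate(anchor: float, true_value: float,
                      adjustment: float = 0.6) -> float:
    """Adjust a fixed fraction of the distance from the anchor toward the truth."""
    return anchor + adjustment * (true_value - anchor)

# Two people estimate the same annual fatality count (true value 400)
# starting from different anchors; each ends up biased toward their anchor.
low = anchored_estimate(anchor=50, true_value=400)     # -> 260.0
high = anchored_estimate(anchor=1000, true_value=400)  # -> 640.0
print(low, high)
```

With full adjustment (fraction 1.0) both estimates would converge on 400; the smaller the fraction, the stronger the pull of the anchor.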
Psychometric Paradigm of Risk Assessment
The “psychometric paradigm” developed by Slovic, Fischhoff, and Lichtenstein was a landmark in research on public attitudes toward risk (Fischhoff et al. 1978, 1983; Slovic et al. 1980, 1982, 1985). These studies demonstrated that the public is not irrational; ordinary people simply use a broader definition of “risk” than experts do when judging which risks concern them most. “Experts” base their risk ratings on the expected number of fatalities. “Lay people,” in contrast, hold a richer definition of risk, one that incorporates a number of more qualitative characteristics such as “voluntariness” (whether people have a choice about facing the risk), “immediacy of effect” (the extent to which the effect is immediate or might occur at some later time), and “catastrophic potential” (whether many people would be killed at once). Slovic et al. (1985) identified and analyzed 18 characteristics of this kind using factor analysis and found that they could be resolved into three factors, broadly labeled “dread,” “unknown,” and “exposure.” High perceived risk, and with it the desire for societal regulation, was associated most strongly with the dread factor. The psychometric paradigm assumes that, with appropriately designed survey instruments, many of these factors can be quantified (Slovic 1992).
Research within the psychometric paradigm later turned to the roles of affect, emotion, and stigma in shaping risk perception. The broad domain of characteristics it identified condenses into three higher-order factors: (1) the degree to which a risk is understood, (2) the degree to which it evokes a feeling of dread, and (3) the number of people exposed to the risk. A dread risk elicits visceral feelings of terror, uncontrollability, catastrophe, and inequity; an unknown risk is new and unknown to science. The more a person dreads an activity, the higher its perceived risk and the more that person wants the risk reduced (Slovic et al. 1982).
References and Further Reading
- Fischhoff, B., Slovic, P., Lichtenstein, S., Read, S., & Combs, B. (1978). How safe is safe enough? A psychometric study of attitudes towards technological risks and benefits. Policy Sciences, 9, 127–152.
- Fishbein, M., & Ajzen, I. (1975). Belief, attitude, intention, and behavior: An introduction to theory and research. Reading: Addison-Wesley.
- Leventhal, H., Meyer, D., & Nerenz, D. (1980). The common sense representation of illness danger. In S. Rachman (Ed.), Contributions to medical psychology (Vol. 2, pp. 7–30). New York: Pergamon Press.
- Plous, S. (1993). The psychology of judgment and decision making. New York: McGraw-Hill.
- Slovic, P. (1992). Perception of risk: Reflections on the psychometric paradigm. In S. Krimsky & D. Golding (Eds.), Social theories of risk (pp. 117–152). Westport: Praeger.
- Slovic, P. (2000). The perception of risk. Sterling: Earthscan.
- Slovic, P., Fischhoff, B., & Lichtenstein, S. (1985). Characterizing perceived risk. In R. W. Kates, C. Hohenemser, & J. X. Kasperson (Eds.), Perilous progress: Managing the hazards of technology. Boulder: Westview Press.
- Wildavsky, A., & Dake, K. (1990). Theories of risk perception: Who fears what and why? Daedalus, 119(4), 41–60.