1 Introduction to Risk Analysis

Risk is an inherent component of human existence, as is our creation of ways to avoid or minimize such risks. The formal process of assessing the likelihood and magnitude of risk, using that information to manage risk, and then communicating the process to others, forms the basis for risk analysis. Risk analysis pertaining to food safety is usually conducted by national, regional, and international agencies (FAO/WHO 2006), with an ultimate goal of protecting human health by producing safer food and reducing cases of human illness (FAO/WHO 2006; FAO/WHO 2003; CAC 1999). Risk analyses also help set priorities, as they can indicate where actions are most needed; which actions would be most effective at reducing risk; and where more research may be needed to fill knowledge gaps (FAO/WHO 2006). Risk analysis has classically consisted of three fundamental components: risk assessment, risk management, and risk communication (FAO/WHO 2008; CAC 1999). Knowledge generated from risk assessment is intended to drive the decisions made by risk managers and the information shared by risk communicators (CAC 1999). After a brief introduction to risk management and risk communication, this chapter will focus predominantly on risk assessment.

1.1 Risk Management

Risk management has been defined as “The process of weighing policy alternatives in the light of the results of risk assessment and, if required, selecting and implementing appropriate control options, including regulatory measures” (CAC 1999). It can be simplified to mean actions taken to mitigate risks (Jaykus et al. 2006). The most important feature of risk management is that the decision-making process should be transparent and based directly upon the scientific findings of the risk assessment (CAC 1999; Wilde 2013). Undertaking a risk assessment before making a risk management decision is an excellent way of achieving this goal, and hence the assessment itself should be linked to a very clear risk management question. Understandably, risk managers have to balance multiple factors, such as overarching economic capabilities, the available technology, and the current social, political, and legal status of the issue at hand (EPA 2000). While risk management decisions are influenced by myriad factors, risk assessment is intended to be conducted based solely on science and hence independent of outside influences (Wilde 2013).

1.2 Risk Communication

The risk assessment process generates a significant amount of information, usually quite technical in nature. As intimated above, risk assessment is only one factor that goes into decision-making to manage risk. The third component of risk analysis, risk communication, is the sharing of risk assessment and management information among stakeholders, the general public, and other interested or affected groups (Jaykus et al. 2006), as well as among risk assessors, managers, and communicators themselves (EPA/USDA-FSIS 2012). This diversity of audiences means that risk communication messages and methods must be carefully tailored to each group. The process may involve the creation of visual and written materials for distribution, but more often involves two-way discussions with stakeholders in which the communicator serves as a credible source (Lundgren and McMakin 2013). Risk communication for human enteric pathogens is typically a planned event performed by trained professionals (FAO/WHO 2006). Aside from providing a clear explanation of the chosen risk management actions, this communication should also explain why the hazard poses a significant health risk, what populations are most at risk, what assumptions were made in the risk assessment process, where uncertainties arose in the assessment or management process, and why some actions were chosen over others (EPA/USDA-FSIS 2012; FAO/WHO 2006).

1.3 Risk Assessment

Assessment of the likelihood and magnitude of disease risk associated with a hazard is done virtually every day by most people. However, the exercise as a defined, systematic, and (frequently) quantitative entity emerged in the 1960s and 1970s in relation to risks associated with aerospace and nuclear disasters, where its use was quite controversial. Since that time the process of risk assessment has been embraced by a number of US federal agencies, including the Occupational Safety and Health Administration, the Environmental Protection Agency (EPA), and the Departments of Agriculture, Defense, and Energy (NRC 2009; EPA 2000). For example, the US EPA has used the approach since its establishment in 1970. In 1983, the National Academy of Sciences (NAS) published a consensus document entitled Risk Assessment in the Federal Government: Managing the Process (NAS 1983), which has since been referred to as the “Red Book”. This document established “standard operating procedures” for risk assessment and associated reporting.

Through the 1980s and early 1990s, EPA risk assessments remained focused on chemical (toxicological) agents. By the mid-1990s, there was increasing interest in applying probabilistic risk assessment principles to microbiological hazards, both in the environmental and food sectors. While the risk assessment paradigm is the same regardless of the nature of the health risk, the details differ. For example, chemicals remain stable, degrade, or perhaps change composition from the time of their deposition in the environment to human exposure, while microbes can multiply, persist, or die off along the food chain. Cancer health risk modeling, which involves chronic low-level exposures, is very different from modeling the acute exposures associated with most human pathogens. Hence, in the mid-1990s, microbiological risk assessment diverged from chemical risk assessment and became its own discipline. In the last 20 years, many microbiological risk assessments have been undertaken in both environmental and food sectors.

2 Microbial Risk Assessment

Microbial risk assessment (MRA) is an integral component of food and water safety management (Bartell 2010) and is considered by the Food and Agriculture Organization of the United Nations (FAO) and the World Health Organization (WHO) to be the method of choice for making decisions to control foodborne disease and ensure the safety of food and water supplies (FAO/WHO 2003; FAO/WHO 2006). The principal aim of MRA is to provide an objective, transparent, evidence-based assessment of the health risk of multiple exposure pathways or scenarios. The strength of MRA is illustrated in its ability to support scenario analyses, which facilitates comparison of the efficacy of different combinations of risk management interventions (Jaykus et al. 2006).

While most MRAs focus on bacterial pathogens of concern, there is an increasing interest in performing risk assessments for enteric viruses (FAO/WHO 2008). This is important because measures used to control or prevent bacterial contamination of food and water are not always effective for control of viruses (FAO/WHO 2008; CAC 2012). Microbial risk assessments for viruses have some key differences from those classically performed for chemical hazards, with added considerations such as virus decay, varying susceptibility to disinfectants, host immunity and susceptibility, differences in clinical symptoms and health outcomes (including the potential for asymptomatic infections), genetic diversity and the emergence of novel viral strains, and multiple potential routes of exposure (EPA/USDA-FSIS 2012).

3 Process of Risk Assessment

Risk assessment is a tool used to solve complex problems (Jaykus et al. 2006). It serves to fill our collective need to make decisions regarding health despite uncertainties in current knowledge (FAO/WHO 2006) and the fact that individual research findings are rarely sufficient to make management decisions (NRC 2009). The formal risk assessment process consists of four components: hazard identification, exposure assessment, hazard characterization, and risk characterization. Many of the widely-accepted definitions relating to food safety risk assessment came out of the 22nd meeting of the Codex Alimentarius Commission (CAC) in 1997 following consultations with FAO/WHO (1995), which laid significant groundwork for the international performance of risk assessment in foods.

Risk assessments can be done at many different levels for many different purposes. At the federal level, they are performed to characterize a risk either qualitatively (if there are few data) or quantitatively (if there are enough data). This can be done, for instance, to rank relative risks; prioritize risk management efforts; evaluate, on a preliminary basis, the efficacy of candidate risk mitigation strategies (through sensitivity and scenario analysis); and identify data and research needs. At the international level, risk assessments are usually used in the development and enforcement of food safety standards, particularly those involving trade (FAO/WHO 2008; CAC 1999). For example, all member nations of the World Trade Organization must undertake risk assessments with regard to food safety and plant and animal health under Article 5 of the Agreement on the Application of Sanitary and Phytosanitary Measures (“SPS Agreement”) (WTO 1995). Risk assessments performed on behalf of international health organizations such as the FAO or WHO are generally used to inform the CAC and national governments for large-scale management of specific food-related hazards (FAO/WHO 2006).

In all cases, the risk management question(s) to be addressed by the risk assessment must be clearly delineated before the process begins, lest the risk assessment veer off onto an irrelevant path. The prime directive of a risk assessment is that it should be objective, unbiased, and based on sound science (FAO/WHO 2006; NRC 2009). Ideally, in the case of foodborne pathogens, it should also take the entire food production process into account (FAO/WHO 2006), such as growing, harvesting, processing, distributing, and other actions taken before the food reaches the end consumer, also referred to as a “farm-to-fork continuum.” It is not uncommon, particularly in large-scale assessments, for formal risk assessment teams to be formed. The most effective risk assessment teams are multidisciplinary, combining the expertise of epidemiologists, biostatisticians, researchers, and other specialists (FAO/WHO 2006). While risk assessments are typically portrayed as linear steps, the process should actually be seen as a series of continuous feedback loops, evolving as new information is made available. The term “iterative” is frequently used to describe this process (Jaykus et al. 2006).

4 Structure of Risk Assessment

4.1 Hazard Identification

Hazard identification is “The identification of biological, chemical, and physical agents capable of causing adverse health effects, which may be present in a particular food or group of foods” (CAC 1999). Put simply, hazard identification involves gathering information from the body of literature to draw general conclusions about the association between a hazard and food(s), and factors impacting that association. It is generally a qualitative process with information on the hazard(s) relating to a food or practice coming from scientific literature; reports from outbreak investigations; industry-specific databases; national and international health surveillance data; consumer surveys and statistics; and consultation with experts in the field (CAC 1999).

A risk profile is a common outcome of the hazard identification step (Bouwknegt et al. 2013) and can form the basis for embarking on a full risk assessment (FAO/WHO 2006). Not every hazard and/or food merits a full risk assessment; sometimes a risk profile is sufficient to direct risk management activities (Jaykus et al. 2006; FAO/WHO 2006). Risk assessments are more likely to be undertaken when the means of exposure to a hazard is complex or poorly understood; the hazard is of significant concern to public health, regulatory, or stakeholder groups; and/or there is a need to justify whether a standing or proposed management practice is effective and has scientific merit (FAO/WHO 2006). A risk profile for a foodborne pathogen usually includes information on the current situation or status of the hazard as related to a specific product or commodity; how consumers become exposed; what factors are involved with the exposure; how the hazard might enter and affect a population, including symptoms, course of disease, and outcomes; what parts of the food production process should be evaluated in assessing risk; and how the public perceives the risks (FAO/WHO 2006; CAC 1999).

4.2 Exposure Assessment

A widely accepted definition for exposure assessment is “The qualitative and/or quantitative evaluation of the likely intake of biological, chemical, and physical agents via food as well as exposures from other sources if relevant” (CAC 1999). An exposure assessment for food or waterborne pathogens is often an evaluation of the likelihood of the actual or anticipated intake of the pathogen in a population or certain subgroups (FAO/WHO 2003). The likelihood of exposure is determined by a chain of events which, for a full risk assessment, usually encompasses all of the steps from production to consumption (CAC 1999). Using enteric viruses in fresh produce as an example (Table 17.1), this might include considering the possibility of contamination in production waters; irrigation practices; transmission of viruses during growing or harvest; washing conditions (if applicable); persistence of the hazard on or in the produce through the food chain; and food storage, preparation, and consumption practices. An exposure assessment benefits from quantitative information about the virus of concern including its concentration in the environment and on/in foods, how often environmental contamination occurs, persistence (or growth) of the virus in the environment or foods, and portion sizes and frequency of food consumption. The end product of the exposure assessment for quantitative risk assessment is a mathematical model expressing the likelihood and magnitude of exposure to the hazard across the farm-to-fork continuum.

Table 17.1 Examples of issues to address for exposure assessment associated with enteric virus risk in fresh produce
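As a purely illustrative sketch (not drawn from any of the assessments cited in this chapter), the dose ingested in a single serving of a contaminated product might be written as the initial level of contamination attenuated and transferred along the chain:

\[
d = C_{0} \times 10^{-kt} \times f_{\mathrm{transfer}} \times (1 - r_{\mathrm{wash}}) \times m_{\mathrm{serving}},
\]

where C0 is the virus concentration at the point of contamination, k the log10 decay rate, t the time from contamination to consumption, f_transfer the fraction of virus transferred to the edible product during handling, r_wash the fractional reduction achieved by washing, and m_serving the mass consumed per serving; all of these symbols are generic placeholders. In a quantitative exposure assessment, each such term is typically represented by a probability distribution rather than a single value.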

4.3 Hazard Characterization

Hazard characterization is “The qualitative and/or quantitative evaluation of the nature of adverse health effects associated with the hazard” (CAC 1999). Ideally, hazard characterization for an infectious agent relates to its infectivity (infectious dose); population susceptibility (genetically and/or immunologically derived); severity and endpoints of disease; and exposure dose (which is the endpoint of exposure assessment). When considering enteric viruses, especially norovirus, other factors might be important to include in hazard characterization, such as strain-specific differences in infectivity or disease severity; likelihood for evolution into new strains; modes of transmission (vomitus or fecally-derived); and propensity for secondary transmission (CAC 1999). As it is meant to be comprehensive, a hazard characterization for a pathogen may be developed in one country and used in a risk assessment performed in another country, or one performed for a waterborne pathogen may be adapted for a food-based exposure (FAO/WHO 2003).

Relating health outcomes to known levels of exposure is known as a dose–response relationship or dose–response assessment (CAC 1999). This is the ultimate product of quantitative hazard characterization and an example is shown graphically in Fig. 17.1. The dose–response curve allows for estimation of the probability of specific health impacts (e.g., infection, symptomatic illness, hospitalization, death, chronic disease) based on given exposure levels (EPA/USDA-FSIS 2012). Under ideal circumstances, dose–response data are derived from human populations. Of course, direct human challenge data are scarce for diseases that cause severe illness (FAO/WHO 2008). Nonetheless, some human dose–response information is available for enteric viruses such as norovirus (Teunis et al. 2008) and rotavirus (Ward et al. 1986). In the case of hepatitis A virus, the dose–response relationship is usually extrapolated from a human challenge study with echovirus 12 (Thebault et al. 2012; Schiff et al. 1984). Even with such human exposure information, only a few data points are available and hence modeling is quite crude. In the absence of human challenge data, information from epidemiology, animal feeding studies, and even laboratory-based studies on virulence or infectivity in mammalian cell culture can be used. In an ideal world, factors such as strain-to-strain differences in infectivity and population susceptibility are included in dose–response modeling, but frequently data with this level of granularity are not available.

Figure 17.1 Rendering of dose–response curves with confidence intervals for Norwalk virus (without virus aggregation) and hepatitis A virus (based on a Beta-Poisson model of echovirus 12 dose–response data). Y-axis on left for norovirus infection, y-axis on right for hepatitis A symptoms. Image adapted from those presented in Teunis et al. (2008) and Thebault et al. (2012).
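For orientation, two generic dose–response forms that recur throughout this chapter are the exponential model and the (approximate) Beta-Poisson model; the notation below is generic rather than that of any single cited study:

\[
P_{\mathrm{inf}}(d) = 1 - e^{-rd}
\qquad \text{and} \qquad
P_{\mathrm{inf}}(d) \approx 1 - \left(1 + \frac{d}{\beta}\right)^{-\alpha},
\]

where d is the ingested dose, r is the per-organism probability of initiating infection in the exponential model, and α and β are parameters fitted to challenge or outbreak data.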

4.4 Risk Characterization

The risk characterization step combines the output of exposure assessment and hazard characterization to provide a formal evaluation of the risk (usually expressed as risk of infection or disease) from exposure to the hazard. The risk characterization process results in the formation of a risk estimate, which gives the overall likelihood and severity of the adverse effects expected in the target population (CAC 1999). As the name implies, a quantitative microbial risk assessment (QMRA) involves generating numerical values of risk for microbial agents, often using simulation models (CAC 1999; FAO/WHO 2008). In food safety, a full QMRA usually begins at some phase of the farm-to-fork continuum and ends with consumption of the product. However, partial QMRAs (e.g., addressing just one phase of the continuum) are also performed.

Monte Carlo simulation is by far the most common approach to risk characterization modeling and is often used in exposure modeling as well (Mokhtari and Jaykus 2009; Thebault et al. 2012). This should be apparent in several of the articles referenced later in this chapter (Mokhtari and Jaykus 2009; Thebault et al. 2012; Schijven et al. 2013; Mara et al. 2007; Mara and Sleigh 2010; Petterson et al. 2001; Bouwknegt et al. 2015). Named for a famous casino in Monaco, Monte Carlo relies on sampling random (or pseudorandom) numbers, generated like the outcomes from a roulette wheel (Zio 2013; Amar 2006), and hence is probabilistic in nature. Basically, each input or variable in the exposure or risk model is estimated as a distribution, rather than a single “point.” The appropriate distribution is determined from the data source(s) or else by assumption. Then, for each risk calculation or “simulation,” the input values are chosen randomly from each distribution and a single risk number is produced. This process is repeated multiple times or as multiple “iterations” (usually 10,000) (Amar 2006). Each of these is its own miniature risk assessment so to speak, and the results of all the iterations together are graphed as a distribution of probabilities and risk.
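A minimal sketch of this procedure, written in Python with purely hypothetical input distributions and parameter values (none of which are taken from the assessments cited in this chapter), might look like the following:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
N = 10_000  # number of iterations ("simulations")

# Each input is represented as a distribution rather than a point estimate;
# all of the values below are hypothetical.
log10_conc = rng.normal(loc=0.5, scale=1.0, size=N)              # log10 virus per gram of food
log10_decay = rng.uniform(low=0.5, high=2.0, size=N)             # log10 reduction before consumption
serving_g = rng.triangular(left=20, mode=50, right=150, size=N)  # grams consumed per serving

# One risk calculation per iteration: exposure dose, then dose-response.
dose = 10 ** (log10_conc - log10_decay) * serving_g
r = 0.01                             # hypothetical exponential dose-response parameter
p_infection = 1 - np.exp(-r * dose)

# The iterations together form the output distribution of risk.
print("median risk per serving:", np.median(p_infection))
print("95th percentile risk:   ", np.percentile(p_infection, 95))
```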

Monte Carlo simulation-based models, which rely on expressing input values as distributions, are particularly appealing because they can represent variability and uncertainty. Variability refers to data heterogeneity or variation; it is an inherent property of the data and cannot be reduced by further measurement (Suter 2006). The term variability usually applies to a well-characterized dataset. Uncertainty is basically a “lack of knowledge” or absence of data (NRC 2009; Zio 2013). While it can be reduced by further study or data collection, many input variables in a risk assessment are simply uncharacterized but still need to be included in the model. However, since their data are often assumed or at best approximated, they can lead to inaccuracy and bias in risk estimates. Remembering that simulated outcomes may not hold true in the real world, it is best to validate mathematical models with known disease incidence data or real-life information (FAO/WHO 2006). Epidemiological data are frequently used to do such validations. However, this can be difficult for viruses as there are no international requirements to report foodborne viral disease outbreaks and the vast majority are unrecognized or uninvestigated (FAO/WHO 2008).

Under the best of circumstances, risk characterization should be accompanied by sensitivity and scenario analyses (CAC 1999). In sensitivity analysis, one input variable is toggled between its low and high values while all the others are held at their median values, followed by simulation. This is systematically done for each input such that there is a range of low and high risk associated with each input. Sensitivity analysis identifies which variables provide the largest range of risk estimates, and the model is hence more “sensitive” to those inputs. In short, sensitivity analysis determines how changes in the entered parameters affect the results of a model (FAO/WHO 2006; CAC 1999; Zio 2013), which can be used to identify inputs having the greatest impact on the magnitude of differences in risk estimates. This can allow the user to make inferences about real-world events. Frequently, the most sensitive variables are those that also drive the risk, and hence these are good candidates as the focus of mitigation strategies (Barker 2014).
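A minimal sketch of this one-at-a-time procedure, again using a toy model with hypothetical low/median/high input values, is shown below:

```python
import numpy as np

def risk(log10_conc, log10_decay, serving_g, r=0.01):
    """Toy point-estimate risk model (exponential dose-response); all values hypothetical."""
    dose = 10 ** (log10_conc - log10_decay) * serving_g
    return 1 - np.exp(-r * dose)

# (low, median, high) values for each input -- hypothetical placeholders.
inputs = {
    "log10_conc":  (-1.0, 0.5, 2.0),
    "log10_decay": (0.5, 1.2, 2.0),
    "serving_g":   (20.0, 50.0, 150.0),
}
medians = {name: values[1] for name, values in inputs.items()}

# Toggle one input at a time between its low and high values,
# holding all other inputs at their medians.
for name, (low, _, high) in inputs.items():
    at_low = risk(**{**medians, name: low})
    at_high = risk(**{**medians, name: high})
    print(f"{name}: risk ranges from {min(at_low, at_high):.4f} to {max(at_low, at_high):.4f}")
```

Inputs producing the widest ranges are the ones the model is most “sensitive” to.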

In scenario analysis, model inputs are changed based on how the user anticipates they might be impacted by a candidate mitigation strategy. The simulations are then rerun, and the risk estimates produced are compared to those of the baseline model. In this way, the user can estimate how one or more candidate control measures might impact risk, allowing for comparative analysis and a scientific, risk-based approach to decision-making. Scenario analysis also enables the user to simulate an array of feasible scenarios and to estimate which single or combined management actions are likely to have the biggest impacts on risk (Zio 2013), or little impact at all. In addition, both sensitivity and scenario analyses can help to identify inputs having the greatest level of uncertainty. These are usually areas in need of additional research or data collection.
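In the same toy framework, a scenario analysis simply reruns the simulation with one or more inputs shifted to reflect a candidate mitigation and compares the resulting risk distribution to the baseline; the one-log10 washing reduction assumed below is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
N = 10_000

def simulate(extra_log10_reduction=0.0, r=0.01):
    """Toy Monte Carlo risk model; extra_log10_reduction represents a candidate mitigation."""
    log10_conc = rng.normal(0.5, 1.0, N)
    log10_decay = rng.uniform(0.5, 2.0, N) + extra_log10_reduction
    serving_g = rng.triangular(20, 50, 150, N)
    dose = 10 ** (log10_conc - log10_decay) * serving_g
    return 1 - np.exp(-r * dose)

baseline = simulate()
with_washing = simulate(extra_log10_reduction=1.0)  # assume washing removes ~1 log10 (hypothetical)

print("median baseline risk:    ", np.median(baseline))
print("median risk with washing:", np.median(with_washing))
```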

Monte Carlo simulation has proven extremely useful in QMRA because it is able to integrate complex data sets in a systematic manner, thereby incorporating variability and uncertainty in model design and risk estimates. Ultimately, this provides risk estimates that more closely mimic what might happen in real-life situations (Zio 2013; Amar 2006). Historically, Monte Carlo-based calculations have required immense programming and computing power, but technology advances have allowed widespread use of this modality (Zio 2013).

5 Elements of Risk Assessment in Food Virology

Understanding that many of the world’s current food safety measures may not be effective for preventing enteric viral diseases, an expert panel met at the request of the WHO and the FAO in 2008 to discuss the importance of viruses in foods. The attendees compiled a list of foodborne viruses of primary concern on a global scale, based on criteria such as the health and economic costs of disease, incidence and prevalence, level of difficulty in their control, and impacts on trade. The final list included hepatitis A virus (HAV), human norovirus (NoV), and human rotavirus (HRV) (the three most important viruses), along with some emerging viruses of concern: hepatitis E virus (HEV), highly pathogenic avian influenza virus (HPAI-H5N1), SARS-coronavirus, and Nipah virus. The group also identified virus-food commodity pairings of concern based on the current body of knowledge. The most important were NoV and HAV in bivalve shellfish (including oysters, clams, cockles, and mussels), fresh produce, and prepared foods. Lastly, the expert panel identified three major routes of viral contamination of foods: human sewage and feces, infected food handlers, and animals (in the case of the zoonotic viruses) (FAO/WHO 2008). The working group concluded that a comprehensive QMRA of any one virus-commodity combination was not currently possible, given the lack of quantitative data required for such assessments (FAO/WHO 2008). They did believe that lessons learned from similar consultations on bacterial agents, and the general concepts of risk assessment, were appropriate for evaluating viral foodborne disease risks if adequate supporting data were available (FAO/WHO 2008).

The lack of knowledge about foodborne viruses has been an ongoing issue for performing QMRAs (Bouwknegt et al. 2013; HPA 2004). Human NoV has been particularly challenging to assess because it cannot be propagated in cell culture, which has historically limited research investigations to molecular testing, the use of cultivable surrogate animal viruses, and human challenge studies (Atmar et al. 2008; FAO/WHO 2008; CAC 2012; HPA 2004). The use of surrogates requires making the assumption that they behave in a manner similar to human NoV, which is not necessarily the case (CAC 2012). Perhaps most importantly, it is generally recognized that the cultivable surrogates do not uniformly (or arguably, adequately) mimic human NoV resistance to disinfection (Hoelzer et al. 2013).

Although enteric viruses cannot multiply in foods, they are able to persist in the environment and are also quite resistant to most sanitizers and disinfectants used at regulated or manufacturer-recommended concentrations (FAO/WHO 2008). Hepatitis A virus, for example, can persist on fresh produce for longer than the item’s shelf life (Croci et al. 2002; Sun et al. 2012). Enteric viruses also tend to be highly infectious and are shed into the environment in large quantities in vomit and stool. Norovirus has a low infectious dose (perhaps as few as 18 viral particles) (Teunis et al. 2008). Millions to billions of virus particles may be shed per gram of stool, and for prolonged post-symptomatic periods (4–8 weeks) (Atmar et al. 2008). People may also be asymptomatic carriers of the virus, and viral shedding can begin before the onset of clinical signs (Atmar et al. 2008). This suggests that a person may spread the virus to others before they even know they are infected. Additionally, while there are some common virus-food commodity pairings associated with disease, the wide variety of foods that can become contaminated, in addition to the many ways that foods can be produced, processed, and prepared, creates a very large number of possible contamination scenarios. These are just a few of the intricacies of foodborne enteric virus transmission and illnesses that complicate efforts to accurately model and estimate potential risks to human health (FAO/WHO 2008).

5.1 Hazard Assessment, Risk Profiles, and Meta-Analysis

In 2004, the Health Protection Agency (now part of Public Health England) conducted a feasibility study on future QMRAs related to enteric viruses. This was a large-scale project, involving the compilation and evaluation of a comprehensive body of information, the creation of a database to store the information, and an outline of the parameters needed for qualitative and quantitative risk assessments. The team evaluated transmission via person-to-person contact, bivalve shellfish, salad vegetables, and fruit. The final outcomes were suggestions for future research. In 2009, the U.S. Food and Drug Administration (FDA) posted a risk profile for HAV infection associated with the consumption of fresh and fresh-cut produce (FDA 2009) and continues efforts in developing a comprehensive hazard identification for foodborne NoV, although at the time of this writing no formal report or publication was available. The Dutch Food and Consumer Product Safety Authority has documented evidence relevant to the transmission of HAV in shellfish, NoV in fresh fruits and vegetables, and HEV in pork (Bouwknegt et al. 2013), while the New Zealand Food Safety Authority sponsored a risk profile on NoV in raw molluscan shellfish (ESR 2009).

More recently, in 2014, the US National Advisory Committee on Microbiological Criteria for Foods (NACMCF) completed its response to questions posed by food safety regulatory bodies regarding control strategies for reducing foodborne NoV infections, which could be considered a type of risk profile (NACMCF 2014). Some recent systematic reviews and meta-analyses have been produced to address questions regarding persistence, resistance, and infectivity of NoV (Hoelzer et al. 2013), illustrating that comprehensive data collection and analysis can be used to begin to answer key questions and identify future research needs.

5.2 Data for Exposure Modeling

Many types of data are needed to support exposure modeling, including information on transmission dynamics (e.g., source and prevalence of contamination, virus transferability, etc.), virus behavior (persistence in foods and the environment, resistance to inactivation strategies, etc.), and food consumption (e.g., which foods, how often consumed, serving sizes, etc.). While discussing all of these is beyond the scope of this work, they have constituted active areas of research in the last 5–10 years. By way of example, initial studies on the prevalence of human NoV (and sometimes HAV) contamination in molluscan shellfish in the U.S. (DePaola et al. 2010) and fresh-cut produce in Canada (Mattison et al. 2010) have been supplemented by more recent contamination prevalence studies, also focused on molluscan shellfish (Brake et al. 2014; Loutreul et al. 2014; Rodriguez-Manzano et al. 2013; Suffredini et al. 2014; Schaeffer et al. 2013) and fresh produce (berries and lettuce) (Loutreul et al. 2014; Pérez-Rodríguez et al. 2014; Maunula et al. 2013; De Keuckelaere et al. 2014). There is increasing interest in contamination of beef and pork with rotavirus and/or HEV (Jones et al. 2014; Wilhelm et al. 2014). Such studies have been facilitated by the availability of improved, and in some cases more standardized, methods for detecting viral contamination in foods.

Other areas of active research have been transferability of NoV and HAV in the food production and processing chain (Grove et al. 2015; Escudero et al. 2012; Tuladhar et al. 2013); virus environmental persistence (Mormann et al. 2015; D’Souza et al. 2006; Fallahi and Mattison 2011; Liu et al. 2009); virus resistance to sanitizers and disinfectants (Cromeans et al. 2014; Tung et al. 2013; Park and Sobsey 2011); and virus inactivation strategies (Jean et al. 2011; Fino and Kniel 2008; Leon et al. 2011). Further details of these are discussed elsewhere in this book.

5.3 Predictive Microbiology

While any risk model could be called predictive in nature, predictive microbiology is a particularly important tool in exposure assessment. In food microbiology, predictive modeling refers to the use of mathematical models to quantitatively predict the behavior of microbes in a given environment (Pérez-Rodríguez et al. 2014). Its use usually focuses on environmental persistence and resistance/inactivation by physical, chemical, or biological means. There are a number of instances in which predictive modeling has been applied to laboratory data on virus persistence and/or inactivation, for example, inactivation of feline calicivirus (FCV, a human NoV surrogate) and HAV by high hydrostatic pressure (Buckow et al. 2008; Kingsley et al. 2006); depuration kinetics of HAV and murine norovirus (MNV-1; another human NoV surrogate) in shellfish (Polo et al. 2015); and thermal inactivation of MNV-1, FCV, and HAV in deli meats (Bozkurt et al. 2015), to name a few. These can all be helpful in exposure assessment depending upon the application.
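The simplest such model, and the one underlying the D10 values reported in the irradiation studies discussed below, is log-linear inactivation, written generically as:

\[
\log_{10}\!\left(\frac{N}{N_{0}}\right) = -\frac{D}{D_{10}},
\]

where N0 and N are the infectious virus concentrations before and after treatment, D is the delivered dose (e.g., in kGy for irradiation), and D10 is the dose producing a 1-log10 (90 %) reduction; for thermal processes, dose is replaced by time at a given temperature.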

In one particular study, Espinosa et al. (2012) determined the inactivation kinetics of poliovirus and rotavirus on lettuce and spinach using electron beam (E-beam) radiation. The authors then used these data to estimate a theoretical reduction in infection risk associated with this treatment. The experimental portion yielded D10 values of 1.0–1.3 kGy for rotavirus and 2.3–2.4 kGy for poliovirus. The risk model included parameters associated with serving size, initial virus contaminant concentration (ranging from 10⁰ to 10³ PFU/g), output from the kinetic inactivation model, and dose–response (Beta-Poisson for rotavirus and an exponential model for poliovirus). Reductions in infection risks varied widely (from negligible to over 10⁴) as a function of risk assessment parameters. By way of example, treatments of 3 kGy with a starting virus population on lettuce of 10 PFU/g reduced the risk of poliovirus infection from consumption of contaminated lettuce from a baseline of >20 infections to 6 infections per 100 individuals. Under similar circumstances, rotavirus risk associated with consumption of contaminated spinach dropped from >30 infections to 5 per 100 persons. This paper provides an example in which laboratory-based work was combined with a relatively simple risk model to produce estimates of risk reduction as a function of a virus inactivation strategy.
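A rough sketch of that type of calculation is given below; the D10 value lies within the range reported above, but the serving size and Beta-Poisson parameters are hypothetical placeholders rather than the values used by Espinosa et al. (2012):

```python
def surviving_conc(c0, dose_kgy, d10_kgy):
    """Log-linear survival: each D10 of delivered dose removes one log10 of infectious virus."""
    return c0 * 10 ** (-dose_kgy / d10_kgy)

def beta_poisson(dose, alpha, beta):
    """Approximate Beta-Poisson dose-response model."""
    return 1 - (1 + dose / beta) ** (-alpha)

# Illustrative case: rotavirus on leafy greens treated at 3 kGy.
c0 = 10.0          # PFU/g before treatment (within the modeled 1-1000 PFU/g range)
serving_g = 30.0   # hypothetical serving size, in grams
d10 = 1.2          # kGy, within the 1.0-1.3 kGy range reported for rotavirus

dose_before = c0 * serving_g
dose_after = surviving_conc(c0, dose_kgy=3.0, d10_kgy=d10) * serving_g

# Hypothetical Beta-Poisson parameters, for illustration only.
alpha, beta = 0.25, 0.42
print("infection risk before treatment:", beta_poisson(dose_before, alpha, beta))
print("infection risk after treatment: ", beta_poisson(dose_after, alpha, beta))
```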

In a similar study, Praveen et al. (2013) determined the inactivation kinetics of HAV and MNV-1 in oysters treated by E-beam irradiation at a dose of 5 kGy. The authors then used these data to estimate a theoretical reduction in infection risk associated with this treatment. Mean D10 values of 4.05 and 4.83 kGy were calculated for MNV-1 and HAV, respectively. The risk model included the same parameters as used in the previous study but adjusted for the product (oysters) and the hazard (Beta-Poisson model used for both). The model predicted that if the product were contaminated at a concentration of 10⁵ infectious units per 12-raw-oyster serving, a 5 kGy treatment would result in a 12–16 % reduction in infection risk for both viruses. At only 10² infectious units per serving, NoV infection risk reduction remained relatively stable (26 %), but a 91 % reduction in HAV infection risk was predicted. This study showed that even at high E-beam doses, the viruses of greatest public health significance could not be eliminated from oysters, and that at high levels of contamination, risk reduction was minimal.

5.4 Hazard Characterization

Most human data to support enteric virus dose–response relationships are old and/or scant. For example, human challenge studies for poliovirus (vaccine strain) (Katz and Plotkin 1967; Katz and Price 1967; Minor et al. 1981; Lepow et al. 1962), rotavirus (Ward et al. 1986), and echovirus 12 (Schiff et al. 1983; Schiff et al. 1984) are available, as are relevant animal challenge data for HAV (in nonhuman primates) (Purcell et al. 2002). Perhaps the first significant work in which human challenge studies were used as the basis of dose–response modeling is that of Haas (1983). This investigator compared three commonly used dose–response models with experimental data for poliovirus (three datasets) and echovirus 12 (one dataset). Specifically, the log-normal, simple exponential, and a Beta-distributed model were evaluated. The former two are deterministic models and the latter a stochastic model. For deterministic models, parameter values are determined at the outset and hence the model does not consider randomness or uncertainty. The output is hence “determined.” Stochastic models allow for randomness in one or more inputs and must be analyzed statistically. The outcome is not a single value, but a group of values.

For three of the four virus datasets analyzed by Haas (1983), all three models provided satisfactory fit and produced quite similar ID50 (50 % infectious dose) values for each dataset. However, there were significant differences in model predictions at low doses, in which case the Beta and exponential models produced more conservative (higher) disease risks. This provided impetus for the rather wide use of Beta-distributed models for hazard characterization of many of the enteric viruses.
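The low-dose divergence is easy to reproduce with generic parameters (not Haas's fitted values): the sketch below constrains an exponential and a Beta-Poisson model to share the same ID50 and then compares their predictions at much lower doses:

```python
import math

ID50 = 100.0   # hypothetical 50 % infectious dose shared by both models
alpha = 0.2    # hypothetical Beta-Poisson shape parameter

r = math.log(2) / ID50                # exponential parameter giving this ID50
beta = ID50 / (2 ** (1 / alpha) - 1)  # Beta-Poisson scale parameter giving this ID50

def p_exponential(d):
    return 1 - math.exp(-r * d)

def p_beta_poisson(d):
    return 1 - (1 + d / beta) ** (-alpha)

# Both models agree at the ID50 but diverge sharply at low doses.
for d in (1.0, 10.0, ID50):
    print(f"dose {d:>6.1f}: exponential {p_exponential(d):.4f}   Beta-Poisson {p_beta_poisson(d):.4f}")
```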

Human NoV challenge studies have been reported in the recent literature (Atmar et al. 2014; Teunis et al. 2008; Thebault et al. 2013) and several others have been completed but not yet published. Atmar et al. (2014) estimated an ID50 for Norwalk virus of 3.3 reverse transcription PCR units, which corresponded to between 1320 and 2800 genome equivalent copies, but the study did not include extensive mathematical modeling. The Teunis et al. (2008) work was the first to analyze human challenge study data to produce dose–response models for the prototype human NoV, the Norwalk virus. These investigators sought to determine the probability of infection in participants based on the hit theory, which considers microbial infection as a result of a chain of conditional events, i.e., (1) ingestion of one or more organisms based on an inoculum having a Poisson distribution; and (2) successful navigation of the organism(s) through all host barriers with maintenance of infectivity. A beta-distributed probability is used to model the latter. Particle aggregation was taken into account mathematically and was fit to the experimental data. Probability of illness given infection was modeled by logit transformation of infection and aggregation distributions followed by production of maximum likelihood estimates. In susceptible individuals, the ID50 was about 1000 and 18 genome copies for aggregated and disaggregated virus, respectively. The investigators concluded that the average probability of infection for a single infectious particle could be as high as 0.5. Probability of illness given infection was also dose-dependent, and ranged from 0.1 to 0.7 for 10³ and 10⁸ genome copies, respectively (Fig. 17.1).
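In this single-hit framework, Poisson-distributed ingestion combined with beta-distributed per-particle survival yields the exact (“hypergeometric”) dose–response function, of which the Beta-Poisson expression shown earlier is an approximation; in generic notation:

\[
P_{\mathrm{inf}}(d) = 1 - {}_{1}F_{1}(\alpha,\ \alpha + \beta,\ -d),
\]

where 1F1 is the confluent hypergeometric (Kummer) function and d is the mean ingested dose.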

Thebault et al. (2013) used outbreak data to design a human NoV dose–response model. Specifically, they used data from five oyster-associated outbreaks in which the exposed population and attack rates were characterized; the number of oysters consumed was known; and the concentration of NoV determined from oysters associated with the outbreak. They used a Beta binomial distribution to model infectivity that took into account heterogeneity of the host-pathogen relationship and non-uniform distribution of the pathogen in the food. Bayesian modeling was then done to estimate model parameters and predict probabilities. Median ID50 estimates were 1.6–7.1 genome copies per oyster. The median probability of infection for secretor-positive individuals (which are susceptible to NoV infection, unlike secretor-negative individuals) exposed to a single virus genome was around 0.29 (95 % CI 0.015–0.61) for GI human NoV and 0.4 (0.040–0.61) for GII. Illness probability was 0.13 (0.007–0.39) and 0.18 (0.017–0.42) for GI and GII, respectively. Statistically, there was no difference between infectivity of GI and GII human NoV. As expected, secretor-negative individuals had much lower probabilities of infection and illness. Overall, these data are in relative agreement with those of Teunis et al. (2008), confirming the exquisitely low infectious dose for this group of viruses.

Both Teunis et al. (2008) and Thebault et al. (2013) noted the issue of heterogeneity in response, which is not really considered in current dose–response modeling efforts. Such heterogeneity is mediated by a number of factors, including secretor status (innate, genetically predetermined immunity) and exposure-driven immunity. The ability of exposure to one human NoV strain to protect against infection with another, but only sometimes, also complicates modeling efforts. There are likely strain-to-strain differences in innate susceptibility and disease outcomes (disease severity) but these are also poorly characterized. In most instances, the inability to consider these factors in dose–response modeling means that current models tend to be more conservative (overestimate infection or disease risk) for normal, healthy individuals. This is probably not the case for sensitive subpopulations (young children and the elderly).

Nonetheless, there are now solid human NoV dose–response estimates, which will likely be refined as data from GII.2 and GII.4 challenge studies are further analyzed.

Modeling the dose–response relationships for human rotavirus and HAV is more challenging. Probit analysis of human challenge study data for rotavirus suggested an ID50 of about 10 infectious units, with an estimate that one infectious unit should infect about 25 % of susceptible adults (Ward et al. 1986). Espinosa et al. (2012) used the Beta-Poisson model for rotavirus, as described by Haas et al. (1999). Schiff et al. (1983, 1984) used probit analysis of human feeding study data for echovirus 12 to estimate that ingestion of 1–2 infectious units of this virus would infect 1 % of the population. Echovirus 12 is frequently used as a dose–response surrogate for HAV. Pintó et al. (2009), Praveen et al. (2013), and Thebault et al. (2013) all used a Beta-Poisson model to estimate HAV risks based on the work of Rose and Sobsey (1993) (Fig. 17.1). On the other hand, Bouwknegt et al. (2015) used the rotavirus data of Ward et al. (1986) and a modified exponential model in their HAV dose–response modeling. In some instances, HAV risks are expressed as infection; in others, as disease (usually jaundice).

6 Recent Risk Modeling Efforts in Food Virology

Most risk modeling efforts in food and environmental virology have focused on water, fresh produce, molluscan shellfish, and prepared foods. From a food perspective, these studies are described in greater detail below and summarized in Table 17.2.

Table 17.2 Summary of risk assessment research performed for foodborne and waterborne viruses since 2000, in chronological order

6.1 Fresh Produce

Contact with sewage-contaminated water, or handling by infected food handlers practicing poor personal hygiene (contact during picking, packing, and/or food preparation), is thought to be the main route of viral contamination for fresh produce, though specific data on each are lacking (FAO/WHO 2008). There are also no universal or far-reaching guidelines on the types of water used for irrigation, and some areas of the world have higher risks of contamination of agricultural waters with human sewage (FAO/WHO 2008). Since fresh produce is often consumed raw, without prior treatments that could inactivate enteric viruses, prevention of contamination is usually considered the best option for a safe final product (Bouwknegt et al. 2015). In this section, we discuss risk-based studies focusing on irrigation waters and the farm-to-fork chain.

6.1.1 Irrigation with Wastewater or Recycled Water

Hamilton et al. (2006) developed a QMRA model for enteric viruses (as a group) in raw vegetables irrigated with non-disinfected, secondary-treated reclaimed water. The group chose this as a worst-case scenario: eating vegetables raw (cucumber, broccoli, cabbage, and lettuce) after irrigating them directly overhead with virus-contaminated water. Besides product type, simulations also included varying virus levels in effluent and various times since last irrigation. Data for enteric virus concentrations in non-disinfected secondary effluent came from comprehensive monitoring of sewage treatment plants. The variation in viral contamination (as a function of volume of water caught by the plants) was taken from previous studies using rotavirus for lettuce and cucumbers, and through field trials conducted by the authors on water retention for broccoli and cabbage. The timing variable was based on reported data on natural viral decay coefficients. Exposure was tied to consumption of the produce items and a Beta-binomial dose–response model was used based on previous work, which was considered representative of enteric viruses in general. The results were compared to a standard EPA benchmark of 10⁻⁴, or one infection or fewer in every 10,000 people consuming treated water each year (EPA 1989).

Time since last irrigation was consistently significant in affecting the calculated annual risks of human infection, with estimates of 10⁻³–10⁻¹ when contaminated irrigation ended one day before harvest, down to 10⁻⁹–10⁻³ when irrigation ended two weeks before harvest (Hamilton et al. 2006). Overall, the only cases that met the benchmark level of risk were those where the irrigation ceased two weeks before harvest. Based on sensitivity analyses, the most significant area of uncertainty in the models was the amount of produce the individual consumed. The results led the authors to conclude that a withholding period for the use of wastewater for irrigation prior to harvest could be used for risk mitigation. Since this set of simulations represented the worst-case scenario, alternative irrigation methods and post-harvest washing and disinfecting would likely result in further risk reduction.

Mara et al. (2007) and Mara and Sleigh (2010) published two QMRA studies on risks to consumers following wastewater irrigation of crops, with an emphasis on simulating conditions in developing countries. In their first study, they characterized rotavirus infection risk associated with consumption of wastewater-irrigated lettuce. Parameters included virus numbers using Escherichia coli concentration as a reference; environmental persistence of rotavirus; and human consumption. The Beta-Poisson model was used for estimating the rotavirus dose–response relationship and risk was expressed as infection per person per year (pppy). A tolerable risk of 10⁻² pppy based on modification of the WHO drinking water recommendations was chosen for this study (WHO 2004). Risks fell within a range of 10⁻⁴–10⁻² pppy when wastewater standards ranged from 10³ to 10⁵ CFU E. coli/100 ml, respectively. These results are consistent with the WHO standard of <10³ fecal coliform/100 ml for unrestricted irrigation of salad crops and vegetables (WHO 1989), which would theoretically correspond to a risk of about 10⁻⁴ pppy. These findings were also in relatively good agreement when compared to epidemiological data collected from an outbreak.
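When risk is expressed per person per year, per-exposure infection probabilities are combined over the number of exposure events assumed to occur in a year; in generic form:

\[
P_{\mathrm{annual}} = 1 - \prod_{i=1}^{n}\bigl(1 - P_{\mathrm{inf}}(d_{i})\bigr) \approx 1 - \bigl(1 - P_{\mathrm{inf}}(d)\bigr)^{n},
\]

where n is the number of exposures per year and the approximation applies when each exposure carries a similar per-event risk.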

In their second study, Mara and Sleigh (2010) estimated the risk of NoV infection associated with consuming wastewater-irrigated lettuce using methods and parameters similar to the ones above, with minor modifications and some different assumptions. For instance, the dose–response model was based on the work of Teunis et al. (2008) and 1.1 × 10⁻³ pppy was designated as the tolerable norovirus disease risk based on loss in Disability Adjusted Life Years (DALYs), adapted from WHO wastewater guidelines (WHO 2006). Wastewater quality ranging from 10⁶ to 10⁸ CFU E. coli/100 ml resulted in a median NoV infection risk of 1 pppy. A risk close to the tolerable level occurred at 10¹–10² CFU E. coli/100 ml. The investigators concluded that, if wastewater treatment and post-treatment together resulted in a 5–6 log10 reduction in E. coli, consistent with the 6-log10 reduction in rotavirus recommended by the WHO (2006), it should be possible to achieve adequate NoV inactivation. This assumed that a hurdle type approach was used, one that relied on moderate inactivation during wastewater treatment and a high degree of inactivation during post-treatment, e.g., natural die-off and produce washing and/or disinfection.
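The DALY-based tolerable level used here is compared against an estimated disease burden which, in simplified generic form, is the annual probability of illness scaled by the burden per case:

\[
DB = P_{\mathrm{ill,annual}} \times db_{\mathrm{case}},
\]

where DB is the disease burden in DALYs per person per year and db_case is the DALYs lost per case of illness; guideline calculations of this type sometimes include an additional factor for the fraction of the population that is susceptible.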

Petterson et al. (2001, 2002) performed a “screening-level” risk assessment for the consumption of lettuce irrigated with secondary-treated sewage. They examined two primary factors: the quantity of human enteroviruses in irrigation water and the loss of viable virus particles on lettuce over time (up to 14 days post irrigation). They integrated data from the literature concerning enterovirus concentrations in secondary-treated effluent; the rate of decay of enteric viruses on the crop and during storage; the amount of water (and hence virus) that attaches to the surface of lettuce; lettuce consumption rates; and dose–response information, extrapolated from known data on rotaviruses. The levels of enterovirus in effluent were derived directly from treatment plant data. A bacteriophage provided the estimate for viral decay post-irrigation. Although initially the investigators reported risks higher than the EPA standard of 1 case per 10,000 people per year consuming finished (treated) water (EPA 1989), in a later erratum, they noted a miscalculation in the bacteriophage inactivation parameters, making the actual rate of decay more rapid. This would mean that estimates of infection would have been reduced and actually fallen below the EPA benchmark of 1 in 10,000 cases, highlighting the importance of virus decay rates in calculating risk.

Barker (2014) performed a very comprehensive QMRA to estimate NoV gastroenteritis risks associated with consuming vegetables irrigated with highly treated municipal wastewater in Melbourne, Australia. The study focused on vegetables that are typically eaten raw and irrigated with recycled water, specifically broccoli, cabbage, lettuce, and cauliflower. The author used published information on the prevalence and amount of NoV in raw sewage (based on sampling or epidemiology) in Melbourne and surveys on produce washing in Melbourne households, and even analyzed water samples of their own. Other inputs included the duration of NoV shedding; viral decay after a holding period (0 or 1 day); and a reduction in virus numbers following a potential washing step, among others. A predictive model for efficacy of wastewater treatment was developed and a Beta-Poisson dose–response model was used based on the work of Teunis et al. (2008). Modeled scenarios included variations in water quality, time of year, type of vegetable, time from last irrigation, and qualities relating to consumers (i.e., washing practices, foods consumed, and body mass).

There were large differences in prevalence and concentration of human NoV as a function of estimation method (sampling vs. epidemiological) (Barker 2014). This resulted in highly variable annual disease burden estimates ranging from a low of 10⁻¹⁵ to a high of 10⁻⁶ DALYs/person/year. The sampling method provided much lower estimates (by 4–5 log10) than the 10⁻⁶ threshold, expressed as DALYs/person/year. The epidemiological method produced risk estimates that were occasionally up to 2 log10 DALYs/person/year above baseline. A third method, considered the most representative and which included an adjustment factor for NoV prevalence, produced disease burden estimates >2 log10 DALYs below the threshold.

Lettuce carried the highest risk of all the produce types, but realistically, it also probably had the highest rate of consumption of the four items (Barker 2014). The daily probability of developing illness was most affected by the cumulative water treatment impacts on decreasing the viral load, followed by consumption rate, and the reduction in viral load when vegetables were washed. The initial parameters for estimating the concentration of NoV in both raw and treated sewage were the most important source of variability. In conclusion, the author opined that washing of vegetables and an irrigation withholding period before consumption were the most likely actions to significantly reduce the risk of NoV gastroenteritis. Their results also suggested the current water reuse procedures in Melbourne did not pose an increased risk of disease.

6.1.2 Fresh Produce Along the Farm-to-Fork Chain

In the only study of its kind, Bouwknegt et al. (2015) developed a farm-to-fork QMRA model to quantify the relative importance of potential contamination routes along the fresh produce supply chain. Raspberry and salad vegetable supply chains were modeled and risks associated with NoV, adenovirus (as a general indicator of human fecal matter), and HAV were evaluated. Conceptually, the model was broken down into production (including irrigation water and harvesters’ hand modules) and processing (including hands, rinse water, and conveyor belt cross-contamination modules). Virus inactivation was followed through each module, as appropriate. Three salad vegetable supply chains and two raspberry supply chains were modeled, each consisting of different combinations of inputs into each module. Due to lack of supporting data, the investigators did not consider certain contamination routes, including direct human fecal contamination in growing fields or use of contaminated pesticides. Contamination during food preparation in consumer kitchens was also not considered.

Data for some model parameters, such as potential contamination points and viral contamination levels, came from the European VITAL project (Bouwknegt et al. 2015). Information on food handling practices, efficacy of virus transfer and removal in different settings, virus persistence over time, and others, were derived from the scientific literature. The hypergeometric dose–response model of Teunis et al. (2008) was used for NoV infection, and the exponential dose–response model of Haas et al. (1999) was used for HAV jaundice (disease).

Overall, the simulations showed very low risks. In fact, no pathogenic viruses were predicted to be found in the berry supply chain, and risk per serving of lettuce was around 3 × 10⁻⁴ (6 × 10⁻⁶ to 5 × 10⁻³) for NoV and 3 × 10⁻⁸ (7 × 10⁻¹⁰ to 3 × 10⁻⁶) for HAV (Bouwknegt et al. 2015). However, a main source of uncertainty was in the dose–response models, not unusual for this type of work. The model demonstrated that hand contact led to more virus contamination of produce than irrigation water, wash water, or cross-contamination via conveyor belt. Sensitivity analysis results differed by product but, in general, the model was most sensitive to virus concentration at potential contamination points and virus removal efficiency (for rinsing steps). The investigators concluded that encouraging best practices in hand hygiene for this product sector would lead to optimized food safety outcomes.

6.2 Molluscan Shellfish

Fecal contamination of harvesting areas is the predominant route of enteric virus introduction in shellfish (FAO/WHO 2008). Bivalve mollusks, such as oysters, are known to bioaccumulate human enteric viruses in their gastrointestinal tracts, allowing them to become contaminated when grown in areas impacted by human fecal matter (Maalouf et al. 2011). In contrast, the chance of direct human contact with shellfish leading to contamination (e.g., from an infected individual shucking oysters) is considered to be relatively low (FAO/WHO 2008). One major data availability issue is that shellfish production and monitoring has historically been regulated based on fecal indicator levels in water, such as fecal coliforms or E. coli (FDA 2011). The presence or concentration of these fecal indicators does not correlate with viruses, and there are limited data on contamination prevalence in the absence of regular screening of shellfish and their waters for human enteric viruses.

Pintó et al. (2009) used a QMRA approach to estimate the levels of HAV in frozen, imported Peruvian coquina clams that were associated with actual outbreaks. Initially, they used molecular amplification methods to estimate virus numbers in the implicated product from two outbreaks. These data were then used to estimate exposure risk by mathematical modeling that included variables related to methodology (i.e., adjustments for recovery efficiency and infectivity); virus concentration reductions due to cooking; prevalence of contamination; and consumption. Choice of the best-fit dose–response model was made using a combination of the microbiological data, outbreak attack rates, and previously reported enteric virus models associated with human challenge studies. When a Beta-Poisson model was applied to echovirus 12 data, the researchers were able to estimate infection risks corresponding to consumption of lightly cooked clams on a per-batch basis, which matched the corresponding epidemiological data. A correlation between the prevalence of HAV cases and positive virus detection (44 % of samples positive) in clams associated with outbreaks was demonstrated, in contrast to the absence of virus detection in clams randomly tested from the same batches as the outbreak-associated clams. This led the investigators to discuss the value of setting critical limits for potential viral contamination sources discharged into growing waters, and using targeted, direct, quantitative viral testing of shellfish to manage the situation when these critical limits are exceeded.

Thebault et al. (2012) performed a QMRA for HAV in shellfish harvested from contaminated production areas. Their final output was the overall annual risk of contracting symptomatic hepatitis A among adult consumers of raw oysters in France. They generated two scenarios for mathematical modeling: one simulating a brief, accidental (incidental) contamination event and the other simulating regular or prolonged (endemic) contamination. Variables were similar to those described above for Pintó et al. (2009), including the use of a Beta-Poisson dose–response model fitted to echovirus 12 data. Risk reduction was calculated as a percentage of the baseline risk, and seasonal variation in oyster consumption was also considered.
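
Two quantities mentioned here can be written compactly. Under the simplifying assumption that servings are independent (a minimal formulation, not necessarily the exact one used by Thebault et al. 2012), the annual risk for a consumer of n servings per year and the percent risk reduction attributed to an intervention are:

\[ P_{\mathrm{annual}} = 1 - \left(1 - P_{\mathrm{serving}}\right)^{n} \]
\[ \text{Risk reduction (\%)} = 100 \times \frac{P_{\mathrm{baseline}} - P_{\mathrm{intervention}}}{P_{\mathrm{baseline}}} \]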

Using the QMRA, Thebault et al. (2012) compared fourteen surveillance and risk management practices. These were grouped into several major strategies: one using E. coli as an indicator; another using HAV testing, with or without confirmation and at various frequencies; and the last using controlled purification with or without virus testing. The mitigation strategies were further subdivided based on parameters such as sensitivity of detection, confirmation of results, frequency of sampling, number of negative results required before reopening an area for harvesting, and time to action. Direct HAV monitoring resulted in greater risk reduction than did the use of conventional bacterial indicators. In both contamination scenarios, twice-monthly virus testing was an effective risk management strategy, avoiding about 40–50 % of the baseline cases. When contamination was accidental and homogeneous, waiting for three negative test results to reopen an area for harvesting was not effective in reducing risk; however, when contamination was endemic, waiting for the three negative results was effective in preventing human cases. Any control measure that could reduce contamination by at least 2 log10 units (e.g., improving sanitation, harvesting from lower-risk areas) resulted in the greatest risk reduction (87–88 %). This exercise is a good example of how QMRA can be used to evaluate candidate risk mitigation approaches.

6.3 RTE Foods and Food Handling

Ready-to-eat (RTE) foods, defined here as products that undergo extensive human handling without a terminal heating step, are the most significant vehicle for transmission of foodborne viral illness. In fact, food handler contact with raw and RTE foods was the most common source of foodborne outbreaks in the US from 2001 to 2008 (Hall et al. 2012), and was implicated in 70 % of NoV outbreaks linked to a contaminated food from 2009 to 2012 (Hall et al. 2014). RTE foods have also been associated with a number of high-profile outbreaks (Friedman et al. 2005; Malek et al. 2009; Becker et al. 2000). The most likely cause of this phenomenon is an infected individual practicing poor personal hygiene and, especially, engaging in bare-hand contact while preparing food. However, such individuals can also contaminate utensils, preparation surfaces, restroom surfaces, play areas, and other environmental sites. Since human NoV is released, and likely aerosolized, during projectile vomiting, vomiting events can also serve as a source of virus to contaminate foods (de Wit et al. 2007; Patterson et al. 1997).

Mokhtari and Jaykus (2009) created a probabilistic exposure assessment that modeled the dynamics of human NoV transmission in the retail food preparation environment. The model was conceptualized in accordance with the personal hygiene risk management triad, which is based on the interrelationships among contaminant source, cross-contamination, and hygiene efficiency and compliance. With the restroom environment serving as the virus reservoir, the model tracked NoV transmission among employees' hands, food contact surfaces, and food products. Key model inputs included the degree of fecal shedding, hand hygiene behaviors, efficacy of virus removal and/or inactivation, and transferability of virus between surfaces. The model was temporal in nature, beginning with an infected food handler failing to practice adequate personal hygiene and following his or her movement through the restaurant environment, including food preparation, over an 8-hour shift.
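
The bookkeeping behind this kind of exposure model can be illustrated with a deliberately simplified Python sketch that tracks virus loads on hands, a shared work surface, and individual servings over a sequence of contact events during a shift. The event probabilities, transfer fractions, shedding level, and handwashing parameters below are hypothetical placeholders and do not reproduce the inputs of Mokhtari and Jaykus (2009).

    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical inputs (placeholders, not the authors' values)
    shed_per_g_feces = 1e8        # NoV particles per g of stool
    feces_on_hands_g = 1e-6       # g of fecal residue remaining after a restroom visit
    t_hand_surface = 0.30         # fraction exchanged per hand-surface contact
    t_hand_food = 0.20            # fraction transferred per hand-food contact
    wash_log_reduction = 2.0      # log10 removal per handwashing event
    wash_compliance = 0.5         # probability that a scheduled wash actually happens

    hands, surface = 0.0, 0.0
    food_loads = []

    for _ in range(200):          # contact events over a shift
        event = rng.choice(["restroom", "wash", "surface", "food"],
                           p=[0.05, 0.15, 0.40, 0.40])
        if event == "restroom":   # recontamination of hands
            hands += shed_per_g_feces * feces_on_hands_g
        elif event == "wash" and rng.random() < wash_compliance:
            hands *= 10.0 ** (-wash_log_reduction)
        elif event == "surface":  # two-way hand-surface exchange toward equilibrium
            moved = t_hand_surface * (hands - surface) / 2.0
            hands, surface = hands - moved, surface + moved
        elif event == "food":     # one-way transfer onto a serving
            transferred = t_hand_food * hands
            hands -= transferred
            food_loads.append(transferred)

    print(f"servings with >10 particles: {np.mean(np.array(food_loads) > 10):.1%}")

A full model would additionally distinguish gloved from bare hands, represent multiple workers and surfaces, and sample each input from a distribution rather than using point values, as the authors did.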

From the model, the researchers identified key risk factors in food preparation that resulted in significant NoV contamination of foods, defined as >10 infectious particles per serving (Mokhtari and Jaykus 2009). Not unexpectedly, the simulations showed the highest virus levels on hands, followed by surfaces and gloves, implicating hands as the most important mode of transmission. Gloving and handwashing compliance were found to be the most important practices for preventing contamination of foods when an infected worker was present on-site. Sensitivity analysis revealed that the mass of feces on hands, the concentration of NoV in the stool of ill individuals, the number of restroom visits by the employee, the level of compliance with glove use, and handwashing efficacy and compliance were the inputs having the greatest impact on risk. A novel aspect of this study was consideration of the joint effects of handwashing compliance and gloving compliance, or of handwashing compliance and efficacy. Using what-if scenario analysis, the authors demonstrated that combinations of these practices were critical to keeping contamination levels below 10 infectious particles per serving. In short, no single intervention would eliminate significant NoV contamination of food if an infected food handler were present on the premises; hence, control measures should take a multi-pronged approach.

Stals et al. (2015) produced a quantitative exposure model to simulate transmission of NoV from the hands of infected workers to deli sandwiches at a sandwich bar. In their simulations, three employees performed their duties using shared cutlery and a shared work surface during a three-hour shift. The model structure was quite similar to that of Mokhtari and Jaykus (2009), although it was designed to accommodate NoV-contaminated lettuce as an ingredient. Many of the variables were also similar to those of Mokhtari and Jaykus (2009), with values drawn from the published literature. The group also performed a two-week observational study at two deli establishments to estimate the number of contact events among hands, foods, and surfaces during food preparation. Four possible interventions were considered: hand disinfection with an alcohol-based sanitizer; surface cleaning with a cloth, with or without disinfectant; no bare-hand contact (glove use); and handwashing after restroom use.

Simulations revealed that a single infected food handler readily transferred viruses to hands, surfaces, and sandwiches. In contrast, use of contaminated lettuce in sandwich making yielded much lower numbers of virus particles on food. In a worst-case scenario in which both contamination sources (infected food handler and contaminated lettuce) were present and no intervention was applied, most (96 %) of the sandwiches contained more than the predetermined ID50 of 18 virus particles. Of the individual intervention measures considered, handwashing following restroom use was the only one that had a substantial impact on transmission of virus to hands, surfaces, and foods, and the degree of impact depended strongly on compliance. For example, at low and intermediate levels of handwashing compliance, the fractions of deli sandwiches containing >18 virus particles were 91 and 65 %, respectively, whereas with high compliance no sandwiches exceeded the ID50. Applying all four measures together reduced NoV concentrations on sandwiches to negligible levels (<7 virus particles).
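
The compliance-dependent results reported by Stals et al. (2015) reflect the kind of what-if scenario analysis sketched below in Python, in which the fraction of servings exceeding a dose threshold is tabulated across handwashing compliance levels. The baseline load distribution and the assumed 2-log10 removal per wash are illustrative assumptions and are not the authors' values; only the 18-particle threshold is taken from the study.

    import numpy as np

    def frac_over_threshold(wash_compliance, n=50_000, threshold=18, seed=3):
        """Fraction of simulated servings exceeding a dose threshold at a given
        handwashing compliance level; all distributions are hypothetical."""
        rng = np.random.default_rng(seed)
        # virus load per serving with no intervention (placeholder distribution)
        baseline = rng.lognormal(np.log(100), 1.5, n)
        washed = rng.random(n) < wash_compliance          # wash occurred before assembly?
        load = np.where(washed, baseline * 10.0 ** (-2.0), baseline)
        return float(np.mean(load > threshold))

    for c in (0.0, 0.25, 0.50, 0.90):
        print(f"compliance {c:.0%}: {frac_over_threshold(c):.1%} of servings exceed 18 particles")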

6.4 Synthesis Comments

Some common conclusions can be drawn from the risk-related modeling studies described in this chapter. The most compelling consensus comes from the several models applied to virus infection or disease risks associated with the use of reclaimed water for irrigation of food crops (Hamilton et al. 2006; Mara et al. 2007; Mara and Sleigh 2010; Petterson et al. 2001, 2002; Barker 2014). Virtually all of these studies concluded that the risk to human health from these practices was minimal, particularly when the water was treated to achieve a 4-log10 or greater reduction in enteric viruses (EPA 1989). In the only production/processing risk assessment of its kind, Bouwknegt et al. (2015) demonstrated that HAV and NoV risks associated with the consumption of berries and lettuce were minimal and that, where risk was elevated, hand contact was a much more significant source of virus contamination than either irrigation water or the produce washing process. Models applied to molluscan shellfish (Thebault et al. 2012; Pintó et al. 2009) focused on questions associated with direct testing for virus contamination for harvest water classification and for managing disease risk. Both studies concluded that such testing would be appropriate under certain circumstances, particularly when contamination was continuous.

The two studies that addressed retail food handling (Mokhtari and Jaykus 2009; Stals et al. 2015) concluded that the hands of infected workers who do not practice adequate personal hygiene are the predominant source of virus contamination of RTE foods. These studies were unique in their modeling of single and combined risk mitigation strategies, concluding that compliance with handwashing was the most effective control measure but noting that a combined approach (e.g., adding surface sanitation and glove use) would be necessary to reduce virus concentrations in these foods to levels associated with negligible risk.

7 Conclusions

The application of risk assessment principles to understand the dynamics of virus transmission via the food supply, estimate risk to human health, and evaluate potential mitigation strategies is a relatively new area that evolved directly from earlier work in QMRA related to water. A small body of risk assessment work on foodborne viruses has, however, emerged in the last decade. The field was initially constrained by a paucity of data to support values and distributions for key inputs, but studies on the prevalence, persistence/resistance, transferability, and other environmental features of these viruses have recently been published. New human challenge studies using GII.2 and GII.4 strains have been completed (although not yet reported), which should improve hazard characterization. Standardized test methods, improved ability to discriminate virus infectivity status using molecular methods, and better surrogates are moving the field along as well. As national and international regulations begin to be promulgated, there will be an increasing incentive to perform QMRA as applied to viruses in foods.

Despite new data and models, hurdles will remain. For example, it is not yet known whether virus-to-virus or strain-to-strain differences affect the likelihood of exposure or the degree of public health risk. In addition, we now know that susceptibility to human NoV strains is, in part, genetically mediated, but only a few dose–response models consider this fact. Hepatitis A and rotavirus vaccines are now widely used and likely provide long-lasting immunity, reducing the size of the susceptible population; again, this is not usually considered in hazard characterization. Estimates of infectivity can be generated, but until a cultivable human NoV strain is available, they will remain estimates. The diversity of foods, human populations, and production/processing/preparation techniques means that a “one size fits all” model is not feasible. Consequently, risk assessments will remain diverse and will continue to rely on assumptions and on poorly characterized or incomplete data. Nevertheless, risk assessment remains a valuable means of integrating science in support of risk-based decision-making. In the absence of a crystal ball, is there anything better?