1 Introduction

1.1 Epidemics and Human History

Infectious diseases pose an ever-present danger to human societies. Despite tremendous advances in medical care, roughly one quarter of worldwide human deaths are attributed to infectious and parasitic disease (Mathers et al. 2008). Several seemingly unalterable aspects of urban life, including long-distance travel and dense human contact networks, facilitate outbreaks from both known and newly evolved pathogens.

An epidemic is defined as a widespread occurrence of infectious disease in a community at a particular time. The 14th-century bubonic plague, or “Black Death”, was the most devastating epidemic in human history (Benedictow 2004). Death rates were as high as 25–60 % in Europe, Africa, and Asia from a disease caused by the bacterium Yersinia pestis, which persists in rodent populations and is transmitted to humans by fleas. Close contact between humans and rats, together with worldwide travel, contributed to the global impact of bubonic plague, which appears to have originated in Asia and traveled to Europe via trade routes (especially rat-infested ships).

The most destructive modern pandemic was the 1918 influenza, which infected one third of the world’s population (about 500 million people) and killed 50–100 million between January 1918 and December 1920 (Taubenberger and Morens 2006). “Spanish influenza”, as the disease was named, is caused by the H1N1 virus, which is endemic in pigs and birds and periodically crosses into human populations. The 1918 strain was highly lethal and showed an unusual relationship between lethality and patients’ age: 50 % of deaths occurred in the 20–40 age group, the opposite of the pattern for milder flu strains (higher mortality among the very young and the aged). This partly reflected the impact of WWI, during which contagion passed among troops both in training facilities and during warfare. However, the strain also had an unusual and lethal property: virulence was enhanced by a human immune over-reaction called a “cytokine storm”, which causes the lungs to fill with fluid. Several aspects of the epidemic are instructive: a deadly pathogen arose from a jump from animal to human (close between-species interactions were important in the origin of the virus), a few mutations were sufficient to confer strong lethality, and human travel allowed rapid spread (close quarters and massive troop movements helped to spread the virus and allowed new mutations to propagate quickly).

This chapter will focus on biological factors that are relevant for understanding and controlling epidemics. We will briefly describe some pathogens that cause human disease and their transmission mechanisms before analyzing the 2002–2003 SARS epidemic as a case study of a modern urban epidemic. Disease models will be discussed with the goal of determining how human societies can prepare to minimize the impact of future disease outbreaks.

1.2 Pathogens and Transmission Mechanisms

Infectious diseases can be classified into two broad categories based on their pattern of transmission (Table 1). “Long-range” infectious diseases are infections that do not require close contact for transmission. For example, water-borne diseases, such as cholera, can rapidly spread throughout a community when the supply of drinking water becomes contaminated with the pathogen Vibrio cholerae through poor sanitation or hygiene practices. Food-borne infections follow a similar transmission pattern to water-borne diseases. Transmission through contaminated food and water is also known as “fecal-oral transmission” because fecal matter is often the source of contamination while oral ingestion is the primary route for infection (Mount Sinai Hospital 2007). Diseases transmitted by an animal vector, such as bubonic plague, are also considered long-range infections because a vector facilitates the spreading of the pathogen and direct contact is not necessary. One interesting aspect of some vector-borne infections is that direct contact with an infected individual cannot transmit the infection without the help of the vector. For example, dengue fever, caused by a mosquito-borne virus, can only be transmitted through the bite of an infected mosquito (US Centers for Disease Control and Prevention 2014). In contrast, plague, caused by bacteria carried by rodents and their fleas, is primarily transmitted through flea bites, but contact with contaminated body fluids such as blood can also lead to infection (US Centers for Disease Control and Prevention 2015a).

Table 1 Classification of selected infectious diseases based on their mode of transmission

In general, fecal-oral and vector-borne diseases are infections transmitted through an environmental (water, food) or a biological (animal) carrier that extends transmission range to large distances, but other routes are also possible depending on the specific pathogen.

Compared to long-range diseases, “short-range” infectious diseases are infections that transmit over limited distances and may require close or direct physical contact with an infectious individual. Examples of short-range infections include pathogens that infect via contaminated airborne particles or expectorated droplets, and diseases that require contact with skin or bodily fluids such as blood or semen. Infections capable of airborne transmission have the widest range among short-range infections and are caused by pathogens that spread through minute solid or liquid particles suspended in the air for an extended period of time (Mount Sinai Hospital 2007). In addition, the pathogen must be resistant to desiccation to remain viable for long periods of time outside its host. Respiratory diseases are commonly believed to spread via airborne transmission of contaminated particles expectorated from coughing and sneezing. However, many respiratory pathogens do not have the capacity to withstand dry environments. Instead, these pathogens transmit via “droplets” (expectorated moisture particles that are too big to remain suspended in the air indefinitely), which ensure ample moisture while outside the host. Transmission occurs when contaminated droplets from an infected individual come in contact with surfaces of the eye, nose, or mouth. This mode of transmission is called “droplet contact”. Although diseases spreading via droplet contact have a more limited range than truly airborne infections, in later sections we will show how environmental factors can extend the range of droplet transmission. Finally, diseases that transmit via direct contact generally have the most limited transmission range, and some have stringent requirements for transmission. In the case of Ebola, the disease is transmissible only via direct exposure of broken skin or mucous membranes to contaminated body fluids like blood, urine, and semen, and excretions such as vomit and feces (US Centers for Disease Control and Prevention 2015c). Sexually transmitted diseases like HIV/AIDS are a special form of direct-contact infection that requires sexual intercourse or sharing contaminated needles for exposure (US Centers for Disease Control and Prevention 2015b). Thus, short-range infections are characterized by some dependence on distance for infection and can be transmitted directly without a carrier.

Distinguishing between these two classes is important because measures to alleviate and control the spread of long-range infections are not applicable to short-range cases, and vice versa. For instance, targeting the carrier or vector of the disease to control the spread of long-range infections (e.g., decontaminating or blocking off access to contaminated water or food) and reducing exposure to vectors of the disease are irrelevant for mitigating the spread of short-range infections. In contrast, measures to control short-range diseases, such as limiting person-to-person contact and imposing quarantine procedures, do little to alleviate the spread of water-borne or vector-borne illnesses. Thus, identifying the mode of transmission is crucial to controlling the spread of any contagious infection. However, we will show that the distinction between long- and short-range transmission is not always clear-cut.

In this chapter, we focus on the emergence and spreading of Severe Acute Respiratory Syndrome (SARS), the first worldwide pandemic in the age of globalized air travel and telecommunications. Through theoretical analyses and data gathered from the epidemic, we examine how globalization exacerbates the problem of containing epidemics and show how urban environments can be especially prone to epidemics. The emergence and control of the SARS epidemic is extensively documented. Research on both the origin and epidemiology of the outbreak, as well as the biological underpinnings of the disease, makes it an excellent case study for determining methods to enhance urban resilience to epidemics.

2 The 2002–2003 SARS Outbreak: A Modern Urban Epidemic

The history of the 2002–2003 global outbreak of Severe Acute Respiratory Syndrome (SARS) provides key lessons on biological and policy factors that should be of general importance in designing resilient cities. We will summarize the history of the epidemic, with a focus on biological factors, before our discussion of disease models.

According to the World Health Organization (WHO), over 8000 SARS cases and over 770 deaths occurred in 30 countries, mostly over a period of about four months (Kamps and Hoffmann 2003). The severe, “atypical” pneumonia originated in Guangdong Province in southern China in mid-November 2002. Most of the early cases appear to have occurred among those who kill and sell animals and meat, as well as food preparers and servers (Breiman et al. 2003). By mid to late January 2003, the disease began to spread rapidly within the province, but a combination of symptoms difficult to distinguish from other pneumonias (fever, dizziness, muscle soreness, coughing) and government policy discouraging media coverage delayed reporting of the epidemic until February 11. The initial communication reported 305 cases (including >100 healthcare workers) and 5 deaths, but claimed that the epidemic was under control (Enserink 2013).

Superspreading and amplification in hospitals remained characteristic of SARS as it grew into a worldwide epidemic. The first of several superspreading events (generally defined as ten or more transmissions from a single infected individual) occurred in Hong Kong on February 21, 2003 (Braden et al. 2013). The index case was a physician from Guangdong who stayed at the Hotel Metropole. The physician had treated SARS patients in Guangdong (although the disease was still unrecognized) and showed symptoms before his trip. He stayed only one night at the hotel before being hospitalized with severe symptoms, but the short stay was sufficient to spread the infection to 13 or more of the guests from the same floor of the hotel, as well as a Hong Kong resident who visited one of the guests. Eventually, over 4000 (almost half) of the documented 2003 SARS cases could be traced to this index case. Remarkably, there was no known direct contact in most of the transmissions among the hotel guests and visitors. The Hong Kong resident who visited a friend in the hotel subsequently infected over 140 others at the Prince of Wales Hospital in Hong Kong. Others were business and holiday travelers who spread the pathogen to Canada, Vietnam, and Singapore. As we will discuss below, this high transmission rate with little close contact in the Metropole Hotel remains mysterious.

Rapid recognition of the new epidemic was aided by a WHO disease expert, Carlo Urbani, who was asked to examine patients in a Hanoi hospital. The patients included one of the Metropole guests and roughly 20 hospital staff who fell ill not long after his admission. Urbani recognized a severe, and possibly new, disease and warned WHO headquarters as well as the hospital and the Vietnamese government before contracting, and eventually dying from, the disease (Bourouiba et al. 2014). Response time is a critical parameter in epidemic control, and his warning played a large role in the effort to subdue the epidemic. WHO designated a new disease, “severe acute respiratory syndrome” (SARS), on March 10 and issued a global health alert on March 12, followed by an emergency travel advisory on March 15. The etiological agent of SARS was later discovered to be a novel coronavirus and was named SARS-associated coronavirus (SARS-CoV). This discovery, in late March 2003, came as a surprise to disease experts because previously known human coronaviruses caused only mild illness. In animals, related viruses were known to cause fatal respiratory as well as neurological diseases, but coronaviruses are usually highly species-specific (Kamps and Hoffmann 2003).

Forensic analysis of the Metropole Hotel in late April 2003 revealed traces of the SARS virus in the common areas of the 9th floor, including the corridor and elevator hall. However, no viral traces were found inside the guest rooms of the infected guests (the ventilation systems employed positive pressure within the guest rooms so that air was not shared among rooms). Respiratory droplets, or suspended small-particle aerosols generated by the index case-patient, are the most likely transmission mechanism (Braden et al. 2013). SARS and other respiratory infections are considered to undergo short-range (approximately 1 m) transmission via pathogen-laden droplets from host coughing or sneezing. Such transmission requires “close contact”: physical proximity between infected and susceptible individuals, who can be infected when sprays of large droplets enter their bodies via air or touch. However, minute droplets, or even solid residues that can arise via evaporation (droplet nuclei), may allow indirect and/or long-range transmission (Bourouiba et al. 2014). For example, contaminated gas clouds that form during coughing or sneezing may have carried the pathogen and extended its transmission range, blurring the distinction between droplet contact and airborne modes of transmission. Aerosol transmission probably caused high infection rates on an airline flight (Air China 112) from Hong Kong to Beijing in which a single 73-year-old individual infected at least 20 others (Olsen et al. 2003). This feature of the disease may be highly relevant for medical and urban policy. Long-range aerosol/nuclei transmission does not require direct contact between infected and uninfected individuals and can greatly elevate the number of “contacts” for a given infected individual. Interestingly, genetic analysis showed that several SARS strains entered Hong Kong, but only the Hotel Metropole index case was associated with the subsequent global outbreak (Guan et al. 2004).

A related superspreading event occurred at a crowded high-rise residence, the Amoy Gardens, in Hong Kong. Many of the infected individuals inhabited vertically stacked apartments (in contrast to the transmissions on a common floor in the Metropole Hotel case). Malfunctioning sanitary drainage fixtures that allowed air and SARS-contaminated aerosols to flow back into residents’ bathrooms may have been the main driver of infection spread in the condominium (Stein 2011). The superspreader was likely a medical patient undergoing treatment for a kidney problem, including hemodialysis, a medical treatment that inhibits immune capacity (Stein 2011). The index case carried a high viral load and suffered from diarrhea. An important feature of this event was, again, a lack of direct contact between the spreader and the individuals he infected, and the “opportunity” for the pathogen to be exposed to a large number of individuals through airborne transmission (Yu et al. 2004). At the Amoy Gardens, more than 300 individuals showed symptoms of SARS almost simultaneously.

High rates of hospital (nosocomial) transmission were an important and disturbing characteristic of the SARS outbreak. The large fraction of infections among healthcare workers probably reflects a combination of contact with respiratory secretions from patients at a highly contagious stage (critically ill individuals were also the most infectious) and medical procedures that inadvertently generated aerosol contamination. A single patient appears to have transmitted infections to over 140 hospital staff in a span of two weeks at the Prince of Wales Hospital in Hong Kong (see below).

Two other superspreading events occurred in hospitals in other countries (Braden et al. 2013). One infected patient (the son of one of the Hotel Metropole guests) caused over 100 infections among patients, visitors, and healthcare workers at an acute care hospital in Toronto, Canada. Finally, although Taiwan instituted strict port-entry screening and isolation of potentially exposed travelers entering the country, there was an outbreak in the Ho Ping Hospital which spread into the community. In spite of a lock-down quarantine of over 1000 people in the hospital (including a large fraction of uninfected individuals), over 600 cases emerged before the outbreak was contained.

The initial rapid spread of SARS caused widespread concern and panic and the epidemic seemed unstoppable. However, the disease was eventually contained within several months through efforts coordinated by the WHO. Although advances in biomedical science and cooperative efforts among laboratories played key roles in isolating the infectious agent, “classic” epidemiological practices of patient isolation (separation of infected individuals from the general population), contact tracing, and large-scale quarantine (isolation of non-symptomatic individuals who have had contact with the infectious agent) were the main elements that halted the epidemic (Anderson et al. 2004).

2.1 Key Lessons from the SARS 2003 Crisis

The 2002–2003 SARS pandemic was caused by a moderately transmissible viral infection that produced 2.7 new cases per infection (Riley et al. 2003), and yet it spread to over 30 countries across three continents, potentially exposing tens of thousands of people in the span of only a few months. Several studies have shown that the vast majority of infected cases had very low infectivity and that a few outliers were responsible for a disproportionate number of new infections (Anderson et al. 2004; Riley et al. 2003; Lipsitch et al. 2003; Wong et al. 2004). In fact, Riley et al. (2003) and Lipsitch et al. (2003) found that early in the epidemic, an infected individual would produce only approximately three new infections when outliers are excluded. In Singapore, 81 % of the first 201 probable SARS cases showed no evidence of transmitting the infection, yet 5 cases appeared to have transmitted the disease to 10 or more individuals (Lipsitch et al. 2003). Shen et al. (2004) found a similar pattern in Beijing, where 66 out of the 77 confirmed cases did not infect others whereas four cases were responsible for infecting eight or more.

The rapid spread of SARS despite only moderate average infectiousness has revived interest in the concept of superspreading events and heterogeneity in pathogen transmission. The transmission potential of an infectious disease is often described by the parameter R, the average number of new infections that infected cases produce over the course of their infection. R0 is the transmission potential of an infected individual within an otherwise completely susceptible population (Dietz 1993). However, population-based summary statistics may obscure individual variation in infectiousness and other types of heterogeneity. Woolhouse et al. (1997) have shown that heterogeneities in infectiousness exist such that only 20 % of the host population can contribute at least 80 % of a pathogen’s transmission potential. Individuals who transmit significantly more than the average are called superspreaders. In Hong Kong, apart from the incident at the Hotel Metropole, at least two large clusters of infection were attributed to superspreading events (Riley et al. 2003). Data from the SARS pandemic showed the effect that superspreaders and superspreading events can have on the trajectory of an epidemic. Given their crucial role in intensifying an outbreak, we review the risk factors that facilitate superspreading events.

Co-infection and the presence of a comorbid disease could be risk factors for turning infected individuals into superspreaders (Stein 2011). Studies on HIV/AIDS transmission showed that co-infection with another sexually transmitted pathogen increased the urethral shedding of HIV in infected individuals. Moss et al. (1995) demonstrated that urethral HIV infection is associated with gonococcal infection and that treatment for urethritis may reduce the risk of HIV transmission. In the case of SARS, Peiris et al. (2003) reported that other viral respiratory pathogens, such as human metapneumovirus, were detected in confirmed SARS cases. In addition, the index case in the Prince of Wales Hospital superspreading event was described as having a “runny nose” (Wong et al. 2004), an uncommon symptom for a lower-respiratory tract infection such as SARS. These observations have led to the hypothesis that co-infection or the presence of a comorbid condition could endow an infected individual with characteristics or behaviors that increase their infectiousness (Bassetti et al. 2005). For example, rhinovirus, the major cause of common colds, can cause swelling of nasal tissues that elevates airflow speed and contributes to aerosol production (Sherertz et al. 1996). Rhinovirus co-infection with more serious, but less transmissible, respiratory ailments such as SARS could be an important factor contributing to high infectivity.

Environmental factors also play an important role in facilitating superspreading events (Stein 2011). In the SARS superspreading event at the Prince of Wales Hospital, the index case was placed on a nebulized bronchodilator four times daily for one week (Kamps and Hoffmann 2003). Nebulized bronchodilators are often used to deliver drugs to the lungs of respiratory patients but may have inadvertently aerosolized the virus and left infected droplets in the immediate surroundings leading to extensive dissemination of the pathogen (Tomlinson and Cockram 2003). Tracheal intubation, which involves placing a flexible tube into a patient’s windpipe to maintain an airway to deliver drugs, may also have inadvertently spread SARS within hospitals. Patients often emit respiratory secretions during the procedure.

An outdated ventilation system and overcrowding likely also contributed to the spread of the virus at the Prince of Wales Hospital (Riley et al. 2003; Tomlinson and Cockram 2003). Through a case-control study of hospitals treating SARS patients, Yu et al. (2007) confirmed overcrowding as one of the general risk factors for hospital-based SARS superspreading events. The study covered 86 wards in 21 hospitals in Guangzhou and 38 wards in five hospitals in Hong Kong, and identified as the main risk factors closely arranged beds (less than 1 m apart), a workload of more than two patients per healthcare worker, hospital staff who continued working despite experiencing symptoms of the disease, and a lack of washing or changing facilities for staff.

Despite the explosive growth and global distribution of the SARS outbreak, the pandemic was largely contained through isolation and quarantine, increased social distance, and behavioral adjustments (Bell and World Health Organization Working Group on Prevention of International and Community Transmission of SARS 2004). Isolation and quarantine were shown to significantly interrupt transmission of SARS in several countries, including Hong Kong (Riley et al. 2003), China (Pang et al. 2003), Singapore (Lipsitch et al. 2003), Taiwan (Twu et al. 2003), and Canada (Svoboda et al. 2004). In general, symptomatic cases were immediately placed in isolation while contacts of confirmed infected cases were placed in some form of quarantine. In some cases, contacts were not immediately confined but instead were monitored for the disease and isolated only when symptoms emerged. Confinement was usually at home, but designated facilities were available in countries like Taiwan (Twu et al. 2003). In some cases, individuals under quarantine were allowed to travel with permission from the local health authorities, provided they wore masks and refrained from using public transportation or visiting crowded places. To further reduce the chance of transmission, Hong Kong and Singapore also closed schools and public facilities and canceled mass gatherings to “increase social distance”. People were also required to wear masks when using public transport, entering hospitals, or working in jobs where interaction with numerous people is unavoidable, such as in restaurants (Bell and World Health Organization Working Group on Prevention of International and Community Transmission of SARS 2004). These concerted efforts coincided with the rapid reduction of new SARS cases in several countries. However, because the measures were introduced simultaneously, it is difficult to evaluate the effectiveness of each.

Several characteristics of the infectious agent were important factors in controlling the SARS epidemic. The incubation period from contact with the infectious agent to onset of symptoms was, on average, 4.5 days. Importantly, peak infectivity followed the onset of clinical symptoms, often by 10 days or more (Anderson et al. 2004). Thus, infectious individuals tended to be hospitalized before peak transmissibility. In addition, the roughly two-week interval from exposure to high infectivity gave epidemiologists critical time to perform contact tracing to identify and quarantine potentially infected individuals before they reached high infectivity. This feature, in combination with moderate transmission rates (except in special cases), contributed to making SARS a relatively controllable outbreak.

In the next section, we present current theories on the emergence and spreading of epidemics and review the theoretical underpinnings behind control measures used to contain outbreaks. We briefly highlight different mathematical models used to describe epidemic dynamics in populations. We explain the factors that govern the emergence and transmission of diseases as well as the evolution of pathogens that cause them. Finally, we examine how control measures such as isolation, quarantine, and vaccination mitigate the spread of infections.

3 Theoretical Models of Emerging Infectious Disease

Mathematical models have played an important role in our understanding of disease propagation. If biological factors can be accurately incorporated, such models may have predictive power to evaluate control strategies and guide policy. A key parameter in epidemic models is the total number of new infections that arise from a single affected host, the reproduction number, R. This value determines the outbreak potential of the infection; if R = 1, the infection will be maintained at a constant level (if we ignore random effects). R > 1 leads to disease spread and R < 1 predicts eventual extinction. However, R is not an intrinsic property of the pathogen. Variability of the reproductive number across pathogens, hosts, and environments over time must be understood to accurately model disease.
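To make this threshold behavior concrete, the following minimal sketch (our illustration, not a model from the literature cited here) treats an introduction as a branching process in which every case causes a Poisson-distributed number of secondary cases with mean R; the function name, the generation cutoff, and the `cap` shortcut are assumptions chosen only for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def outbreak_survives(R, generations=25, cap=10_000):
    """One chain of infections: every case causes Poisson(R) new
    cases.  Returns True if the chain is still alive (or has
    exceeded `cap` cumulative cases) after `generations` steps."""
    infected = 1
    for _ in range(generations):
        if infected == 0:
            return False          # chain went extinct
        if infected > cap:
            return True           # clearly a self-sustaining outbreak
        infected = int(rng.poisson(R, size=infected).sum())
    return infected > 0

for R in (0.8, 1.0, 1.5, 3.0):
    runs = 2000
    p = sum(outbreak_survives(R) for _ in range(runs)) / runs
    print(f"R = {R:.1f}: fraction of introductions causing outbreaks ~ {p:.2f}")
```

Note that even for R well above 1, a substantial fraction of single introductions die out by chance (for Poisson offspring the extinction probability q solves q = e^{R(q-1)}), a point that becomes central when variance in R is considered in the next subsection.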

In the following three subsections, we discuss theoretical results on three important aspects of disease outbreaks: (1) the effect of “superspreaders” on the probability of outbreak, (2) the impact of control strategies such as isolation and quarantine, and (3) factors that affect the evolution of pathogen virulence.

3.1 Superspreaders and Outbreaks

The 2002–2003 SARS epidemic was characterized by the large impact of “superspreaders” on disease propagation. In theoretical models, superspreaders can be treated as individuals with large numbers of connections to other individuals. Individual-based simulations incorporating network structures can efficiently address this topic, and in this subsection we introduce three theoretical studies focused on the effect of network structure on disease outbreak.

Lipsitch et al. (2003) studied the effect of superspreaders on outbreak probability using parameters estimated from the SARS outbreak in Singapore. The authors first estimated the distribution of the parameter R, the number of new infections produced by an infected host. Probabilities of outbreak (persistence of initially introduced pathogen lineages) were then determined for R distributions with a fixed mean but differing variance. The authors found that large variance in the R distribution greatly decreased the probability of outbreak (Fig. 1). Contrary to the expectation that superspreaders enhance outbreaks, their result showed that distributions strongly clustered around the mean had higher probabilities of outbreak than distributions that included superspreaders (right-hand tail outliers). One reason for this apparent inconsistency might be the assumption of a fixed mean R. Under this assumption, increased variance in the R distribution increases the numbers of individuals with both extremely high and extremely low R. Individuals with low R are essentially “dead ends” for infection, and high numbers of such individuals decrease outbreak risk.

Fig. 1

Theoretically estimated probability that a single introduced pathogen persists after infinite time under a Markov process with different means (E) and variances (V) of the R distribution. In Lipsitch et al. (2003) this persistence probability is considered the probability of an outbreak. Modified from Lipsitch et al. (2003, Fig. 4A)
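The pattern in Fig. 1 can be reproduced with one standard parameterization of transmission heterogeneity: negative binomial secondary cases, as popularized by Lloyd-Smith et al. (2005), rather than the exact construction of Lipsitch et al. (2003). The sketch below holds the mean R fixed, tunes the variance through the dispersion parameter k, and solves for the outbreak probability of the resulting branching process; all numeric values are illustrative.

```python
def outbreak_prob(R, k, iters=10_000):
    """Probability that one introduction sparks a major outbreak when
    secondary cases are negative binomial with mean R and dispersion k
    (variance R + R**2/k, so small k = many dead ends plus rare
    superspreaders).  Iterates q = G(q) for the offspring pgf
    G(s) = (1 + R*(1-s)/k)**(-k); the outbreak probability is 1 - q."""
    q = 0.0                       # extinction probability, iterated to a fixed point
    for _ in range(iters):
        q = (1.0 + R * (1.0 - q) / k) ** (-k)
    return 1.0 - q

R = 3.0                           # mean held fixed, as in Fig. 1
for k in (0.1, 0.5, 1.0, 10.0):
    var = R + R * R / k
    print(f"k = {k:5.1f}  variance = {var:6.1f}  P(outbreak) = {outbreak_prob(R, k):.2f}")
```

With the mean pinned, raising the variance (lowering k) sharply reduces the chance that an introduction takes off: most chains now begin at a dead-end case and go extinct before ever reaching a superspreader, which is exactly the intuition given above.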

A similar result was obtained by Meyers et al. (2005). This study also focused on the SARS outbreak in Asian countries and used parameters estimated from the case study in individual-based simulations. Meyers and co-workers examined differences in the probability of outbreak among three different contact networks. In the first network, termed “urban”, many individuals have numerous contacts at public places, including schools, hospitals, shopping centers, and workplaces, and more limited numbers of contacts at home. The second network was a power-law network, in which the distribution of the number of connections has a long right-hand tail. In such a distribution, a small fraction of people have large numbers of connections but most people have only a few. The third network was a Poisson network, in which the majority of people have numbers of connections close to the mean. If the existence of superspreaders increased the probability of outbreak, then power-law networks should show the highest outbreak probability. However, similar to Lipsitch et al. (2003), power-law networks showed the lowest probabilities of outbreak. The reason may parallel the one discussed above: in a power-law network, the number of individuals with extremely few connections is elevated compared with the other two networks. Pathogens cannot spread if they infect such individuals and will go extinct before they have a chance to infect superspreaders.
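A compact way to see the Poisson versus power-law contrast is Newman’s generating-function treatment of epidemics on configuration-model networks, used here as a deliberately simplified stand-in for the simulations of Meyers et al. (2005); the exponent 2.3, the degree cutoff, and the per-edge transmissibility T are illustrative assumptions, and the two degree distributions are given the same mean.

```python
import numpy as np

def outbreak_prob(pk, T, iters=2000):
    """Probability that one random initial case sparks a large outbreak
    on a configuration-model network with degree distribution pk and
    per-edge transmissibility T (generating-function framework).
    u = prob. that transmission down one edge fails to ignite a giant
    outbreak; u solves u = G1(1 - T + T*u)."""
    k = np.arange(len(pk), dtype=float)
    mean_k = (k * pk).sum()
    u = 0.0
    for _ in range(iters):
        x = 1.0 - T + T * u
        u = (k[1:] * pk[1:] / mean_k * x ** (k[1:] - 1.0)).sum()
    x = 1.0 - T + T * u
    return 1.0 - (pk * x ** k).sum()

kmax = 200
k = np.arange(kmax + 1, dtype=float)

# Power-law degrees p_k ~ k^(-2.3), truncated at kmax
power = np.zeros(kmax + 1)
power[1:] = k[1:] ** -2.3
power /= power.sum()
mu = (k * power).sum()               # its mean degree (~2.4)

# Poisson network with the same mean degree (pmf built by recurrence)
pois = np.empty(kmax + 1)
pois[0] = np.exp(-mu)
for i in range(1, kmax + 1):
    pois[i] = pois[i - 1] * mu / i

T = 0.8                              # highly transmissible, illustrative
for name, pk in (("Poisson", pois), ("power law", power)):
    print(f"{name:9s}: mean degree {mu:.2f}, P(outbreak) = {outbreak_prob(pk, T):.2f}")
```

At this fairly high transmissibility the power-law network yields the lower outbreak probability despite its hubs, because a randomly placed seed usually lands on one of its many weakly connected nodes. (The ordering can differ near the epidemic threshold, so the comparison should be read as illustrative rather than general.)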

The two studies above indicated reduced probabilities of outbreak for populations that include superspreaders, but this conclusion may be strongly sensitive to model assumptions. Networks with more total connections (including superspreaders) may model urban environments more realistically (this relaxes the assumption of constant mean connectedness). Fujie and Odagaki (2007) modeled superspreaders as individuals with higher infection rates (the strong infectiousness model) or with more connections, including connections to distant individuals (the hub model). They calculated the probability of outbreak under different fractions of superspreaders in a population and showed that, as the fraction of superspreaders increases, the probability of outbreak increases greatly (Fig. 2). They also compared several features of outbreaks, such as the speed of disease spread and the infection path, between the two models and suggested that the hub model is consistent with data from the SARS outbreak in Singapore.

Fig. 2

Theoretically estimated “percolation” probability of a single introduced pathogen under different fractions of superspreaders and population densities in a hub model. In Fujie and Odagaki (2007), this percolation probability (in the sense of percolation theory: a pathogen that has infected an individual at the bottom of a two-dimensional grid eventually reaches an individual at the top of the grid) is considered the probability of an outbreak. As density becomes lower, the distance between individuals becomes longer. The results for different fractions of superspreaders (λ) are shown with different markers. Modified from Fujie and Odagaki (2007, Fig. 4)
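Fujie and Odagaki’s percolation analysis does not reduce to a few lines, but its qualitative message survives in a branching-process caricature of the hub picture: let a fraction f of cases be superspreaders and let the population mean R grow with f, rather than being held fixed. The split into ordinary cases (mean 0.8 secondary infections) and superspreaders (mean 15) is our illustrative assumption.

```python
import numpy as np

def outbreak_prob(f, R_norm=0.8, R_super=15.0, iters=5000):
    """Each new case is a superspreader with probability f
    (Poisson(R_super) secondary cases) and ordinary otherwise
    (Poisson(R_norm)).  Unlike the fixed-mean comparison above, the
    population mean R grows with f.  Iterates q = G(q) for the
    mixture pgf and returns the outbreak probability 1 - q."""
    q = 0.0
    for _ in range(iters):
        q = f * np.exp(R_super * (q - 1.0)) + (1.0 - f) * np.exp(R_norm * (q - 1.0))
    return 1.0 - q

for f in (0.0, 0.02, 0.05, 0.10, 0.20):
    mean_R = f * 15.0 + (1.0 - f) * 0.8
    print(f"superspreader fraction {f:.2f}: mean R = {mean_R:4.2f}, "
          f"P(outbreak) = {outbreak_prob(f):.2f}")
```

Adding even a few percent of superspreaders lifts the population over the R = 1 threshold, and the outbreak probability climbs steadily, in qualitative agreement with Fig. 2; the contrast with the fixed-mean result above lies entirely in what is held constant.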

These contrasting results highlight the need to validate model assumptions in applications to human society. Higher outbreak probabilities with larger total numbers of connections may seem obvious, but this may be the realistic scenario for human society. A key issue is whether the number of connections of one person statistically constrains those of others. If not, a comparison across different fractions of superspreaders, as in Fujie and Odagaki (2007), would more realistically predict the effect of superspreaders on the probability of outbreak. However, if a higher number of connections for one person necessitates reduced numbers for others, the results of Lipsitch et al. (2003) and Meyers et al. (2005) could be more applicable to human society. In either case, models should address both outbreak probability and the nature (explosiveness) of disease spread. Lloyd-Smith et al. (2005) demonstrated that many previous human epidemics appear to have spread through superspreaders (although not to the same extent as SARS). They showed that, although pathogen extinction probability increases with variance in the reproductive number, populations with superspreaders experienced more rapid infection spread in the cases where the pathogen survived. Under their model, host populations may suffer greatly from improbable but explosive epidemics.

3.2 Control Measures

3.2.1 Infection Incubation and Infectivity

The first step to controlling the rise of any infectious disease is to understand how it transmits between hosts. Often, we imagine these infections as readily communicable illnesses that can be caught by even the most fleeting contact. But as we have shown, exposure and transmission depend on the route the pathogen takes. Some diseases can be transmitted even without direct or close contact with an infected individual. We have also shown how particular conditions can make a close-range disease transmit over extended distances, as was the case with SARS transmission in the Amoy Gardens condominium complex. Aside from the mode of transmission, the timing of infectiousness relative to the appearance of symptoms is another crucial factor to consider.

An infectious disease is an illness caused by the presence of a pathogen within the host as well as the host’s response to the invading pathogen. Upon entry into the host, the pathogen begins to increase its numbers by redirecting resources to itself. After a certain time, its presence and the damage it has done to the host trigger an internal host response to thwart the infection. It is at this stage of the infection that overt symptoms appear and the infection can be observed. The time elapsed between exposure to the pathogen and the initial signs and symptoms of the disease is called the “incubation period” of the disease. The length of the incubation period varies among diseases and is affected by several factors, such as dose and route of infection, and host susceptibility and ability to respond to the pathogen. For these reasons, the incubation period is described as a range of values depicting how short or how long it takes before an infection shows symptoms. During this period, the infected individual may or may not be contagious, depending on the type of disease and the individual’s health state. The disparity between the time we observe the symptoms of the infection and consider an individual ill and the time the individual is contagious is an important aspect to consider in modeling as well as in prescribing infection control measures.

The timing varies widely depending on the infectious disease (Fig. 3). In the simplest scenario, the entire period during which an infected individual is contagious begins after the first symptoms of the disease appear and ends well before the symptoms disappear. Such completely overlapping timing, where all symptomatic individuals are infectious, simplifies identification and makes control measures more effective. This timing pattern can be easily modeled by assuming that individuals become infectious at the same moment they become symptomatic. And because the disease spreads specifically through a single class of individuals, control measures can simply target symptomatic individuals to prevent new infections. In the case of SARS, peak infectiousness occurs 7–8 days following the onset of disease symptoms and correlates with viral load over the course of the infection (Anderson et al. 2004). Many believe this pattern helped contain the SARS pandemic (Chau and Yip 2003; Diamond 2003; Fraser et al. 2004) despite the epidemic’s exponential growth and rapid spread to multiple continents.

Fig. 3

Timelines for incubation and onset of symptoms for SARS, HIV, and influenza. For each disease, the expected timing when symptoms are observed is shown by the upper shaded region (blue) while the timing of infectiousness is indicated by the lower shaded region (orange). Marks indicate roughly when diagnosis and isolation of patients are likely to occur based on the onset of clinical symptoms. Modified from Anderson et al. (2004)
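As a concrete version of the fully overlapping case, here is a minimal SEIR sketch in which individuals become infectious at the same moment they become symptomatic (class I); the incubation and infectious periods are loosely SARS-like, but beta and all other values are illustrative assumptions, not fitted parameters.

```python
def seir(beta=0.3, sigma=1 / 4.5, gamma=0.1, N=1e6, days=300, dt=0.1):
    """Forward-Euler SEIR in which the symptomatic class I is the only
    infectious class, i.e. symptoms and infectiousness fully overlap.
    sigma = 1/(incubation period), gamma = 1/(infectious period),
    so R = beta/gamma = 3 with these illustrative values."""
    S, E, I, R = N - 1.0, 0.0, 1.0, 0.0
    peak_day, peak_I = 0.0, 0.0
    for step in range(int(days / dt)):
        new_E = beta * S * I / N * dt   # exposures (incubating, not yet infectious)
        new_I = sigma * E * dt          # incubation ends: symptomatic and infectious
        new_R = gamma * I * dt          # recovery/removal
        S, E, I, R = S - new_E, E + new_E - new_I, I + new_I - new_R, R + new_R
        if I > peak_I:
            peak_day, peak_I = step * dt, I
    return peak_day, peak_I

day, size = seir()
print(f"epidemic peaks near day {day:.0f} with ~{size:,.0f} simultaneously infectious")
```

Because class I is, by construction, both visibly ill and the sole source of transmission, any measure that removes symptomatic individuals acts on the entire infectious population, which is why this timing pattern is the easiest to control.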

In contrast, diseases such as HIV/AIDS have completely different infectious and symptomatic periods. The first signs of AIDS do not appear until the infecting pathogen has significantly damaged the host, yet the infected individual is contagious throughout the asymptomatic phase, and peak infectivity occurs before the onset of symptoms (Fraser et al. 2004). Modeling diseases with disconnected infectious and symptomatic periods requires splitting the “infectious” class into “asymptomatic infectious” and “symptomatic infectious” classes to more accurately reflect the clinical characteristics of the disease.

Though SARS and HIV/AIDS have significantly different timing patterns, in both cases the relationship between peak infectivity and the symptomatic period is clear. However, some diseases exhibit partially overlapping contagious and symptomatic periods that make their outbreaks more difficult to stop. Identifying the precise period during which infected individuals are contagious is difficult because the values are affected by numerous factors such as host susceptibility, mechanism of infection, and immune response (Baron 1996). Individual variation in incubation periods further complicates the problem. In dealing with diseases that exhibit partially overlapping periods, such as pandemic influenza, it is best to rely on conservative approaches that treat both exposed and likely infected individuals as targets of containment.

Note that it is possible to harbor an infection yet not show any signs or symptoms of the disease. Called a “subclinical infection”, this asymptomatic state may be a result of the pathogen infection strategy and the host’s ability to tolerate an infection instead of purging it (Baron 1996). Asymptomatic cases that are infectious can help spread the contagion despite strict control measures by being misclassified as uninfected individuals. Asymptomatic cases are usually discovered by chance or by reviewing epidemiological data after an epidemic (Baron 1996). Modeling asymptomatic cases requires adding an “asymptomatic infectious” class that is capable of exposing and transmitting the disease. Containing the spread of an infectious disease suspected to have a high proportion of asymptomatic infected individuals is difficult but procedures such as contact tracing may reveal some of these asymptomatic carriers and quarantining of exposed and high-risk individuals can minimize their impact.
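The effect of an asymptomatic infectious class on control can be sketched by splitting the infectious compartment as the text describes: a fraction p_asym of incubating cases become infectious without symptoms, and isolation (rate iso) removes only symptomatic cases. All rates below are illustrative assumptions.

```python
def final_attack_rate(p_asym, beta=0.6, sigma=0.2, gamma=0.1,
                      iso=0.9, N=1e6, days=600, dt=0.1):
    """SEIR variant with an asymptomatic infectious class A.
    Isolation acts only on the symptomatic class I, so any
    transmission routed through A escapes the control measure.
    Returns the fraction of the population ever infected."""
    S, E, A, I, R = N - 1.0, 0.0, 0.0, 1.0, 0.0
    for _ in range(int(days / dt)):
        new_E = beta * S * (A + I) / N * dt   # both classes transmit
        out_E = sigma * E * dt                # incubation ends
        S -= new_E
        E += new_E - out_E
        A += p_asym * out_E - gamma * A * dt          # never isolated
        I += (1 - p_asym) * out_E - (gamma + iso) * I * dt
        R += gamma * (A + I) * dt + iso * I * dt
    return R / N

for p_asym in (0.0, 0.2, 0.5):
    print(f"asymptomatic fraction {p_asym:.1f}: "
          f"final attack rate ~ {final_attack_rate(p_asym):.0%}")
```

With these numbers, isolation alone extinguishes the epidemic when every case is symptomatic, but a modest asymptomatic fraction pushes the effective reproduction number back above 1 and a large epidemic returns, which is why contact tracing and quarantine of exposed individuals matter for such diseases.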

3.2.2 Isolation and Quarantine

Most emerging infections have no available vaccine or treatment. Thus the only way to control the spread of these diseases is to prevent exposure and further transmission. Isolation and quarantine are two control measures that block transmission by separating individuals who have, or may have, the contagious disease from the rest of the population. “Isolation” describes separating sick (symptomatic) individuals from people who are not sick, while “quarantine” pertains to the practice of separating and restricting the movement of asymptomatic individuals who may have been exposed to the disease to see whether they become sick. These control measures aim to progressively reduce the number of new secondary infections until the disease is eradicated from the population. Formally, we can measure the effect of isolation and quarantine by surveying new infected cases and deriving the reproduction number R of the infectious disease at each stage of the outbreak. Without any intervention, R is expected to eventually decrease as the number of susceptible individuals declines in a finite population without migration. However, by the time R decreases below 1, a large proportion of the population has already been infected. By “removing” potentially infected individuals from the population, isolation and quarantine can drive R below 1 much sooner by reducing the incidence of the disease, leading to fewer new infected cases capable of transmitting the infection.
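A toy generation-by-generation calculation, ignoring susceptible depletion and all stochastic effects, shows why pushing R below 1 early matters so much; the values R = 3 before control and R = 0.7 after are assumptions chosen for illustration only.

```python
def total_cases(R0=3.0, R_controlled=0.7, control_gen=None, gens=12):
    """Discrete generations of infection: each generation produces R
    times the previous generation's cases; from `control_gen` onward,
    isolation and quarantine are assumed to cut R to R_controlled."""
    total, new = 1.0, 1.0
    for g in range(1, gens + 1):
        R = R0 if (control_gen is None or g < control_gen) else R_controlled
        new *= R
        total += new
    return total

print(f"no control:        {total_cases():>10,.0f} cases")
print(f"control at gen 4:  {total_cases(control_gen=4):>10,.0f} cases")
print(f"control at gen 8:  {total_cases(control_gen=8):>10,.0f} cases")
```

Each generation of delay multiplies the eventual case count roughly by R, so the payoff from rapid detection and response, as with Urbani’s early warning, is exponential rather than incremental.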

Isolating symptomatic individuals prevents new cases by separating those spreading the pathogen from the host population. Given a clearly defined set of symptoms to diagnose the disease, this strategy is intuitive and straightforward to implement from a public health point of view. A precise case definition also reduces misdiagnoses and prevents unnecessary isolation of non-target cases. However, many diseases share symptoms and may occur in combination with other infections, so case definitions are not always precise. In the SARS epidemic, infected individuals showing atypical symptoms were a major source of transmission, partly because co-infection may have elevated transmission rates (Kamps and Hoffmann 2003). Modern biomedical research may serve to quickly identify new pathogens, and providing diagnostic tests may be the most important function of initial research (vaccines and treatments generally require months or years to develop and may not be available in time for new diseases).

Isolating symptomatic individuals is most effective if peak infectivity occurs after the first symptoms of the disease appear and transmission only occurs in symptomatic cases (Fraser et al. 2004). While diseases like SARS have shown such properties, other infections, such as influenza, appear to be transmissible even before overt symptoms appear. When peak infectivity occurs before the onset of symptoms, isolating symptomatic individuals may have little impact on dampening the spread of the infection (Fraser et al. 2004). Even for infectious diseases that transmit only after symptoms emerge, infected individuals may not immediately practice self-isolation or report to a healthcare facility. During the lag time between diagnosis and isolation, the pathogen can still spread to susceptible hosts, undermining isolation as a way to control the infection.

On the other hand, quarantining individuals who have been exposed to the disease addresses the shortcomings of isolation as a control measure. Identifying exposure depends on how the pathogen spreads from one host to another. If the pathogen transmits via airborne droplets, then people present in the same room as an infected individual are considered “exposed”. However, if the pathogen spreads only through sexual contact, then only individuals who have had sexual relations with the infected case are considered exposed. When the transmission mechanism is unknown, the assumptions that lead to the most conservative outcome, such as airborne transmission or transmission via physical contact, may be used instead. Because the criteria used to select individuals are independent of disease status, this strategy sacrifices specificity, but it works regardless of the timing of infectivity and does not suffer from the lag-time problem. Such a conservative strategy is well suited for emerging infections, especially when the mechanisms of transmission and pathogenesis have yet to be revealed.

In a perfect quarantine, all exposed individuals undergo quarantine regardless of whether they develop the disease, and during the quarantine period exposed individuals do not transmit the disease. However, tracing all contacts is often problematic, especially when an infected individual has traveled to numerous locations or when exposure occurred in public spaces and mass transit. Compared with isolation, quarantine sometimes faces more resistance from expected participants, especially those who have been exposed but appear to be in healthy condition. During the SARS epidemic, mass quarantines were implemented in many countries. Over 130,000 potentially exposed individuals were quarantined in Taiwan, but in retrospect, the action may have spread panic among uninfected individuals and may not have been an effective strategy (University of Louisville School of Medicine 2003). In reality, quarantines are never perfect. Compliance with the procedures is often problematic: quarantined individuals do not reduce their geographical movement, or they abide by the procedure only for a short period. Formal quarantines have good compliance rates but are costly and difficult to manage for large numbers of cases. Therefore the majority of quarantines are voluntary or less closely monitored than formal quarantines, but these suffer from reduced compliance and are less effective overall. Knowledge about potential superspreaders, used to identify candidates for isolation, can greatly enhance the efficacy of quarantines while requiring far fewer isolations (Diamond 2003). Although such knowledge may be rare at the beginning of an epidemic, rapid epidemiological analyses may play a critical role in reducing the costs of epidemic control.

3.3 Evolution of Virulence

A critical aspect of human pathogens is their virulence, or the extent of damage they inflict on the host. Highly virulent infectious diseases such as HIV, plague, or smallpox can be great threats to human society, and the number of pathogens reported to have evolved increased virulence and/or drug resistance is alarming (Altizer et al. 2003; Holden et al. 2009). Understanding the factors that affect the evolution of virulence in human society is therefore an important issue. If these factors can be controlled through urban design, human society can be made more resilient against serious disease outbreaks.

Several classic theoretical studies on the evolution of virulence concluded that reduced virulence is generally adaptive and should evolve among pathogens. Low virulence allows infected hosts to survive, giving pathogens more chances to spread to other hosts. Highly virulent pathogens can propagate within an infected host, but risk killing that host and limiting their spread to others. The trade-off between reproduction within a host and transmission among hosts is a well-studied explanation for the evolution of reduced virulence (Anderson and May 1982; Alizon et al. 2009). However, the balance (or equilibrium) of this trade-off can differ depending on the biological characteristics of the pathogen. Ewald (1993) discussed how the transmission mechanisms of pathogens can alter the predicted trajectory of virulence evolution. Highly virulent diseases tend to immobilize hosts in the early stages of infection. Therefore, if pathogens are mainly transmitted by contacts between hosts, higher virulence greatly decreases the chances of new transmission. However, if pathogens can survive outside the host and be transmitted by air, water, or vectors in which they are not virulent, host immobility has less effect on the chances of new transmission. Ewald (1993) noted that such pathogens, including those causing smallpox, tuberculosis, and diphtheria, are often more virulent than pathogens that depend more directly on hosts for transmission. Other factors can shift the balance of the trade-off and allow the evolution of high virulence (Galvani 2003). For example, if multiple pathogen strains infect a host simultaneously and compete within it, a high reproduction rate within the host (leading to high virulence) may be favored. In sexually transmitted diseases, frequent exchange of sexual partners makes transmission between hosts easier and can, as a result, favor high virulence. This may be the case for HIV in human society (Lipsitch and Nowak 1995).
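The trade-off argument can be made quantitative in the standard form of the theory (in the spirit of Anderson and May 1982; the specific square-root transmission function below is our assumption): transmission rises with virulence v but with diminishing returns, while higher v shortens the infectious period, so the pathogen’s R0 peaks at an intermediate virulence.

```python
import numpy as np

# R0(v) = beta(v) / (v + mu + gamma): virulence v (disease-induced
# death rate) shortens the infectious period but raises transmission
# with diminishing returns, beta(v) = c * sqrt(v).  Illustrative values.
c, mu, gamma = 1.0, 0.02, 0.1       # transmission scale, host death, recovery
v = np.linspace(1e-4, 2.0, 200_000)
R0 = c * np.sqrt(v) / (v + mu + gamma)
v_star = v[np.argmax(R0)]

# For beta ~ v**b the optimum is v* = b*(mu+gamma)/(1-b); b = 1/2 here,
# so v* = mu + gamma.
print(f"numerical optimum v* = {v_star:.3f}  (analytic: {mu + gamma:.3f}),"
      f" peak R0 = {R0.max():.2f}")
```

Factors listed in the text, such as within-host competition or transmission that does not require a mobile host, effectively reshape the transmission function or the loss term, shifting this optimum toward higher virulence.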

Host population structure also affects the transmission of pathogens and therefore has a large impact on the evolution of virulence. Because urban planning and design can create or alter population structure through their use of the environment, in the following paragraphs we introduce two studies focused on the effect of host population structure on the evolution of virulence. These studies are based on relatively simple models that may yield general insights. Boots and Sasaki (1999) incorporated a grid-like spatial structure of “sites” at which individuals can exist. Each site can have one of three states: empty, occupied by a susceptible individual, or occupied by an infected individual. Connections between individuals were divided into two types: those between neighbors and those between randomly chosen individuals. Randomly chosen individuals can be at distant sites, in which case pathogens can be transferred to distant locations. They found that higher pathogen virulence is favored as contact between hosts living in distant places becomes more common. In this model, a site becomes empty after the death of its occupant. Therefore, higher virulence is more likely to create a situation in which pathogens kill all susceptible hosts around them and can no longer spread. However, long-distance transfer allows pathogens to reach new locations where they are surrounded by susceptible hosts. Long-distance transportation in human society allows contact between distant individuals and may be an important factor that facilitates the spread of outbreaks and favors pathogen virulence.
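A toy lattice simulation, loosely inspired by the Boots and Sasaki (1999) setup but much simpler (no evolution, a single fixed high-virulence strain, and our own illustrative rates), illustrates the mechanism: a fast-killing pathogen burns out in its local neighborhood but thrives when some contacts are long-range.

```python
import numpy as np

rng = np.random.default_rng(1)
DIRS = ((0, 1), (0, -1), (1, 0), (-1, 0))

def fraction_ever_infected(p_long, size=50, beta=0.8, death=0.5, steps=300):
    """Grid of sites: 0 = susceptible, 1 = infected, 2 = empty (dead).
    Each step an infected site transmits with prob. beta, to a random
    site anywhere with prob. p_long and to a nearest neighbor otherwise,
    then dies with prob. `death` (a highly virulent pathogen).
    Returns the fraction of sites ever infected."""
    grid = np.zeros((size, size), dtype=np.int8)
    grid[size // 2, size // 2] = 1
    ever = 1
    for _ in range(steps):
        xs, ys = np.where(grid == 1)
        if xs.size == 0:
            break                           # pathogen extinct
        for x, y in zip(xs, ys):
            if rng.random() < beta:
                if rng.random() < p_long:   # long-distance contact
                    tx, ty = rng.integers(size), rng.integers(size)
                else:                       # neighborhood contact
                    dx, dy = DIRS[rng.integers(4)]
                    tx, ty = (x + dx) % size, (y + dy) % size
                if grid[tx, ty] == 0:
                    grid[tx, ty] = 1
                    ever += 1
            if rng.random() < death:        # virulence: host dies, site empties
                grid[x, y] = 2
    return ever / size ** 2

for p_long in (0.0, 0.1, 0.5):
    runs = [fraction_ever_infected(p_long) for _ in range(10)]
    print(f"long-range contact fraction {p_long:.1f}: "
          f"mean fraction ever infected = {np.mean(runs):.2f}")
```

Purely local transmission leaves the virulent strain trapped among the dead sites it creates, while even a modest share of long-range contacts keeps delivering it to fresh susceptible neighborhoods, mirroring the role the text assigns to long-distance transportation.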

Boots and Sasaki (1999) did not consider host immunity in their model. Immune (infection-resistant) hosts can block pathogen spread and may have a large impact on the evolution of virulence. This question was theoretically addressed by the same authors. Boots et al. (2004) incorporated an immune state after recovery, assuming a negative correlation between recovery rate and virulence, and found that evolutionary trajectories could lead to low, or even extremely high, virulence depending on host population density. In host populations with high density, pathogens can easily find susceptible hosts; therefore, low virulence, which increases the opportunity to infect new hosts, evolves. On the other hand, in host populations with low density, immune hosts around a newly infected host efficiently block pathogen spread. In this case, highly lethal pathogens, which kill infected hosts and open up empty sites, can spread more efficiently than low-virulence pathogens, which leave behind immune hosts. Even after killing some hosts, pathogens still have a chance to spread by infecting new susceptible hosts that immigrate into the open spaces. In Boots and Sasaki (1999), infected hosts are assumed to become susceptible again immediately after recovery, and therefore lower-virulence pathogens spread more efficiently. In Boots et al. (2004), by contrast, immune hosts block pathogen spread, creating scenarios in which highly lethal pathogens evolve.

The results of Boots and Sasaki (1999) and Boots et al. (2004) reveal scenarios in which pathogens can evolve from low toward high virulence depending on the structure of host populations. A key point is that outcomes are sensitive to the assumed population structure, transmission mechanisms, and host immunity. Because of their short generation times and high mutation rates, pathogens such as RNA viruses may evolve rapidly, even over the course of a single outbreak.

Influenza virus, norovirus, and dengue virus are well-known examples of RNA viruses that infect humans. Because these viruses cause epidemics every year, controlling their impact is a very important aspect of urban resilience. As mentioned above, the models of Boots and Sasaki (1999) and Boots et al. (2004) may be too simple to apply directly to particular diseases. Theoretical studies under more realistic conditions, based on structures that closely resemble actual human society and on the biological characteristics of particular pathogens, will be valuable for preventing and controlling outbreaks of highly virulent diseases. Important points to consider include sensitivity to parameters and assumptions concerning both host populations and pathogens. In addition, the definition of a “connection” differs depending on the transmission mechanism of the pathogen. The concept of a “network” must be defined from the pathogen’s point of view, and different networks may need to be considered for different diseases in the same human population.

3.4 Emergence of New Epidemics

3.4.1 Source of New Human Pathogens

Many of the major human infectious diseases are zoonotic infections that have crossed over from animals into humans (Wolfe et al. 2007). Bubonic plague (Schmid et al. 2015), influenza (Palese 2004), HIV (Gao et al. 1992), Ebola (Marí Saéz et al. 2015), SARS (Lau et al. 2005; Li et al. 2005b), and MERS (Memish et al. 2014; Wang et al. 2014) have all been shown to have originated in animals before infecting humans. Wolfe et al. (2007) surveyed 25 major infectious diseases, ranked by highest mortality and/or morbidity, to identify patterns in their animal origins and geographical spread. All the diseases they surveyed appear to have originated in the Old World (Africa, Asia, Europe), and a remarkable proportion of the causative pathogens arose from mammals, with the remainder attributed to birds. Interestingly, the purported geographical origin of a disease was correlated with the type of animal the pathogen originally infected. For example, many diseases that trace back to tropical regions came from wild non-human primates, whereas diseases attributed to temperate regions often emerged from domestic animals. Although the exact reason for this pattern is unknown, Wolfe et al. (2007) suggested that, because livestock and pets were domesticated in the Old World, ancestral pathogens had more opportunity to infect humans compared with more recently domesticated New World animals. For the disparity between Old World and New World monkeys, they propose that the closer genetic relatedness between humans and Old World monkeys may have aided cross-species transmission. These results stress the importance of considering both environmental and biological factors as key determinants of the cross-species transmission of infectious diseases.

The recent expansion of human populations through urbanization exposes us to novel pathogens from which human society was previously isolated. The risk of zoonotic infections may be increasing, and it is notable that many novel pathogens appear to be highly virulent in humans (Read 1994; Schrag and Wiener 1995). Over their long evolutionary history, pathogens and their original hosts have been recurrently co-evolving: hosts evolve resistance against the pathogens, and pathogens evolve to evade host defenses (Little 2002; Woolhouse et al. 2002). This means that hosts exposed to a novel pathogen quite possibly lack immune resistance against it and suffer high virulence (Longdon et al. 2015). There are also cases in which infection by a novel pathogen provokes an inappropriate immune response that increases virulence (Graham and Baric 2010). As introduced above, such highly virulent pathogens can spread in a host population depending on host spatial structure. However, it is important to note that not all novel pathogens are highly virulent in humans. Highly virulent pathogens are more likely to be detected and studied, so the pattern may result from ascertainment bias (Alizon et al. 2009; Longdon et al. 2015). In any case, careful surveillance of both human and animal populations in regions of high human-animal contact may be an important component of defense against novel diseases (Woolhouse et al. 2012).

Finding the original animal host of a new human pathogen requires scientific rigor, but also guesswork and luck. The search for the animal reservoir of the SARS pathogen first identified the Himalayan palm civet (Paguma larvata) after SARS-like coronaviruses (SL-CoVs) were isolated from civets in live-animal markets in Guangdong, China (Guan et al. 2003). However, Tu et al. (2004) showed that while civets in live-animal markets were infected with SL-CoVs, civets on farms did not possess antibodies against the virus, indicating that they had never been exposed to the pathogen. Moreover, palm civets infected with SARS-CoV showed signs of illness, contrary to the expectation that animal reservoirs should be clinically asymptomatic (Calisher et al. 2006). This observation, together with the fact that other animals in the same live-animal markets were also infected by the virus (Guan et al. 2003), indicated that the palm civets were infected in the markets rather than being the ultimate source of the pathogen. Surveillance of wild animals in the region later led to the serendipitous discovery that Chinese horseshoe bats (Rhinolophus sinicus) are the original animal host of the coronavirus that became SARS-CoV (Lau et al. 2005; Li et al. 2005b). The focus on bats may have been inspired by outbreaks of Nipah and Hendra virus a decade before, which were also traced back to these mammals (Normile 2013). In addition, Li et al. (2005b) stated that the use of bat products in food and traditional medicine in southern China led them to investigate bats as a potential reservoir. Interestingly, bats appear to harbor many human pathogens and have been implicated as the animal reservoir of Nipah virus, Hendra virus, Ebola virus, and SARS-CoV. Even MERS-CoV, initially transmitted from camels, has been traced back to bats through phylogenetic analysis and biochemical studies (Wang et al. 2014; Yang et al. 2014). While SARS-CoV infection in humans primarily affected the respiratory system, the high concentrations of coronavirus observed in bat feces and the recovery of virus from the small and large intestines indicate that replication in bats occurs primarily in the gastrointestinal tract (Drexler et al. 2014). Lau et al. (2005) speculated that the use of bats in traditional medicine, especially bat feces, may have played a crucial role in the cross-species transmission of the virus. Bat meat is also considered a delicacy, and many Chinese believe it possesses therapeutic properties, which has led to trade in bats in live-animal markets such as those in Guangdong, China.

3.4.2 Environmental Factors

Exposure between the pathogen reservoir and the new potential host species is a key factor dictating the probability of successful cross-species transmission. For example, HIV-1 and HIV-2 appear, on the basis of phylogenetic analysis, to have been transferred to humans multiple times since 1920, yet significant spread of the infection began only after 1970 (Heeney et al. 2006). One explanation is that limited interactions between humans and primates created a barrier to transfer of the virus, and that insufficient human-to-human contact among infected individuals delayed the rise of the epidemic (Parrish et al. 2008). To describe this phenomenon, let us model the underlying host contact structure as a network of nodes (individuals) and connections (exposures). Assuming a heterogeneously connected network, such as human social and contact networks (Eubank et al. 2004), the probability that a new infection goes extinct by chance is very high, both because the pathogen may be poorly adapted to transmit in the new host (Parrish et al. 2008; Daszak et al. 2000; Dobson and Foufopoulos 2001) and because cross-species transmission events tend to occur in sparsely connected rural areas (Tibayrenc 2011). The limited connections inhibit emergence of the disease, and only the few infections that avoid stochastic extinction proceed to produce an epidemic in the host population (Lloyd-Smith et al. 2005; Eubank et al. 2004). This may explain why spillover events of animal infections, such as H5N1 avian influenza, fail to take hold in the human population despite the hundreds of human cases and deaths that have been reported (Parrish et al. 2008). While the degree distribution of such networks is skewed towards fewer connections, a cross-species transmission event may still occur in a highly connected portion of the network. Such an outcome makes it far more likely that the infection will take hold and produce an epidemic, because highly connected hubs can spread the disease to a disproportionate number of hosts (Rock et al. 2014). Lloyd-Smith et al. (2005) expanded this concept to show that any type of individual variation, for example in infectiousness, produces the same effect: for a given mean infectiousness, higher individual variance increases the probability that an invading disease goes extinct. When the host population has a highly heterogeneous contact network, emergence of disease may be rare, but infections that survive stochastic extinction produce “explosive” epidemics, similar to the case of SARS in 2002. These findings show that host population structure and demography significantly affect the probability of cross-species transmission as well as the subsequent epidemic that may follow.
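The effect of individual variation described by Lloyd-Smith et al. (2005) can be illustrated with a simple branching-process simulation, sketched below. The negative binomial offspring model is standard in that literature, but the specific values of R0, the dispersion parameter k, and the simulation sizes here are arbitrary illustrative choices. Smaller k corresponds to greater individual variation in infectiousness.

```python
# A minimal simulation sketch of the Lloyd-Smith et al. (2005) argument:
# with mean secondary cases fixed at R0, greater individual variation in
# infectiousness (smaller dispersion k) makes stochastic extinction of a
# newly introduced infection more likely. All parameter values are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

R0 = 1.5          # mean secondary cases per case (illustrative)
TRIALS = 10_000   # simulated single-case introductions per value of k
MAX_GEN = 50      # generations simulated per introduction
CAP = 10_000      # chains this large are treated as established epidemics

def extinction_probability(k: float) -> float:
    """Fraction of introductions that die out when each case produces a
    negative binomial number of secondary cases with mean R0, dispersion k."""
    p = k / (k + R0)          # numpy's (n, p): mean = n * (1 - p) / p = R0
    extinct = 0
    for _ in range(TRIALS):
        cases = 1
        for _ in range(MAX_GEN):
            # The sum of `cases` iid NegBin(k, p) draws is NegBin(k * cases, p).
            cases = rng.negative_binomial(k * cases, p)
            if cases == 0:
                extinct += 1
                break
            if cases > CAP:
                break
    return extinct / TRIALS

for k in (0.1, 0.5, 1.0, 10.0):  # small k = highly heterogeneous infectiousness
    print(f"k = {k:5.1f}   P(extinction) ~ {extinction_probability(k):.2f}")
```

Under these assumptions, the extinction probability rises sharply as k falls even though R0 is unchanged, matching the “rare but explosive” emergence pattern described above.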

3.4.3 Biological Factors

Host factors also play a significant role in determining the success of new infections in novel hosts, especially for viruses. To infect a host, a virus must be able to interact with the host’s cellular receptors to gain entry into cells and hijack the cell’s machinery to replicate itself. At the same time, the pathogen must survive the host’s defense mechanisms. The initial interaction between the virus and host receptors is a critical step that determines host specificity and host range. For example, in SARS as well as in other coronaviruses, the viral structure responsible for entry is the spike glycoprotein, which also appears to be the key determinant of host specificity (Graham and Baric 2010). In humans, the receptor-binding domain of the spike glycoprotein interacts with a cell surface metallopeptidase, human angiotensin-converting enzyme (ACE) 2, to gain entry and infect lung epithelial cells (Li et al. 2003). However, Ren et al. (2008) showed that SL-CoVs found in bats do not interact with palm civet or human ACE2 receptors, implying that changes must have occurred to create this new interaction. In fact, there appears to be a sizeable difference between the coronaviruses isolated from the putative bat reservoir and SARS-CoV: SL-CoVs from bats were found to share at most 92 % sequence identity with SARS-CoV (Li et al. 2005b). Later, Ge et al. (2013) were able to isolate and characterize an SL-CoV that utilizes ACE2 for cell entry in bats, palm civets and humans. This finding argues that ACE2 utilization may have evolved prior to any cross-species transmission event.

While gaining the ability to bind a novel receptor appears to be a complicated process, in some instances even a few amino acid changes may confer the ability to recognize a new species. Initial studies comparing SARS-CoV isolates taken at different time points in the pandemic revealed that the spike proteins of viruses from palm civets and early human cases bind human cells less efficiently than those from later in the epidemic (Yang et al. 2005). Further genetic and biophysical studies demonstrated that two amino acid changes had an enormous effect on the binding affinity of the SARS spike protein for human lung epithelial cells (Li et al. 2005a, c; Qu et al. 2005). In most palm civet samples, lysine at position 479 and serine at position 487 of the spike protein were observed, whereas asparagine and threonine were present at these positions in human samples. Li et al. (2005a) found that replacing lysine with asparagine removed an electrostatic interference with a histidine residue on the receptor, while replacing serine with threonine provided a methyl group capable of filling a hydrophobic pocket at the interface with the human ACE2 receptor. Although the structural changes appear subtle, these substitutions caused a thousand-fold increase in binding affinity for human ACE2 and led to enhanced human transmission. However, Li et al. (2005a) also found that some civet specimens have asparagine instead of lysine at position 479, yet this did not affect binding to the civet ACE2 receptor. Changes that are neutral in the original host but advantageous in the new host may therefore have played a critical role in facilitating cross-species transmission between palm civets and humans.
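As a purely illustrative sketch, the snippet below shows how such substitutions can be flagged by comparing aligned sequence fragments. The fragments are hypothetical placeholders (runs of alanine), not real civet or human SARS-CoV spike sequences; only the residues placed at positions 479 and 487 mirror the K479N and S487T substitutions discussed above.

```python
# A toy sketch only: flagging amino acid differences between two aligned
# receptor-binding-domain fragments. The sequences are hypothetical
# placeholders, NOT real SARS-CoV spike sequences; only positions 479 and
# 487 mirror the K479N and S487T substitutions discussed in the text.
OFFSET = 470  # assumed residue number of the first aligned position

civet_fragment = "AAAAAAAAAKAAAAAAASAA"  # K at 479, S at 487
human_fragment = "AAAAAAAAANAAAAAAATAA"  # N at 479, T at 487

for i, (civet_aa, human_aa) in enumerate(zip(civet_fragment, human_fragment)):
    if civet_aa != human_aa:
        print(f"position {OFFSET + i}: civet {civet_aa} -> human {human_aa}")
# prints:
#   position 479: civet K -> human N
#   position 487: civet S -> human T
```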

Once a pathogen has evolved to reliably infect the new host’s cells, the innate immune response is the host’s first line of defense against the infection. When a virus successfully infects a cell, cytoplasmic enzymes that detect the production of double-stranded RNA, a hallmark of virus replication, activate the expression and release of interferons from the cell. Interferons act as an early-warning signal to nearby cells, activating their intracellular antiviral responses to combat viral infection and replication (Roy and Mocarski 2007). Because of the importance of this response, many viruses have evolved features to subvert interferon signaling. For example, the influenza virus prevents the infected cell from detecting viral replication by using its NS1 protein to sequester double-stranded viral RNA (Lu et al. 1995). Another strategy is to prevent interferons from activating antiviral mechanisms. Nipah virus produces two proteins that prevent STAT1 from translocating into the nucleus, as well as another protein that sequesters STAT1 already present in the nucleus, obstructing the activation of interferon-stimulated genes (Shaw et al. 2004). In the case of SARS, Kopecky-Bromberg et al. (2007) found that SARS-CoV nucleocapsid and accessory proteins inhibit both the expression of interferon and its associated transcription factors, and also inhibit the cellular response to interferon by subverting JAK/STAT activation of intracellular antiviral mechanisms. While infection with SARS-CoV alone did not induce production of interferons, coinfection with another virus did: SARS-CoV avoids triggering interferon expression without shutting down the whole pathway, since interferon signaling continues to work when other stimuli are present (Frieman et al. 2008). By antagonizing both the induction of and response to interferons, the pathogen blocks the activation of more than 300 interferon-stimulated genes, preventing the cell from entering an “antiviral state” (de Lang et al. 2009). In the antiviral state, inhibitors of cell division are activated, protein-digesting enzymes initiate programmed cell death, and proteins that present viral particles to the adaptive immune system are upregulated. Blocking interferon signaling thus causes a general decrease in both innate and adaptive immune responses, allowing SARS-CoV to infect cells unimpeded and potentially cause more serious disease.

The rate at which a pathogen evolves is another biological factor that may determine the risk of cross-species transmission. Most recent emerging infections have been caused by RNA viruses such as HIV (Gao et al. 1992), Ebola virus (Gire et al. 2014), Dengue virus (Gubler 1998), SARS-CoV (Lee et al. 2003) and MERS-CoV (de Groot et al. 2013). RNA viruses have an extremely high mutation rate because their RNA polymerase, the enzyme that copies their genome, lacks proofreading activity, which leads to error-prone replication. Mutation rates for RNA viruses range from 10⁻⁶ to 10⁻⁴ substitutions per nucleotide per cell infection, two orders of magnitude higher, on average, than for DNA viruses (Sanjuán et al. 2010). At those rates, roughly 1 in every 100,000 nucleotides changes each time an RNA virus replicates itself. This may not seem high, but hundreds of millions of viral particles may be produced during a single infection (Haase 1994), giving the virus numerous opportunities to explore potentially advantageous mutations. Although high mutation rates help RNA viruses rapidly acquire advantageous changes and alter phenotype, deleterious mutations are also produced at an elevated rate. Lauring et al. (2012) demonstrated that viral populations mitigate the effects of deleterious mutations by rapidly outcompeting and purging low-fitness variants. The ability of defective viruses to incorporate functional components made by other viruses within the same cell also appears to mitigate the negative effects of high mutation rates (Makino et al. 1988). Indeed, studies have shown that raising the mutation rate with mutagens creates large numbers of dysfunctional mutants and rapidly drives the viral population to extinction (Pathak and Temin 1992; Loeb et al. 1999; Domingo 2000). Interestingly, in the case of SARS, the coronavirus that caused the disease did not have a particularly high mutation rate relative to other RNA viruses. Coronaviruses have the largest genomes among RNA viruses (approximately 30,000 nucleotides), and genome size and mutation rate appear to be negatively correlated (Sanjuán et al. 2010). One reason behind the relative stability of coronavirus genomes could be the presence of proofreading enzymes that guard against mutagenesis; in the case of SARS-CoV, Smith et al. (2013) showed that the exoribonuclease domain in non-structural protein 14 has proofreading activity and protects the viral genome against mutagenesis. Although high mutation rates appear to facilitate the adaptation of RNA viruses to new environments, diseases of animal origin are not always caused by the fastest-evolving RNA viruses.
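The mutation-supply argument can be made concrete with a back-of-the-envelope calculation, sketched below. The genome size and mid-range mutation rate follow the figures above; the number of virions produced per infection is an assumed order-of-magnitude value.

```python
# A back-of-the-envelope sketch of the mutation-supply argument above.
GENOME_NT = 30_000   # coronavirus-sized RNA genome (from the text)
MU = 1e-5            # substitutions per nucleotide per replication (mid-range)
VIRIONS = 3e8        # viral particles produced in one infection (assumed)

per_genome = GENOME_NT * MU     # expected mutations per replicated genome (~0.3)
total = per_genome * VIRIONS    # mutation events sampled across one infection

# Only 3 alternative bases exist at each site, so the number of distinct
# single-point mutants is bounded by 3 * GENOME_NT.
distinct_point_mutants = 3 * GENOME_NT

print(f"~{per_genome:.1f} mutations per genome copy")
print(f"~{total:.1e} mutation events per infection")
print(f"~{total / distinct_point_mutants:.0f}x coverage of all "
      f"{distinct_point_mutants:,} possible point mutations")
```

Under these assumptions, a single infection samples every possible single-nucleotide variant of the genome many times over, which is why even a modest per-replication error rate gives the virus ample opportunity to find advantageous changes.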

4 Summary and Conclusions

The discussion above has described how several aspects of urban life, including the high connectedness of individuals (including connections among distant individuals) and regions of high human/animal contact, are likely to elevate the risk of future epidemics. Because these properties may be intrinsic to urban life and difficult to alter or control, monitoring and preparedness are critical for urban resilience to disease outbreaks.

4.1 Likelihood and Severity of Future Epidemics

Opportunities for the evolution of new or variant human pathogens are difficult to limit and may, in fact, be increasing in modern societies. Each contact between microbe and host can be considered a “trial” for a potential pathogen carrying random mutations that may confer new functions or specificities. Thousands of such trials occur daily in many regions and are likely spawning candidate emerging pathogens with the ability to reproduce within humans and possibly also to transmit from person to person. The trajectory of pathogen evolution depends strongly on the number of contacts (potential transmissions) among individuals in the host population, as well as on chance. The vast majority of potential new pathogens are likely to be lost early in their histories; however, given continuous opportunities, the chance event of pathogen emergence is simply a matter of time. In Guangdong province, several recent outbreaks of bird influenza (H7N9) have led to limits on live poultry markets, but consumer preference for freshly slaughtered poultry and wild animals remains an impediment to regulating the high-risk concentration of multiple species (pig, poultry, dog, cat, rabbit, as well as reptiles, fish and numerous wild game), often in poor health, in close contact with one another and with humans. Regions of recent human expansion, where wild animal populations are in close proximity to high-density human settlements, must also be monitored carefully for new zoonotic diseases.
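The “matter of time” point is simple binomial arithmetic: if each trial independently has probability p of seeding a transmissible human pathogen, the probability of at least one emergence in N trials is 1 − (1 − p)^N, which approaches 1 as trials accumulate. The sketch below uses arbitrary illustrative values for p and the trial counts.

```python
# A minimal sketch of the cumulative-probability argument: if each
# microbe-host contact ("trial") independently has probability p of seeding
# a transmissible human pathogen, then P(at least one emergence in N trials)
# = 1 - (1 - p)**N. Both p and the trial counts are arbitrary illustrations.
P_EMERGE = 1e-8  # per-trial emergence probability (assumed)

for trials_per_day in (1e4, 1e6):
    for years in (1, 10, 100):
        n = trials_per_day * 365 * years
        prob = 1 - (1 - P_EMERGE) ** n
        print(f"{trials_per_day:.0e} trials/day for {years:3d} years: "
              f"P(emergence) = {prob:.3f}")
```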

New strains of swine and bird influenza are currently monitored as candidates for outbreaks but pathogen emergence is unpredictable and may come from completely unexpected sources. Regardless of the source of new infectious agents, a major concern is that future pathogens may have properties that will make control much more difficult than SARS. Shorter incubation times and pre-symptomatic transmission strongly limit the efficacy of isolation and quarantine and may allow rapid disease spread.

4.2 Rapid Response

In this chapter, we have focused on biological factors that are central to disease emergence and control. Policy prescriptions have been discussed extensively elsewhere (University of Louisville School of Medicine 2003; Beaglehole et al. 2003), and we highlight selected topics below. One of the important lessons from the SARS crisis was the need for a rapid and organized response, even for a relatively controllable disease. Recognition of new epidemics through surveillance, together with global warnings and travel advisories, is an obvious critical factor, but the necessary infrastructure has been difficult to implement, especially in developing regions.

Given the likely lag time between the start of an outbreak and pathogen isolation and the development of diagnostics, well-trained physicians and epidemiologists at the frontlines of the epidemic play a critical role in the initial response. For an infected individual, the numbers of contacts and of possible and actual transmissions increase rapidly with time, so diagnosis and contact tracing are time-critical events. Finally, communicating with and educating the public, and controlling panic, are major concerns, especially in the context of false reports and rumors. Establishing trusted sources of information prior to emergencies should be a major objective for city and regional governments. Poor coordination among government agencies, or between medical and government agencies, was a major obstacle in the response to SARS in most affected regions (University of Louisville School of Medicine 2003).

4.3 Health Care

High transmission in medical care settings was one of the prominent features of both the SARS and MERS outbreaks. Because infected individuals with weakened immune systems or co-infections may be the most difficult to diagnose and may show high infectiousness, proper training in pathogen containment is a critical element of epidemic preparation. Similar basic techniques (proper use of gloves, gowns, masks, and goggles) were successful for SARS and Ebola, suggesting that many practices will be of general value, but specifics for particular transmission mechanisms (e.g. airborne versus vector transmission) are also critical. Intervals between outbreaks may be long, so regular confirmation of preparedness is important. Low margins in health care are strongly linked to overcrowding, and government and private-organization incentives (e.g., increased funding for hospitals that rate highly on infection control training and preparedness) can greatly enhance hospital safety (Committee on the Future of Emergency Care in the United States Health System 2007). The importance of patient isolation in limiting disease spread is clear from the recent SARS, Ebola and MERS outbreaks; hospitals must have containment facilities and “surge capacity” to limit superspreading events. Although public health measures were sufficient to eventually control SARS, additional measures including antiviral drugs and rapid vaccine development and production may be necessary for more formidable pathogens. The economic impact of epidemics, roughly 200 billion USD (2 % of regional GDP) for East Asia from SARS and potentially over 800 billion USD for pandemic influenza (The Economist 2005), should help to justify the costs of outbreak preparation.

4.4 Interdisciplinary Research and Planning

Informed policies on sanitation (water purification, sewage treatment) and building regulation and inspection (e.g. airflow control) can play a key role in preventing disease emergence and spread. The SARS example illustrates the need for extensive interdisciplinary efforts, combining expertise from physics (fluid mechanics), biology (especially the mechanisms of disease transmission), and building design, to achieve resilience to future outbreaks.

4.5 Personal Liberties and the Common Good

Isolation and quarantine were critical to controlling SARS in Hong Kong, Singapore, Taiwan, China, Vietnam and Canada. Compliance rates appeared to be high in all regions (University of Louisville School of Medicine 2003), perhaps partly because the “cultural” value placed on solidarity and cohesion was relatively high in these regions. More severe movement restrictions may be necessary for more transmissible and/or virulent pathogens, and it is unclear whether similar measures can be employed successfully in regions where personal liberties are emphasized or government is less trusted. Biological studies can help to determine the necessity of such measures and guide planning for future epidemics, but social and economic issues may be the more critical limiting factors in developing preparedness for disease outbreaks. Understanding the social, psychological, and economic costs of previous and potential disease outbreaks, among both citizens and government officials, will be central to planning for resilient communities. Overcoming the economic and psychological barriers (e.g., normalcy bias) to implementing such plans may require a fundamental transformation of human society.