10.1 Introduction

Virtually all vaccines are prepared on a biological substrate and/or in a biological growth medium. The conditions used to prepare the vaccine are not only ideal for growth of the vaccine organism, but are also capable of propagating adventitious (inadvertent) microbiological contaminants. Such contaminants may include bacteria, fungi, mycobacteria, mollicutes (mycoplasmas, spiroplasmas, acholeplasmas), agents of transmissible spongiform encephalopathies (e.g., bovine spongiform encephalopathy (BSE)), and viruses, including bacteriophage. Of particular difficulty for current detection methods is the vast array of viral organisms, which vary considerably in size, shape, content of lipid membranes (enveloped or nonenveloped), and nucleic acid content (DNA or RNA; single-stranded or double-stranded; contiguous or segmented genome), as well as in their sensitivity to inactivation procedures and their efficiency of removal by purification procedures.

Testing methods for bacteria, fungi, mycobacteria, and mollicutes are fairly standardized. Harmonization or convergence of test methods is increasingly occurring across regulatory regions and pharmacopeia, and these methods will not be discussed further in this chapter, except insofar as some of the newer methods discussed below may be applied to their detection as well as to the detection of viruses. It should be acknowledged that, with regard to testing for mollicutes, newer methods (polymerase chain reaction-based, with or without biological amplification) have been developed, and work remains on harmonization of test methods acceptable in the various regulatory regions.

Unfortunately, at present there is no standardized and validated test method to detect the agents of transmissible spongiform encephalopathies, like BSE, in biological products or the raw materials used in their preparation (e.g., bovine serum). Thus, strategies to control the risk of product contamination entail implementing a number of product design elements, rather than testing for presence or absence. One strategy is to eliminate, to the extent possible, exposure to animal- and human-derived raw materials. This is not always feasible, and often cannot be implemented for legacy products without risk of altering, in unknown ways, a product of established safety and efficacy. Foremost among the possible strategies is controlling the materials to which the product is exposed by appropriate donor screening and geographic sourcing, as well as traceability and documentation, and, in the case of BSE risk, eliminating high-risk specified risk materials from the collection process at the abattoir. Information on this topic may be obtained from http://www.fda.gov/BiologicsBloodVaccines/SafetyAvailability/ucm111476.htm and http://www.emea.europa.eu/docs/en_GB/document_library/Scientific_guideline/2009/09/WC500003700.pdf (EMA 2011). Complementing these strategies, work by staff of the U.S. Food and Drug Administration (FDA) (Piccardo et al. 2011) has shown that many vaccine cell substrates did not propagate highly infectious BSE or variant Creutzfeldt-Jakob disease agents.

The risk bacteriophage may pose to product recipients, if any, has been considered historically (in the 1970s and 1980s). While their presence may interfere with production of bacterial products, or products prepared from bacteria, their risk to humans has largely been dismissed. As such, their inadvertent presence, though not desirable, was codified as acceptable (21 CFR 630.18(a) and 630.60(c), last promulgated in 1996). Generally, the presence of bacteriophage is taken as a biomarker reflecting a previous level of bioburden in a material, which would have either been controlled (for nonsterile, nonparenteral products) or absent due to sterilization (for most raw materials and for sterile and/or parenteral products). As a consequence, testing is not generally performed to detect bacteriophage in vaccines derived from mammalian or avian cell culture.

Thus, the remainder of this chapter will focus on animal-viral testing, and the principles behind assessing viral risk and guiding testing strategies. Traditional or conventional methods, though not entirely harmonized across regulatory regions or among pharmacopeia, will be discussed in regards to the principles whereby detection is achieved. Further, newer methods, as yet not validated, standardized, or widely accepted by regulators, will also be discussed as it is anticipated that their utility will begin to be seen in short order.

Historically, bacterial vaccines were (and still are) generated by fermenting the bacterial vaccine organism; likewise, viral vaccines were and are generated by propagating the viral vaccine organism on a cell substrate, in tissue taken from an animal, or in whole organisms (e.g., embryonated hens’ eggs). Increasingly, however, viral vaccines may be generated in E. coli or in yeast, while bacterial and parasitic vaccines are being prepared in viral vectors grown on cell substrates. Thus, the lines between a “bacterial” product and a “viral” product have blurred. With this, so too have the types of microbial contaminants that may be of concern for a given product type. As a consequence, the focus of this chapter on biosafety from the perspective of animal viruses should not be taken to mean that it is solely applicable to viral vaccines.

10.2 Principles of Detection Methods

Tests for viruses currently in use include those used for clinical diagnostics in practice in the mid-twentieth century (in vivo and in vitro in tissue culture), as well as techniques developed in the latter part of the twentieth century [transmission electron microscopy (TEM), polymerase chain reaction (PCR), reverse transcriptase assays]. The older methods are based on observations of responses to viral infection made in vivo or ex vivo, whereas the newer methods are based on physicochemical properties of viruses. These principles are elaborated below.

10.2.1 Detection by Physicochemical Properties of Viruses

Viruses display a wide variety of physicochemical properties. Viruses come in a variety of sizes (from ~20 to 100s of nm) and shapes (icosahedral, spherical, bullet-shaped, filamentous, oblong; regular or irregular in shape). Some contain long spikes protruding from the capsid. Some viruses contain lipid membranes (enveloped) and some do not. Further, their nucleic acid content takes virtually every imaginable form: dsDNA, ssDNA, dsRNA, ssRNA of the “positive” (in the same sense as mRNA) or “negative” (complementary to and requiring transcription into the sense of mRNA) sense, and genomes that encompass both RNA and DNA. Some viruses have genomes that are contiguous and some are segmented. Retroviruses have two copies of the same genome packaged into a single capsid and thus are effectively diploid, while most viruses are “haploid.” This wide variety makes detection of viruses complicated, and no single traditional method readily detects all types.

10.2.1.1 Detection of Viral Structures

Although viruses come in a variety of sizes and shapes, their structures can be used to identify them. While it takes experience to recognize the difference between normal cellular structures and those of some viruses, experienced transmission electron microscopists can distinguish between them and thus identify the presence of viruses. There are limitations to TEM, however. It is neither a sensitive nor a specific method. The lack of specificity can be a benefit, in that a variety of micro-organisms, including viruses, may be detected by this method, but nonspecific structures may be mistaken for viruses and viruses can be missed. Furthermore, in terms of limit of detection, this method requires a concentration of virus on the order of 10⁵ or 10⁶ particles/mL. Nonetheless, viral structures can be diagnostic if seen on TEM. So, the appearance of a virus’s structure by TEM is one means by which viruses may be detected. TEM applications for virus detection were reviewed by Roingeard (2008).

10.2.1.2 Detection of Viral Proteins

Viral proteins have some unique characteristics that permit their recognition in detection assays. One such feature is the ability of certain viral proteins to bind to and agglutinate red blood cells. Many viruses (e.g., orthomyxoviruses and paramyxoviruses) have a viral protein termed hemagglutinin, with exactly this property. Other features entail the unique biological activity of a viral protein, e.g., the ability of the polymerase of retroviruses to reverse transcribe RNA into DNA. Finally, the ability of antibodies to recognize epitopes on viral proteins may be used in immunofluorescent assays (IFA).

10.2.1.2.1 Hemagglutination and Hemadsorption

Hemagglutination of red blood cells (RBCs) by infectious organisms was a characteristic discovered as early as the nineteenth century. Thus, as viruses began to be manipulated and studied in the twentieth century, this characteristic was used as a means of assessing for viral infection. The ability of some viruses to adsorb to RBCs, causing clumping or agglutination, was noted in early explorations of viruses. This ability was exploited to develop an early clinical diagnostic test for viruses. These tests, either the hemagglutination of RBCs by supernatants, sera, or plasma containing viruses or the ability of infected cells in culture to hemadsorb RBCs (Shelokov et al. 1958), permit a visualization of viruses by cross-linking viruses and RBCs together in a large enough clump to be observed by the naked eye (hemagglutination) or by light microscopy (hemadsorption).

However, not all viruses contain a protein that can hemagglutinate RBCs. As a consequence, this diagnostic parameter is only useful to detect some viruses. Nonetheless, it is a broad general screen requiring little knowledge of the type of virus for which one is looking. Like TEM, it is not specific (bacteria can also hemagglutinate) and, in order to visualize the process, a sufficient quantity of virus, viral proteins, or viral particles on the surface of an infected cell must be present to bring together an adequate number of RBCs to clump or hemadsorb and be recognized above background levels. Another caveat is that different viruses will hemagglutinate certain species’ RBCs but not others, so if the wrong species’ RBCs are used, the hemagglutinating effect may be missed. An early study characterized the log10 ratio of infectious influenza virus per hemagglutinating dose to be ~4–6 (in other words, 10⁴–10⁶ ID50 per hemagglutinating dose; Donald and Isaacs 1954).
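
To illustrate what that ratio implies for assay sensitivity, the following back-of-the-envelope sketch estimates the minimum infectious titer a sample would need to produce a positive hemagglutination readout. The 25 µL well volume is an illustrative assumption, not a method specification, and the ratio range is simply the one quoted above.

```python
# Illustrative only: rough estimate of the infectious titer needed for a
# positive hemagglutination (HA) readout, given a ratio of ~10^4-10^6 ID50
# per hemagglutinating dose (Donald and Isaacs 1954).
WELL_VOLUME_ML = 0.025            # assumed sample volume per HA well (25 uL)
ID50_PER_HA_DOSE = (1e4, 1e6)     # range reported for influenza virus

for ratio in ID50_PER_HA_DOSE:
    # At least one HA dose must be present in the well for visible agglutination,
    # so the sample titer must be at least ratio / well volume.
    min_titer = ratio / WELL_VOLUME_ML
    print(f"{ratio:.0e} ID50 per HA dose -> >= {min_titer:.0e} ID50/mL required")

# Output spans roughly 4e5 to 4e7 ID50/mL, i.e., hemagglutination is a
# comparatively insensitive readout for low-level contamination.
```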

10.2.1.2.2 Viral Enzymes (Reverse Transcriptase)

Retroviruses contain an RNA genome and have an obligatory step in their replicative life cycle in which the viral RNA genome is reverse transcribed into cDNA that integrates into the host cell genome as a provirus. It is from this provirus that viral mRNA transcripts are generated. Thus, retroviruses must package within their viral capsid a sufficient number of molecules of reverse transcriptase to ensure that this step of the life cycle is completed. Assays are available for detecting this enzymatic activity by monitoring for cDNA molecules thus reverse transcribed. The conventional or traditional method entails detection of incorporation of radioactively labeled deoxyribonucleotides into the nascent reverse-transcribed DNA strand, although newer versions of this traditional method utilize nonradioactive labels.

A newer method, developed in the 1990s, is frequently referred to as PERT or product-enhanced RT assay, a term coined by the authors of an early publication on the method (Boni et al. 1996). This method entails the use of PCR to amplify the signal of the reverse-transcribed DNA. In the absence of RT enzyme, the RNA template will not be reverse transcribed to DNA and thus no PCR amplification will occur; in its presence, amplicons are generated. This enhances the sensitivity of the method to detect RT by about six orders of magnitude over the conventional method. However, specificity is lost in that host DNA polymerases and authentic reverse transcriptases from host retroelements, present in all eukaryotic species, can also produce a signal in the PERT assay. Means to reduce this background have been implemented, but it remains a potential area of concern that challenges the conduct and the interpretation of the assay. Also, as with all PCR-based assays, test articles may contain nonspecific inhibitors; overcoming the inhibition requires dilution of the test article, which reduces the sensitivity of the method.
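
As a rough illustration of where that sensitivity gain comes from, the sketch below relates the approximately million-fold improvement to ideal exponential PCR amplification of the cDNA product. Real PERT assays run more cycles with imperfect efficiency, so the figures are illustrative only.

```python
# Illustrative arithmetic only: each ideal PCR cycle doubles the cDNA signal,
# so a ~10^6-fold gain over the conventional isotopic RT assay corresponds to
# roughly 20 effective doublings (2^20 ~ 1.05e6).
cycles = 0
gain = 1
while gain < 1e6:
    gain *= 2        # ideal doubling per cycle
    cycles += 1
print(cycles, gain)   # -> 20 cycles, 1,048,576-fold
```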

Nonetheless, the enzymatic activity of a viral RNA-directed DNA polymerase, distinguished from that of DNA-directed DNA polymerases, is a property used to detect retroviruses from all species.

Potentially, other enzymatic activities specific to viruses might be exploited for viral detection, but at present, such assays are neither in routine use nor in advanced development for that purpose, to our knowledge.

10.2.1.2.3 Binding of Antibodies

Intact virions present epitopes that can be recognized by binding of antibodies from specific hyperimmune sera or by monoclonal antibodies. Infected cells may also present viral proteins on their surface, permitting antibody binding. These antibodies can be fluorescently labeled in order to perform an IFA. Alternatively, supernatant fluids, which may be contaminated with virus, can be assessed by an enzyme-linked immunosorbent assay (ELISA), although we are unaware that this is commonly used in direct adventitious agent testing of biologicals, even though it is commonly used in research settings and in diagnostics.

10.2.1.3 Viral Nucleic Acids

Before the advent of PCR, viral nucleic acids were detected by hybridization methods, such as Southern or Northern blotting or slot/dot blots. However, most currently used routine tests for viral nucleic acids are based on PCR. Frequently, these newer methods are referred to as nucleic acid tests (NAT). These may entail visualization of PCR amplicons by gel or Southern blotting, or may entail use of more quantitative methods such as real-time PCR or Q-PCR.

For DNA viruses, direct PCR with either specific primers, conserved primers, or degenerate primers may be performed. Such tests are usually performed on extracted nucleic acids from cells that may potentially be infected, although they may also be performed on nucleic acids extracted from culture fluids or other aqueous solutions. Also, mRNAs of viruses can be detected, to suggest potential replication of a virus by expression of viral transcripts, though this is not commonly used as an adventitious agent test method. For RNA viruses, RT-PCR, wherein the initial step entails reverse transcription followed by PCR amplification from the generated DNA template, may be performed.

All of these methods rely on a high degree of sequence similarity between the probes or primers and the viral species targeted for detection. This requires considerable knowledge about which viruses one should be testing for, as well as about sequence similarity or divergence among known viruses. Degenerate primers can be used to minimize the knowledge needed to detect a signal, but these are not currently in widespread use in the testing field, in part because of the enhanced specificity of specific primers.
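
For readers unfamiliar with degenerate primers, the short sketch below shows how a single oligonucleotide written with IUPAC ambiguity codes expands into the family of concrete sequences it represents, which is what allows one primer mixture to anneal to divergent viral targets. The 12-mer shown is hypothetical and not drawn from any validated assay.

```python
# A minimal sketch of degenerate primer expansion using IUPAC ambiguity codes.
from itertools import product

IUPAC = {
    "A": "A", "C": "C", "G": "G", "T": "T",
    "R": "AG", "Y": "CT", "S": "CG", "W": "AT",
    "K": "GT", "M": "AC", "B": "CGT", "D": "AGT",
    "H": "ACT", "V": "ACG", "N": "ACGT",
}

def expand_degenerate(primer: str) -> list[str]:
    """Enumerate every concrete sequence a degenerate primer represents."""
    return ["".join(bases) for bases in product(*(IUPAC[b] for b in primer.upper()))]

# Hypothetical 12-mer with two degenerate positions (R = A/G, Y = C/T):
variants = expand_degenerate("GGRTTYGACATG")
print(len(variants))   # 4 concrete primers synthesized as one mixed oligo
print(variants[:2])    # ['GGATTCGACATG', 'GGATTTGACATG']
```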

10.2.2 Response of Cells or Organisms to Virus

Some viruses cause apparent infection in cells in culture and can be detected in vitro by these means. Such viruses are noted by the cytopathic effects (CPE) they cause on a culture monolayer.

Some viruses do not cause apparent infection in culture. These so-called “inapparent viruses” may be detectable by the responses of living organisms to infection. They may result in death of animals or eggs, or morbidity, which is detectable by notable signs that may be monitored.

Similarly, animals may respond to infection by mounting detectable antibody responses and this capability is exploited in the so-called mouse, rat, or hamster antibody production assays (MAP, RAP, HAP). The antibodies generated in the relevant species are detected by IFA or ELISA methods.

10.2.2.1 Tests in Cell Cultures

The ability to successfully propagate cells in culture from explanted tissues revolutionized the propagation and study of viruses in the twentieth century. This platform has proven to be useful for viral detection as well.

As discussed above in Sect. 10.2.1.2 and subsections, some readouts for detecting viral infection in cell culture include IFA and hemagglutination/hemadsorption. In addition, other read-outs are discussed below.

10.2.2.1.1 Cytopathic Effects

Once it was discovered that viruses could be propagated in explants of tissues maintained in vitro, observations were made of the CPE the viral infections caused. CPE occurs as a result of the killing of cells in a zone where viruses may have propagated from cell to cell (plaques) or the fusion (syncytia formation) of the plasma membranes of multiple infected cells. These plaques or zones of dead cells or areas of overly large cell syncytia are readily noted by light microscopy, or even by the naked eye. The spread of any viral infection across the cell sheet may depend on the nature of the virus and the infected cell’s response (for instance, highly cell-associated versus secreted or released by cell lysis). Semisolid overlays, commonly used to restrict virus diffusion in some plaque assays, are not typically used in adventitious agent tests, because maximal opportunity for virus propagation aids detection.

10.2.2.1.2 Transformation

Transformation of cells in culture is associated with changes in cell morphology, loss of contact inhibition (the process whereby cells stop propagating when they contact too many other cells in their vicinity), and sometimes ability to grow in suspension rather than as monolayers attached to a substrate.

Although not routinely used as a viral detection test for vaccines, some viruses have the ability to transform cells in vitro. This ability derives largely from binding of viral proteins to host proteins that control cell cycling, such as p53 or RB, thus disrupting their functioning and causing the cell to lose control and grow indefinitely. Alternatively, this ability derives from being able to complement missing function in specific cell lines prepared with defective sarcomavirus (S+ L−). The result of viral transformation in culture is often seen as a focus of cells piling up without the usual contact inhibition that preserves a uniform monolayer.

Some of these viruses are also oncogenic in vivo. While an in vitro transformation assay has been used as a surrogate marker for tumorigenicity of intact cells, this is not commonly used as an adventitious agent test. However, in vitro transformation and in vivo tumorigenicity do not always correlate, so this in vitro surrogate has largely been abandoned in the field of testing of biologicals as unreliable for that purpose. Nonetheless, the ability of a virus to transform cells in culture, particularly human cells, would be concerning to regulators and may be a biological parameter warranting further research should the phenomenon be observed in the production cell line or an indicator cell line used in adventitious virus testing.

10.2.2.2 Tests in Animals

Before cell culture, viruses were propagated from animal to animal in order to study them in the experimental setting. Clinical observations of the effects viruses have on experimental animals formed the basis for early clinical diagnostics for viruses. These tests remain in use for adventitious agent testing because of historical utility, but public reports from modern testing service providers have called into question their actual utility in the era of current Good Manufacturing Practices and newer testing methodologies. Recent work has suggested that their sensitivity for detection of viruses may not be as good as previously believed (Gombold et al. 2014), despite having been relied upon for years. The benefit of such tests is that they require no prior knowledge about what virus or organism may be present in a test article nor do they require the ability of the virus to adapt to or be able to grow in cell culture.

The MAP, RAP, and HAP assays, described in Sect. 10.2.2 above, may be replaced by specific PCR tests, and these replacement tests have begun to be accepted by regulators on the basis of demonstration of comparable sensitivity of detection.

10.2.2.2.1 Pathological Readouts

Generally, the tests for inapparent viruses are performed in adult mice (postweaning, generally 3–4 weeks of age or older, but within a weight restriction that ensures they are young animals), in suckling (newborn) mice, and in embryonated chicken eggs. In addition, the European Pharmacopeia requires testing viral vaccine seed lots in guinea pigs. This test is recommended in other regulatory regions in certain cases (to detect Mycobacterium sp., lymphocytic choriomeningitis virus, or Marburg virus). Finally, in certain specific cases (to detect simian Herpes B virus), rabbits might also be recommended.

An obvious sign of infection in animals or hens’ eggs is death. In fact, this is the most obvious readout of the in vivo tests that are routinely employed, which require that 80 % or more of inoculated animals or eggs survive the test.

However, other pathological signs or findings may also indicate a viral infection. While animals cannot be queried like humans can about symptoms, they can be observed for various signs. Weight loss is frequently observed in infections from which the animals recover and survive. Signs of illness may include ruffled fur or hunched posture, due to lack of normal grooming behavior or normal mobility from the animal feeling unwell. Other behavioral signs may suggest neurological impairment, particularly a sign like hind limb paralysis and its resultant impact on mobility of the animal. In the context of hens’ eggs, behavioral signs are unobservable, but pocking of the chorioallantoic membrane can be indicative of viral infection. The ability of the allantoic fluids to hemagglutinate is also indicative of viral infection.

Although not frequently used in adventitious agent testing, fever can be another clinical sign suggestive of infection. This sign is used in the rabbit pyrogenicity test for bacterial endotoxins, but not generally measured in viral tests. But, fever may contribute to feelings of malaise that may manifest in observable behavior and can be indirectly monitored in this fashion.

10.2.2.2.2 Tumor Formation

Although not used routinely for adventitious virus testing, in certain cases regulators have asked for novel cell substrates to be assessed for viruses that might be oncogenic in vivo. Such substrates would generally be restricted to those that are derived from tumors or have been shown to be tumorigenic themselves in animals, causing the regulators to question what the cause of the tumorigenic profile of the cells may be. Could the tumor have arisen from an oncogenic virus infection? In assessing for oncogenic viruses, regulators would generally ask for in vivo testing of cell lysates (lysed to release the purported virus) or cellular nucleic acids (to detect infectious viral genomes of oncogenic viruses). Unfortunately, these tests are currently neither validated nor controlled to demonstrate validity. However, work has been published on a sensitive model and a positive control for assessing cellular DNA oncogenicity (Sheng-Fowler et al. 2010; Sheng et al. 2008), and this was discussed in September 2012 at a Vaccines and Related Biological Products Advisory Committee meeting (FDA/CBER/OVRR 2012a, b). As a consequence of the lack of validation, these tests are not routinely employed.

Nonetheless, the concept is the following: were infectious oncogenic viruses or viral nucleic acids present in a cell substrate, they could cause the animals to develop tumors either at the site of injection or at remote sites where the virus may have circulated upon infecting the test animals. These tests require monitoring the animals for much longer periods of time than routine adventitious agent tests and palpating the animals to detect nodule formation. Histopathology of animals that develop nodules on study, and of those that do not by the time of the study endpoint, may permit detection of occult lesions or metastases. Improved animal models and the availability of a positive control that will not infect animals and contaminate an animal facility may see such methods increase in use in the future. However, the value of such a test in comparison to other emerging novel methods will need to be considered prior to routinely implementing a product safety test that requires the use of animals, given the current climate in which the reduction, refinement, and replacement (3 Rs) of animal use in product safety testing is being staunchly advocated.

10.2.2.2.3 Antibody Production

Animals inoculated with a specimen contaminated with viruses to which that species is susceptible may mount an immune response to the virus contaminant. These antibodies may be detected by use of an IFA or ELISA. In this way, specific viruses can be sought that are relevant to a particular species, e.g., hamster viruses that might contaminate Chinese Hamster Ovary (CHO) cells, which could be used to produce recombinant subunit vaccines. This is the basis for the HAP, RAP, and MAP tests, described in Sect. 10.2.2 above. Because these tests are for specific viruses, they can be replaced with other specific methods, e.g., PCR. Regulatory acceptance of these alternative tests is emerging, e.g., by the Office of Vaccines Research and Review at FDA. Acceptance of alternative methods is dependent on demonstration that they are equivalent or better than the traditional methods. A focus on relative sensitivity (LOD) and specificity (e.g., lack of interference from test sample matrices) is key to international convergence on replacing these animal-based assay methods.

10.2.3 Challenges with Currently Routine Tests

There are a number of challenges and difficulties faced when employing the currently routine suite of viral tests.

As with all assays, false positives may occur, leading to investigations and decision-making processes about whether a re-test is appropriate.

In the tissue culture tests, it is not infrequent that apparent positives can be linked to cross-contamination from the assay positive control virus. The animal-derived serum used in the culture media can also, sometimes, be a source of contamination of the assay, leading to a false positive for the specimen tested. Sometimes, cell monolayers do not maintain well over the required 2-week interval, leading to an inability to assess them adequately for viral CPE or giving a false impression of viral CPE. Test articles can be cytotoxic, also interfering with the test and giving inconclusive results. Occasionally, a “bad” lot of RBCs will result in very high background levels in the hemadsorption portion of the test, which appear positive or inconclusive.

PCR tests, being so highly sensitive, are also subject to a false positive rate. Test articles frequently interfere with the PCR reaction, requiring dilution of the test article to overcome the inhibition, thus reducing assay sensitivity. And as previously discussed, PERT assays can be subject to false positives or high background from cellular DNA-directed DNA polymerases or authentic RT expressed from endogenous, noninfectious retroelements.

The in vivo assays are also fraught with challenges. A poor or inexperienced dam may not suckle or tend her newborns properly, leading to a loss of a part or all of a litter. Or the pups that die may be cannibalized by the dam, precluding investigation into the cause of their deaths. Eggs can become bacterially infected and die from this, having nothing to do with the test article being contaminated. Or the test article may be toxic to the eggs. Even the adult mice can occasionally spontaneously die from unclear causes. If housed together, sometimes they fight and one may die from this pestering. All of these events can cause the appearance of a false positive or invalidate the test, leading to re-tests and loss of confidence in the results.

Currently, the test methods in different regulatory regions are not completely harmonized. Differences exist in the volumes, the routes of inoculation, the age of egg embryos at inoculation, the length of incubation, and other parameters. In addition, due to lack of specificity or clarity in the various requirements and guidances, differences in the tissue culture tests also exist. The impact of these differences on the sensitivity and specificity of the methods is unknown, because these tests have not been validated as newer methods are required to be. They are considered compendial and need only be verified. Thus, the true performance parameters of the methods are relatively unknown. Some work (Gombold et al. 2014) has been done to address this problem, but it is only a beginning and does not provide comparisons between various compendia and regulations, having only followed the US methods.

Other specific challenges are discussed below in more detail.

10.2.3.1 Neutralizing Antisera

It may be necessary to neutralize the vaccine virus in order to perform the test for viral adventitious agents in the panel of indicator cell cultures or in vivo systems, because the vaccine virus might replicate in the test system or, in the case of some replication-defective viral vectors, may simply lead to a cytotoxic defense response. Although this issue may be addressed during vaccine production by the use of control cells, the need to test the viral seeds or pre-seeds, to demonstrate that the input material into production is free from adventitious agents, makes it problematic to address. The cytotoxic response caused by inadequately neutralized vaccine virus might lead to complete cell death, or a subpopulation of cells might recover after some period. In the latter case, a judgment needs to be made as to whether the test could be considered valid if a large proportion of the inoculated cells died. Incubation of the vaccine virus harvest sample or seed with neutralizing antisera raised in animals, or with specific neutralizing monoclonal antibodies, can alleviate the potential for viral replication and might also alleviate the cytotoxic response.

Neutralization can be challenging, however. The antisera used must be “completely” neutralizing, because any vaccine virus not neutralized may break through, infecting the test system and causing positive results in the assay. Completely neutralizing antisera cannot always be raised against some viruses. For example, pox viruses are very difficult to neutralize completely, and this presents problems for adventitious agent testing of vectored vaccines based on pox viruses. Testing parallel control cells addresses some concerns, namely adventitious agents that may have arisen from the production cells, the culture media components, or the production process itself (equipment, environment, personnel). However, this does not address adventitious agents that may have arisen from the viral seed or from the species from which the isolate was derived, if the seed was not molecularly derived or cloned.

The species in which the neutralizing antisera are raised should not be susceptible to viral infections that may be adventitious in the production system; otherwise, antibodies not specific to the vaccine virus may be present in the antisera and risk neutralizing the adventitious viruses one is trying to detect. Determining whether interfering antibodies are present in antisera is not technically feasible, as one cannot know all the adventitious agents against which one might need to screen the antisera.

Consequently, there is an advantage to using monoclonal antibodies or raising antisera in Specific Pathogen Free (SPF) animals. SPF does not mean being free of all pathogens, nor of nonpathogenic (in that species) viruses, but only means being free of specific pathogens, as the term implies. Also, the degree of “SPF-ness” can vary, in that one can have differing numbers of pathogens from which a herd, flock, or colony of animals must be free. So, for different purposes, one may have a list of, e.g., 10 pathogens, 15 pathogens, or 25 pathogens, for which a herd, flock, or colony is monitored. All of these would be considered SPF, but some would be “more SPF” than others. Also, the list of specific pathogens may not be harmonized across regulatory regions because of different viruses endemic in various regions (which presumes the tests are being performed in the same region as the regulators who are reviewing the test data, which is not always the case).

Other concerns with use of a neutralizing monoclonal antibody or antiserum include the small dilution of the test sample, thus slightly reducing the sensitivity of the test. Also, there may be potential toxicity for the test system. In the latter case, reducing the antiserum concentration to nontoxic levels, while maintaining sufficient levels of neutralization to prevent break-through, may be a fine line that might not be reliably achievable.

All of these issues must be borne in mind when testing viral seeds and vaccine harvest material for adventitious agents in the test systems of living organisms and cell cultures. Each can complicate the reliability of test performance, or in some cases, even preclude it.

10.2.3.2 Dose Equivalents and Test Samples/Volumes

Unlike sterility test methods, for which sample quantities are specified, the various compendia and regulations have not always been clear regarding the amount of test article that should be applied to the test system. Different testing service providers apply differing amounts, and even among clients they may receive different concentrations and volumes of test sample. While the volumes for the in vivo tests are specified, the concentrations of the material applied are not always clear. In FDA regulations that were revoked in 1997 as being obsolete, restrictive, duplicative, or unnecessary, the test methods were described in the context of testing of viral harvests of specific vaccines, and so dose equivalents were given in terms of viral titers that would reflect a final container dose for that vaccine. The origins of the specific guidelines are very likely rooted in what was considered practical at the time the regulations were first promulgated. Cornfield et al. (1956) described the application of 500 dose equivalents for detecting residual infectious poliovirus after inactivation in order to provide a 1/100,000 chance (after multiple tests at different process stages) that any given dose might contain an infectious unit of poliovirus. Likewise, 500 dose equivalents (or a minimum volume of 50 mL, whichever was greater) was promulgated in the 21 CFR-mandated testing for measles, mumps, and rubella vaccines in the 1960s, although the concept was entirely different, since these were live vaccines and the tests were for adventitious agents rather than residual live vaccine virus following inactivation. This similar figure of 500 dose equivalents was not statistically derived, nor is there literature describing the probability of detection of an adventitious agent in a dose of vaccine, as Cornfield et al. did for inactivation of poliovirus. Similar to the revoked U.S. regulations, the European Pharmacopoeia (EP) section 2.6.16 (EP 2014a) specifies testing the greater of 50 mL or 500 dose equivalents for both virus seeds and harvests. However, for testing cells, cell lysates, spent culture fluids, or viral seeds (in the case of the U.S., although this is addressed in the EP as stated above), there was no clear guidance. The FDA guidance document finalized and published in 2010, “Guidance for Industry: Characterization and Qualification of Cell Substrates and Other Biological Materials Used in the Production of Viral Vaccines for Infectious Disease Indications,” made an effort to add clarity on the recommended amounts to test for these samples (FDA/CBER/OVRR 2010).

The recommended inoculation routes and volumes do not necessarily assure a consistent sensitivity for all potential adventitious agents that might be detected by a given method, and actually reflect practical capabilities scaled linearly by replicating flasks or animals. For instance, intracranial inoculation of 0.01 mL of a culture fluid into each of 20 suckling mice is unlikely to yield a volumetric sensitivity for the neurotropic virus LCMV comparable to that achieved for an influenza virus by testing 0.5 mL of the same fluids in each of 10 or 20 eggs. Similarly, testing 100 dose equivalents in eggs does not provide the same sensitivity as 500 dose equivalents in cell cultures. Testing 10⁷ cell equivalents as part of cell substrate characterization in animal and cell culture systems is unlikely to represent the same sensitivity as testing the maximum nucleic acid load per well in a PCR assay for a specific virus (typically ~0.5 µg, representing roughly 10⁵ cell equivalents). Consequently, even among the existing routine tests, the same level of sensitivity is not expected for each method, and the sample volumes tested reflect the practical needs of the method rather than a statistically determined sample, which would be the ideal approach.
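
The following back-of-the-envelope arithmetic makes the scale of these differences explicit. It simply restates the volumes and loads cited above; the per-cell DNA content (~6.6 pg for a diploid mammalian cell) is an assumed typical value used only for illustration.

```python
# Rough comparison of how much material the different routine tests interrogate.
suckling_mice_ml = 20 * 0.01      # 20 mice x 0.01 mL intracranial inoculum
eggs_ml          = 10 * 0.5       # 10 eggs x 0.5 mL inoculum
print(suckling_mice_ml, eggs_ml)  # 0.2 mL vs 5.0 mL of the same fluid

dna_per_cell_pg  = 6.6            # assumption: diploid mammalian genome
pcr_load_ug      = 0.5            # typical maximum nucleic acid load per PCR well
cell_equivalents = pcr_load_ug * 1e6 / dna_per_cell_pg
print(f"{cell_equivalents:.1e}")  # ~7.6e4, i.e., roughly 1e5 cell equivalents,
                                  # versus the ~1e7 cells examined in the
                                  # in vivo / cell culture characterization tests
```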

Finally, lack of contamination cannot be completely assured unless the entire batch or lot is tested in all suitable assays—an obvious impossibility. In fact, all sampling strategies are a trade-off between how much material can be sampled and the concentration of the contaminant that can be detected with that amount of sample. Even the strategies described above do not assure absence of potential contaminants or risk, but rather provide a level of assurance that, in the context of validated or qualified manufacturing processes, there has not been a catastrophic failure in the system. This is analogous to the relatively small volumes of material tested in the compendial “sterility” tests, which only support product sterility by indicating that there has not been a catastrophic breach in the validated sterile processes. A corollary to this assertion is that low-level contaminants or nonhomogeneously distributed contaminants are unlikely to be detected in the routine tests. For this reason, testing should not be the sole basis on which to assure product biosafety. Appropriate sourcing and quality control of raw and starting materials, adherence to Good Manufacturing Practices, environmental and personnel monitoring, process validation, and finally, testing as verification together form the package needed for maximal assurance of biosafety. We address the inherent and necessary connection between process and testing in the context of viral safety margin in Sect. 10.3.
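
The sampling trade-off can be made concrete with a simple probabilistic sketch: if infectious units are assumed to be randomly (Poisson) distributed in a bulk, the probability that a sample contains at least one unit is 1 - exp(-concentration x volume). The concentrations and volumes below are arbitrary illustrative values, and a perfect assay is assumed.

```python
# Minimal sketch of detection probability versus sample size for a randomly
# (Poisson) distributed contaminant; values are illustrative only.
from math import exp

def p_detect(conc_per_ml: float, sample_ml: float) -> float:
    """Probability the sample contains >= 1 infectious unit (perfect assay assumed)."""
    return 1.0 - exp(-conc_per_ml * sample_ml)

for conc in (1.0, 0.1, 0.01):         # infectious units per mL in the bulk
    for vol in (1.0, 10.0, 50.0):     # mL of bulk actually tested
        print(f"c={conc} /mL, v={vol} mL -> P(detect) = {p_detect(conc, vol):.2f}")

# Even a 50 mL sample gives only ~0.39 probability of containing a single
# infectious unit when contamination is at 0.01 units/mL, which is why testing
# alone cannot assure absence of low-level or nonhomogeneous contamination.
```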

10.2.3.3 Is Anything Missing in the Current Methods?

Besides the obvious answer of yes, it must be noted that the vaccine industry has successfully relied upon the current suite of tests for decades now. Although some viral contaminations have gone undetected (e.g., infectious porcine circovirus in rotavirus vaccine), these examples are few and far between. Most adventitious agents are detected prior to release of product into the clinic or onto the market, and often even before downstream processing has occurred, or at the stage of cell substrate or viral seed qualification, before production has even begun. In the era of cGMP (since the 1970s) and of well-characterized cell banks for production (since the 1980s), viral contamination events are relatively uncommon. Nonetheless, they are inevitable due to the biological platforms used for manufacturing, and continue to occur despite scrupulous measures to avoid them. Thus, testing strategies must continue to account for newly emerging threats, as well as for the most common of these (albeit uncommon) contamination events.

Further, it should be acknowledged that the current general screening test methods do not detect numerous animal viruses that exist in nature. As noted above, specific PCR tests fill some gaps, when deemed relevant. Many of these viruses are incapable of propagating in cell culture, and thus are unlikely to be present after viral vaccine strain development and vaccine production; they have therefore been safely “ignored” after due consideration that they did not pose a viable or significant threat. Table 10.1 illustrates the expected capabilities of the existing routine tests to detect or not detect important representatives from families of viruses, based on a good-faith review of the diagnostic virology literature—green indicating a generally suitable combination, yellow suggesting either limited applicability or a need for unique conditions, and red indicating generally not considered suitable for detection (viral families appear alphabetically).

Table 10.1 Viral families and their potential to be detected by the indicated test methods

The porcine circovirus example is illustrative. The virus is not readily detected in conventional tests, and therefore was able to propagate in a cell substrate without notice. Specialized tests for porcine circovirus might not have been requested previously, since it was probably considered unlikely to propagate in manufacturing substrates and was not known to be pathogenic to humans. However, even when an agent does not pose a significant safety concern, the purity of vaccines must also be considered when thinking about adventitious agents. What may appear to be safe may still not be pure or suitable. The PCV event made it clear that regulatory agencies do not want infectious adventitious animal viruses in vaccines (consistent with language about “demonstrable viable” viruses in previous regulations), although remnants of inactivated organisms may be present and may be tolerable as impurities.

Some of the gaps in coverage by the tests listed in Table 10.1 are addressed by recommendations for specific PCR and, in the case of retroviruses, by a PCR-based reverse transcriptase assay. For instance, neither Hepatitis B nor Hepatitis C virus propagates in cell culture, and HIV requires particular kinds of cells, which need nonstandard culture media supplements, such as IL-2, to propagate well in culture, or specialized engineered cell lines with appropriate receptors. These culture conditions or specialized cell lines are not reflected in the current tissue culture tests. However, because of the severe impact of such potential contaminants, even the extreme unlikelihood of their presence or inability to propagate in most cell substrates has been deemed by regulators an insufficient rationale not to test for them in appropriate settings (e.g., when human cells are used in production). This point introduces one of the concepts of risk assessment discussed in greater detail in Sect. 10.3.

Another issue that challenges any viral test method is that viruses are so highly variable. Variants may occur that are not detectable by a specific test, even though most variants or strains are detectable. Strain differences and even single nucleotide mutations can result in changes in tropism (susceptibility of the test system to infection), viral fitness (ease of infection and/or replication), or pathogenicity (readout in in vivo tests and concern for human recipients). Each of these types of changes can result in variants to which a particular test system may become refractory or lose sensitivity.

The ability of the test systems (cell lines or animals/eggs) to detect infections can be affected by species barrier, tissue-specific tropism, a possible need for the virus to adapt to the culture system, and whether the readout would actually reveal the contaminant if it were present. For instance, neither the SV40 contamination of primary macaque kidney cells nor the PCV contamination of a Vero cell substrate was revealed as an infectious contaminant until tested on a different cell substrate (SV40 on African green monkey kidney cells) or with a different method (for PCV, modern genomic testing). In the best case, however, the cell culture and in vivo methods can potentially detect viruses that are not known today but may emerge in the future, as long as they can infect the systems and produce the same readouts as viruses we currently know (i.e., cytopathic effects, hemagglutination/hemadsorption, death, overt illness, pocking).

Existing molecular methods cannot be taken for granted either. Viral variants are common, and have the potential to escape detection if their sequences do not closely match those for which molecular tests were designed. At the very least, molecular methods should be reviewed regularly to assure coverage of the most recent viral sequences. In the best case, primers and probes developed against conserved regions might be relatively resistant to some of the viral variation, allowing detection of novel strains of viruses that share those conserved regions with known strains.
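
One practical form such a periodic review could take is a simple in silico screen of assay primers against newly reported sequences. The sketch below uses hypothetical sequences and an assumed mismatch tolerance purely for illustration; it is not drawn from any actual assay.

```python
# Minimal sketch of a periodic primer-coverage check against variant targets.
def mismatches(primer: str, target: str) -> int:
    """Hamming distance between a primer and an equal-length target region."""
    return sum(1 for p, t in zip(primer, target) if p != t)

PRIMER = "ACCTGGTCATGA"                   # hypothetical assay primer
variant_targets = {
    "reference": "ACCTGGTCATGA",
    "variant_1": "ACCTGGTCGTGA",          # 1 mismatch, likely still detected
    "variant_2": "ACTTAGTCGTGC",          # 4 mismatches, may escape detection
}

TOLERANCE = 2                              # assumed mismatch tolerance
for name, seq in variant_targets.items():
    n = mismatches(PRIMER, seq)
    status = "ok" if n <= TOLERANCE else "REVIEW primer/probe design"
    print(f"{name}: {n} mismatch(es) -> {status}")
```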

Another issue that challenges the more historical methods is that they have not been subjected to systematic assay validation as the International Conference on Harmonisation and the pharmacopoeias recommend or require for modern assays—and arguably they cannot be validated for all agents they might be expected to detect. Typical verifications of the compendial cell culture-based methods, for instance, might utilize only a few viruses (not unlike qualification of sterility tests), and the in vivo assays have never, until recently (Gombold et al. 2014), been challenged systematically to our knowledge. Compounding the lack of validation of certain conventional methods are the variety of cell and animal strains being used, the variety of culture conditions or inoculation and incubation conditions, and the lack of widely accepted standards with which to establish performance parameters (for limit tests, primarily sensitivity and specificity).

The existing assay methods remain largely unharmonized between the major regulatory regions (e.g., U.S., Canada, EU, and Japan) in terms of the exact details of how to perform the tests. The impact of small differences on assay performance is unexplored; for instance, the inclusion of additional routes of inoculation in the in vivo systems, which one would think could improve the sensitivity, may paradoxically result in interference and thus a diminution in sensitivity or specificity. Likewise, reducing the volume inoculated into eggs or changing the age of the embryos at the time of inoculation could be seen as potentially reducing the sensitivity, or as improving it by reducing toxicity effects from the test article. The impact of such test variations is unclear, because the performance parameters of these methods are generally unknown.

Finally, as new production systems are explored or incorporated into a license for new vaccines, challenges to detect unique adventitious agents will arise. For instance, should we worry about plant viruses or most insect viruses (beyond those that cause vector-borne infections in humans)? Arguments about previous exposure to plant viruses via foods are obviously inadequate, since many medicines are injected and therefore bypass natural immune mechanisms. Arguments about potential for recombination and unanticipated consequences can seem theoretical at best and quite speculative at worst. If regulators and manufacturers are operating in an information vacuum, it will be difficult for them to say there is no cause for concern. Furthermore, while plant or most insect viruses, for instance, may not seem like safety issues to human recipients or potentially capable of giving rise to emergent human pathogens, they nonetheless remain an impurity concern. They also have the potential to negatively impact the manufacturing consistency of plant- or insect-based production systems, just as MVM or vesivirus 2117 contaminations have negatively impacted manufacturing of CHO cell-derived therapeutic proteins, causing lengthy and costly facility shutdowns and remediation, and even supply shortages of important medicines. The current suite of tests, with the exception of TEM, is essentially incapable of detecting plant viruses, and many insect viruses would also be missed unless specific PCRs are incorporated. Illustrating the TEM exception, an insect virus contaminant in a production insect cell line was detected by careful evaluation of micrographs of an unusually large number of cells, more than would typically be examined. But, as this would not be done routinely, this approach could not be relied upon for this purpose.

So, in summary, the current suite of tests, though reasonably robust and largely reliable, has limitations and leaves certain gaps and room for improvement when developing scientifically driven testing strategies.

10.2.3.4 Toward Global Safety Standards

Efforts have been made to harmonize viral safety guidance, but the major harmonized guidance [ICHQ5A(R1), International Conference on Harmonisation 1999] does not include viral vaccines within its scope. Major international guidance on viral safety applicable to viral vaccines is available from the US FDA (FDA/CBER/OVRR 2010) and WHO (Petricciani 2010); and from pharmacopeia including the European Pharmacopeia (EP5.2.3 for animal cell substrates, 2.6.16 for viral vaccine seeds and harvests) (EP 2014a, b, respectively), Japanese Pharmacopeia (2011), and likely other similar pharmacopeia from other countries. While the various regulations and guidances are relatively consistent, there are inevitable differences. The extent to which differences are accommodated when new products are registered or existing registrations are updated in different regions is not entirely clear. We cite a few differences here that are relevant to viral safety analytics:

  • EP2.6.16 mandates use of control cells for viral vaccines, whereas FDA and WHO guidances suggest circumstances in which they might be useful, but do not mandate them for all viral vaccines. None of the guidances clarify what is recommended to be done for control cultures of suspension-adapted cell lines used to produce viral vaccines or vectors, or recombinant protein vaccines made in viral-vectored expression systems.

  • WHO guidance and EP2.6.16 specify testing vaccine harvests and seeds for avian viruses using embryonated eggs, only if the vaccine is made in avian cell cultures or eggs, while FDA guidance appears to mandate testing in eggs regardless of the animal cell substrate used for manufacturing. Both EP 2.6.16 and FDA specify a 100-dose equivalent requirement for testing. EP5.2.3 guidance for cell substrates specifies testing any animal cell substrate in embryonated eggs, and there are minor differences in the method description compared with EP.2.6.16.

  • EP5.2.3 and WHO specify 4-week observation of IP-inoculated adult and suckling mice, while EP2.6.16 and FDA specify 21-day observation of IP- and IC-inoculated adult mice (WHO recognizes the IC route in small print). Both FDA and EP2.6.16 specify an initial 2-week observation of suckling mice, but only FDA specifies a blind passage of tissues from surviving mice into another set of suckling mice for an additional 2 weeks (WHO recognizes the FDA option in small print). [The recent work by Gombold et al. 2014, suggests this blind passage does not result in enhancement in sensitivity of the test.]

  • FDA, WHO and EP2.6.16 specify inoculation of guinea pigs (IP all; IC only FDA, but recognized by WHO in small print) followed by observation for 42 days to detect Mycobacterium tuberculosis and evidence of LCMV or other viruses. FDA allows for the test for M. tuberculosis to be replaced by validated in vitro culture and PCR methods (WHO allows shortened culture with PCR endpoint). It is unclear if the 42-day test would still be required exclusively to detect LCMV if another test is performed for M. tuberculosis. EP5.2.3 does not specify guinea pig testing.

Regulators in emerging markets appear to be adopting or adapting WHO or ICH/EP-like guidance or they might develop their own guidance. The development or reevaluation of standards in existing and emerging markets presents a unique opportunity to drive toward global standards for viral safety. This is in keeping with the WHO’s position that whether a vaccine is manufactured in or for a developing country or a developed one, the minimal safety standards must be the same.

10.2.3.5 Innovation

FDA regulations permit substitution of new tests for existing ones per 21 CFR 610.9 (Code of Federal Regulations 2012a), which states that doing so is only permissible when there is evidence that the assurances of the safety, purity, potency, and effectiveness of the product provided by the new method are “equal to or greater than” the assurances provided by the old method or the compendial method. This regulation does not clarify how to go about producing such evidence, but only permits it to be done.

If the evidence can be provided using the same units of measure, or if head-to-head comparisons of the methods can be made, it is relatively straightforward to develop the required evidence. Sensitivity of the existing cell culture-based adventitious viral tests is defined in terms of an infectious virus input, which is typically qualified by means of spike recovery studies for unique test article matrices. Thus, the LOD is defined in number of plaque-forming units (PFU) or TCID50 (the tissue culture infectious dose that infects 50 % of wells in a cell culture assay), i.e., infectious units. Rules around the ethics of animal usage, and simple practical concerns, preclude routine qualification of animal-based tests by infectious virus spike recovery studies.

In contrast, for some of the newer methods, e.g., PCR or any of the methods that detect viral nucleic acids, the sensitivity may be reported in genome copies/reaction or per volume of test article, or some other similar measure. Most viral preparations contain large numbers of noninfectious or defective viral particles in addition to the infectious ones. Those defective particles may contain nucleic acids, but not contribute to propagating an infectious contamination. In fact, residual detectable nucleic acids may be present even when all infectivity has been neutralized or inactivated, or when the viral preparation is intentionally of replication-incompetent or defective viruses. Therefore, there would be no consistent or clear-cut concordance between a quantity of nucleic acid and an infectious unit.
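
A simple numerical sketch illustrates why no single conversion supports a head-to-head comparison. The NAT limit of detection and the particle-to-infectivity ratios below are assumed values chosen only to span plausible scenarios; they are not drawn from any specific assay or virus.

```python
# Illustrative only: the same genome-copy LOD maps onto very different
# infectious-unit LODs depending on the (virus- and preparation-dependent)
# ratio of genome copies to infectious units.
nat_lod_copies_per_ml = 100.0          # hypothetical NAT limit of detection

copies_per_infectious_unit = {         # assumed scenarios
    "low particle:infectivity ratio":  10.0,
    "high particle:infectivity ratio": 1e4,
    "inactivated material":            float("inf"),  # nucleic acid, no infectivity
}

for scenario, ratio in copies_per_infectious_unit.items():
    infectious_lod = nat_lod_copies_per_ml / ratio
    print(f"{scenario}: {nat_lod_copies_per_ml:.0f} copies/mL "
          f"~ {infectious_lod:.3g} infectious units/mL")

# The same 100 copies/mL corresponds to 10, 0.01, or 0 infectious units/mL in
# these scenarios, so no single conversion factor allows a direct comparison
# with infectivity-based methods.
```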

This same problem has challenged efforts to compare a PCR-based method for detecting mycoplasmas with the standard tests. Some efforts have been made to develop standard reagents that are controlled for the ratio of genomic copies to infectious colony-forming units in order to facilitate comparison and to validate the relative sensitivity of the methods. Might a similar approach be considered for viruses?

10.2.4 Emerging Analytical Capabilities

New capabilities are being developed to detect viruses or their components (e.g., nucleic acids) in response to a variety of needs. Among these needs are the recognition of new or emerging potential threats, recognition of the limitations of existing methods, a need for results in less time (both to support some new types of products and to enable rapid response to actual contamination events), and a desire to reduce or replace animals used for product safety testing. Arguably, much of the impetus and funding for new technology development in the last decade has been related to biodefense and rapid characterization of emerging disease threats. Nonetheless, those involved with biosecurity, public health, clinical diagnostics, and biopharmaceutical adventitious agent testing share a mutual interest in breadth of detection, rapid turnaround, cost control, and, where possible, simplicity and robustness in use.

Biopharmaceutical applications are demanding in terms of compliance issues, bridging to existing methods, and implications of results. True positive results from novel methods have to be evaluated for their implications on safety of products that could be administered to the most vulnerable populations, and both true and false positive results have the potential to interrupt supply of critical life-saving medicines.

This section addresses opportunities for advancement in both existing and emerging methods. Some general issues are presented as well as alternative approaches for implementing novel methods.

10.2.4.1 Improving Culture-Based Detection by Improving Conventional Readouts

We begin by briefly acknowledging some opportunities to improve on the current cell culture-based assays, which might arguably include standardizing the readouts, improving time to detection, and/or enabling sensitivity to viruses that are not otherwise readily detected.

One approach to improving the current culture-based assays would be to standardize the microscopic visual readouts (CPE, hemadsorption, immunofluorescence) by use of machine vision coupled with pattern recognition algorithms. Standardization could reduce analyst-to-analyst variation, reduce the training burden (especially in high-turnover laboratories), and potentially increase throughput while reducing labor costs. Rather than spending labor time reviewing the overwhelming proportion of normal (negative) cell sheet surface, labor time could be focused on verification of the much smaller proportion of questionable features in cell monolayers identified through pattern analysis algorithms. But despite progress in imaging, robotics, and analysis, off-the-shelf systems might not currently suffice for broad application in adventitious virus quality control testing. Assembling custom systems that have this capability presents formidable challenges, including but not limited to complexity of automation, rapid acquisition time, depth of field and focus (and thus clarity) in plastic vessels of differing dimensions, image storage and analysis time, and compliance with 21 CFR Part 11 (Code of Federal Regulations 2012b) requirements for computer-based systems. The business case is challenging as well, given the cost and time that would be required for implementation, the variable track record for successful automated image-based applications in quality control, the rapid obsolescence of technologies in the context of license/marketing authorizations in which details of testing are documented, the possibility of only limited reduction in laboratory FTE requirement, and perhaps also the fact that the diagnostic virology community is moving toward more convenient and often nucleic acid-based measures of viral infection.

Another approach to improving existing methods for virus detection might be the use of customized cell substrates or culture conditions. There are numerous descriptions of reporter cell lines developed for specific viruses (a few examples: HSV, CMV, VZV, BIV, Herpes B virus, alphaviruses, influenza, rubella), usually developed for the purpose of investigating or verifying specific viral infections. However useful for specific viruses in clinical settings, the promise envisioned in an excellent review (Olivo 1996) has not benefited biopharmaceutical testing. Others have considered recombinant cell lines overexpressing antiapoptotic genes, or primary cells treated with apoptosis inhibitors, as means of enabling greater viral replication and thus enhancing the detectability of transforming viruses (Sandstrom and Folks 2001 and references therein). Improved sensitivity or breadth could also be accomplished by choice of cell line, as well as by selection of clones with better assay characteristics. For instance, Gombold et al. (2014) demonstrated differences among cell lines used in conventional assays, suggesting, for instance, that HeLa and A549 cells were more sensitive than MRC-5 and Vero for adenovirus 41 and rhinovirus, whereas the A549 cell line was dramatically more sensitive than HeLa for adenovirus type 5. One company serving the diagnostic virology community commercializes prepared cell lines and mixtures of cell lines for more rapid detection of specific categories of viruses. Leland and Ginocchio (2007) reviewed many of these improvements relevant to the clinical diagnostic laboratory.

Few, if any, of these emerging capabilities have proven practical for biopharmaceutical adventitious agent testing. Conventional biopharmaceutical testing laboratories may not be inclined to take advantage of the more rapid readouts for narrower ranges of potential viruses because of the considerable increase in logistical complexity and cost of managing additional lines with only marginal added scientific value. Furthermore, the cost and complexity of qualifying numerous additional cell lines would represent a barrier to their implementation. Finally, an improved indicator cell line might well be suited to more rapid detection if the virus is present, but the duration needed to confidently call a result negative might not be easily shortened from the 28 days now typically required. Because the vast majority of tests return negative results, these approaches might not really improve the speed at which material can be released to the market, to the clinic, or for further manufacture. And the increased number of cell lines used can increase the potential for false positives, leading to re-tests and lengthy investigations that actually delay release, without necessarily enhancing safety.

10.2.4.2 Advances in Alternative Detection “Readouts”

Several technologies are opening the possibility of expanding the breadth of detection beyond that of the conventional tests, potentially encompassing all known, and possibly even as-yet-unknown, viruses. This possibility is so compelling that we are forced to ask whether these new methods are capable of providing greater assurance of biological safety than the conventional methods. Certainly in terms of “specificity” (i.e., breadth) they could, but whether their level of sensitivity would be adequate needs to be explored. There are data to suggest that some of these readouts might be less sensitive than a specific PCR, but the breadth is clearly far greater.

The advances that will be considered here are perhaps best described as alternative approaches to detecting viral nucleic acids or transcripts, and viral proteins. Rather than attempt a comprehensive review of the extensive scientific and commercial literature, we will attempt to capture the essence of the technologies that have shown (or arguably could show) promise in applications closely related to biopharmaceutical testing, and explore principles that will be essential in their standardization and qualification for regulated testing. A more detailed review of some of these methods is being prepared by a task force of the Parenteral Drug Association for publication in 2014, and in the proceedings, to be published by the PDA, of a conference on advanced detection technologies held in November 2013. Some of these methods are already being used for assay investigations and as part of viral risk assessments.

10.2.4.2.1 Next Generation Sequencing

Today, there are several mature and emerging platforms for nucleic acid sequencing for which excellent reviews are available (Niedringhaus et al. 2011; Glenn 2011; Metzker 2010; see Kolman and Onions this volume). The so-called “next generation” sequencing technologies differ from classical DNA sequencing by making available the individual nucleotide sequence of every template fragment analysed, rather than a single most-represented sequence of a population of fragments. Next generation sequencing platforms differ in the “read” lengths they generate, the practical depth per nucleotide, inherent error rates, cost, turnaround time, and flexibility for other analyses. For instance, de novo assembly is arguably easier with longer reads, which offer longer potential overlapping sequences. Thus, the sequencing technology chosen must be considered in terms of its capability to meet the needs of the intended use, as the differing technologies have different capabilities.

Generation of the raw sequences for a population of fragments is only the first step in the use of this technology to detect and identify potential viruses. Analysis algorithms are applied in which the sequences might be directly searched against viral databases, or further processed to improve quality of “hits” (i.e., by de novo assembly and/or translation in all reading frames prior to searching). The particular sample might contain cellular DNA and/or RNA, which affects both sequencing and analysis. The more nonviral nucleotides that are present, the more sequences that must be generated and analysed to detect the potential viral sequences.
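
To make concrete the relationship between nonviral background and the sequencing effort required, the following minimal sketch (ours, for illustration only; the viral fraction, read counts, and thresholds are assumptions, not data from any study) estimates the probability of observing viral reads under a simple Poisson model of random fragment sampling.

```python
import math

def prob_at_least_k_viral_reads(total_reads: int, viral_fraction: float, k: int = 1) -> float:
    """Probability of observing at least k viral reads if a fraction `viral_fraction`
    of sequenceable fragments is viral; uses a Poisson approximation to the binomial,
    which is reasonable when the viral fraction is small."""
    lam = total_reads * viral_fraction  # expected number of viral reads
    return 1.0 - sum(math.exp(-lam) * lam ** i / math.factorial(i) for i in range(k))

def reads_needed(viral_fraction: float, k: int = 10, confidence: float = 0.95) -> int:
    """Approximate total read count (coarse search) at which at least k viral reads
    are observed with the stated probability."""
    n = 1000
    while prob_at_least_k_viral_reads(n, viral_fraction, k) < confidence:
        n = int(n * 1.5)
    return n

# Illustrative only: if 1 sequenceable fragment in 10 million were viral, observing
# at least 10 viral reads with ~95 % probability requires on the order of 1e8 reads.
print(reads_needed(viral_fraction=1e-7, k=10))
```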

So approaches at the level of sample preparation and/or data analysis may have to be incorporated to either minimize these sequences (e.g., ribosomal RNA) or subtract signals from the population of sequences. However, an arguably better approach is to positively select potential viral sequences by matching against databases, which might require more sequencing and computational effort, but reduces the chance of systematically eliminating potentially meaningful sequences.

Once lists of virus accession hits are generated, these must be triaged if the viral database has not been rigorously curated. For instance, it is common to hit certain viral accessions that also contain cellular (nonviral) sequences. If only the nonviral sequences were hit in such a database accession, these can be regarded as false positive hits. Evaluating the relevance of hits also requires considering the depth and coverage of reads. Very narrow coverage of a viral gene or genome might be explained by residual nucleic acid, for instance from expression vectors used to make some of the biological reagents used in the method. Coverage of a full viral gene or genome might be consistent with a viral transcript or intact virus. Inferring the biological relevance of positive hits depends on the sampling scheme, sample preparation, and suite of controls; often additional and orthogonal methods are needed to evaluate potential positive results. Thus, extensive bioinformatics and virology expertise are needed to process the data.
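
As an illustration of the depth-and-coverage triage described above, the simplified sketch below (a hypothetical example, not any laboratory's validated pipeline) computes breadth of coverage and mean depth across a viral reference from a list of aligned read intervals; a handful of reads piled on one short segment is scored very differently from reads spread across most of a genome.

```python
def coverage_stats(alignments, reference_length):
    """alignments: list of (start, end) positions (0-based, end-exclusive) of reads
    aligned to a viral reference of length `reference_length`.
    Returns (breadth, mean_depth): the fraction of positions covered at least once,
    and the average per-position depth."""
    depth = [0] * reference_length
    for start, end in alignments:
        for pos in range(max(start, 0), min(end, reference_length)):
            depth[pos] += 1
    breadth = sum(1 for d in depth if d > 0) / reference_length
    mean_depth = sum(depth) / reference_length
    return breadth, mean_depth

# Hypothetical case 1: 50 reads piled on a single 300-nt segment of a 7,500-nt genome
narrow = [(1000, 1300)] * 50
print(coverage_stats(narrow, 7500))   # breadth ~0.04, mean depth 2.0

# Hypothetical case 2: reads tiled across the genome, more consistent with intact virus
broad = [(i, i + 150) for i in range(0, 7350, 150)]
print(coverage_stats(broad, 7500))    # breadth ~0.98, mean depth ~1.0
```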

Sequencing approaches that do not rely on predefined targeted primers are relatively unbiased in the sequences that can be generated. If identifications are only accomplished by matching to existing viral databases, however, there is some potential bias contributed by the breadth and depth of the database. Truly novel sequences that do not match well against the existing database might not be recognized as representing a novel virus unless they are assembled without scaffolding against an existing viral reference sequence. Thus, the unbiased potential of next generation sequencing is only realized when the possibility of novel viral sequences in the unmatched read population is addressed. The turnaround time, in our experience, for a study from extracted nucleic acids to final analysis has typically been at least several weeks.

Next generation sequencing has been used effectively to characterize live virus vaccines for potential unexpected viral and microbial sequences (Victoria et al. 2010) and is now available as a commercial service specifically for virus detection at contract research laboratories. Next generation sequencing has been applied after other similar sample preparation strategies (de Vries et al. 2011) as well as after highly multiplex amplification of numerous potential viral targets (Hall et al. 2012).

Efforts will be needed toward standardization of databases and viral spikes for assessing sensitivity, as well as data-sharing, in order for practitioners and regulators to converge on the most meaningful sample preparation and analysis strategies.

10.2.4.2.2 High-Density Microarrays

Detection/identification microarrays use oligonucleotide probes designed to hybridize with known viral sequences, typically at multiple sites across the viral gene or genome. The probe design operation is performed for as many viruses or viral sequences as desired—the highest density chip to date covers all sequenced viruses, bacteria, and fungi and incorporates ~388,000 probes (Munroe 2011). Amplification strategies have been applied prior to array analysis to enhance sensitivity (Erlandsson et al. 2011). In contrast to de novo sequencing, the bioinformatic analysis of arrays is essentially performed prior to ever running a sample on the array—in the probe design phase. After a sample is actually run on the array, an analysis algorithm calculates the probability of a positive hit for viruses based on factors like signal intensity, coverage across multiple targets within the same virus, and hits to closely related viruses. Access to such arrays is now available on a fee-for-service basis from at least one commercial testing laboratory. The turnaround time from the random amplification/labeling reactions to final results can be about one day.
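
The analysis algorithms of the commercial arrays are proprietary, but the general idea of combining signal intensity with coverage across the multiple probes designed against a given virus can be conveyed with a deliberately simplified, hypothetical scoring sketch such as the one below (probe counts, intensities, and threshold are invented for illustration).

```python
def score_virus(probe_intensities, background, threshold_factor=3.0):
    """probe_intensities: signal values for all probes designed against one virus.
    A probe is counted positive if it exceeds `threshold_factor` times the background.
    The returned score combines the fraction of probes positive (coverage across the
    targeted regions) with the median intensity of the positive probes."""
    positive = sorted(x for x in probe_intensities if x > threshold_factor * background)
    fraction_positive = len(positive) / len(probe_intensities)
    if not positive:
        return 0.0, fraction_positive
    median_signal = positive[len(positive) // 2]
    return fraction_positive * median_signal, fraction_positive

# Hypothetical values: 12 probes tiled across one viral genome, background = 100 units
intensities = [950, 1200, 80, 1100, 990, 1500, 120, 860, 1300, 75, 1020, 1150]
print(score_virus(intensities, background=100))  # 9 of 12 probes positive
```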

Re-sequencing arrays are designed with a narrower goal: to confirm the nucleotide sequence of a limited range of viruses or viral genes. Importantly, their objective is to evaluate the sequence of targeted regions by using probe sets that present, for one base position at a time, all four possible bases at that position. These arrays are typically designed for pathogens of interest, for instance in biodefense (Leski et al. 2011) or specific public health situations (Berthet et al. 2010), where detailed sequence information is needed very rapidly in order to formulate a response. The utility of these types of arrays for adventitious agent testing, where the goal is detection, is less clear, though they may be useful in manufacturing investigations. Such investigations are triggered when an adventitious agent has been detected and are an important component of quality assurance, helping to identify the source of a contaminant and to suggest potential corrective and preventive actions; in this manner, arrays may be useful for biologicals production.

10.2.4.2.3 Mass Spectrometry (MS)

Protein mass spectrometry by MALDI-TOF has been quite effective in identifying bacteria biologically amplified in culture, and even from clinical samples (Wieser et al. 2012). Improved workflows are even enabling liquid chromatography (LC)-MS/MS-based strain typing (Karlsson et al. 2012). Application to detection of viruses, however, is limited by sensitivity and by the complexity of typical viral samples. Recent reviews of progress in proteomic analysis of viruses and virus-host systems demonstrated the increasing capability of MS, combined with separation techniques, to rapidly characterize viruses and virus-host interactions from relatively complex samples (Zheng et al. 2011; Zhou et al. 2011). Despite advances in sample preparation and analysis, proteomic studies have focused on a relatively small number of specific viruses (HCV, dengue, HIV, influenza, SARS, RSV, and a small number of others, as reviewed in Zheng et al. 2011 and Zhou et al. 2011). The studies these authors reviewed were primarily directed at understanding pathogenesis and biomarkers of infection and other characterization of known viruses, rather than detection and identification of unknown viruses in samples.

Some investigators are, however, exploring the detection of unknown viruses in complex samples as a potential diagnostic tool. Ye et al. (2010) detected vaccinia proteins in cultured human lung fibroblast cells infected with an unidentified viral culture isolate when the infected cultures exhibited ~60 % CPE. Sample preparation included detergent lysis and clarification of a supernatant fraction from cell pellets, cleanup, protein separation by either 1-D or 2-D gel electrophoresis, excision of bands or spots of interest, in-gel digestion, and analysis of peptides by LC-MS/MS. The authors also explored the use of multiple proteases for in-gel digestion, which increased coverage of proteins detected from the 2-D gel preparations to as much as 89 %.

Konietzny et al. (2012) recently demonstrated the first detection of BK viral proteins from urine by LC-MS/MS, though only after what was described as a slightly complex differential centrifugation/ultracentrifugation/filtration protocol to enrich for viral proteins. Observed peptide sequences were searched against a customized protein database. The method distinguished subtypes that could correlate with differing clinical significance. Importantly, algorithms are being developed to analyze the complex data from MS-based studies of complex samples, even whole viral digests, particularly for influenza: FluAlign and FluGest (Schwahn et al. 2009), FluTyper (Wong et al. 2010), and FluShuffle and FluResort (Lun et al. 2012). Application of proteomic approaches to viral detection will depend primarily on sample preparation workflows and separation techniques to improve sensitivity. It is less likely that hardware and analysis algorithms will be rate-limiting to this application. Other factors likely to affect wide adoption, as the technology matures, include the initial cost of systems and the expertise to run and manage them.

In contrast to the MS-based proteomic approach, analysis of short regions of nucleic acid sequences specifically amplified from samples has been very successfully applied to detection and identification of viruses and other agents (a few recent articles include Chen et al. 2011a, b; Deyde et al. 2011; Sampath et al. 2012). This technology is commercialized and finding some application in the biopharmaceutical arena (Sampath et al. 2010). The database against which observed amplicon masses are compared is proprietary, but is curated from public databases. This approach can be used for viruses, bacteria, fungi, and mollicutes. The claimed sensitivities, reported in the literature as limits of detection (LOD) of the method, are somewhat dependent on the matrix of the specimen. The claimed LODs are in the range of 100–10,000 copies/mL for viruses (Chen et al. 2011b), on the order of 1,000 copies/mL for bacteria and fungi (Ecker et al. 2010), and as low as 5 copies/mL for mollicutes (Lawrence et al. 2010). The turnaround time for analysis from extracted nucleic acids to result can be about one day.

10.2.4.2.4 Other PCR-Based Methods

A variety of additional PCR-based methods have been evaluated for detection and identification of narrower subsets of viruses. Several multiplex approaches for detection of respiratory pathogens have been reviewed in the context of clinical diagnostics, each with a different mechanism for detecting the amplified signal (analogous to the ESI-MS approach noted above; Caliendo 2011). These methods are narrowly scoped for a specific clinical application. A novel approach to very highly multiplexed PCR, with a next generation sequencing readout, has been demonstrated (Porreca et al. 2007; Li et al. 2009; Kozal et al. 2012). This technology has been developed as a means of enriching the population of sequences for targets of interest. Since the targeting oligonucleotides are short (typically <100 bp), must be designed based on known sequences, and would need to be tiled across numerous conserved targets in viral genomes of interest, this strategy shares the sequence specificity and design considerations of both conventional PCR and microarrays, coupled with the turnaround time and analysis considerations of next generation sequencing.

An alternative approach, using degenerate oligonucleotide primers without prior sequence knowledge of the specific viral target, offers promise as a near-universal assay (Uhlenhaut et al. 2009). In this case, amplification used an oligonucleotide primer demonstrated to detect a range of viruses (Nanda et al. 2008), and discrete amplification products were isolated from gels, cloned into sequencing vectors, and amplified in bacteria. Identification was accomplished by conventional sequencing. This procedure could be challenging if there were numerous amplification products. Use of next generation sequencing as the readout after degenerate amplification, instead of cloning and conventional sequencing, might be a compelling alternative (McClenahan et al. 2014). Its relative analytical performance in comparison to other broad detection methods needs to be established, and consideration needs to be given toward how a large panel of degenerate primers could be controlled and implemented for regulated testing.

10.2.4.2.5 Using Antibodies

Novel signal amplification approaches have been used with specific antibodies. For instance, oligonucleotides can be conjugated to monoclonal antibodies, enabling PCR-based detection of bound antibody. This method detected rotavirus with ~1,000-fold greater sensitivity than an optimized ELISA, and with clear separation between positive and negative stool samples even after 10,000-fold dilution of the positive stools (Adler et al. 2005). This method has also been applied to detection of the pathogenic isoform of prion protein in bodily fluids, with a claimed sensitivity, based on recombinant PrP spikes, of 10 pg/mL (König et al. 2006).

Other approaches use antibodies as part of a separation system to enrich for specific viruses that can then be characterized by other methods. For instance, Chou et al. (2011) used a specific monoclonal antibody conjugated to magnetic nanoparticles to concentrate influenza particles from the allantoic fluid of embryonated eggs, and detected them by mass spectrometry with a sensitivity of ~1,000 ID50 of influenza virus per mL. Given the narrow breadth of detection, however, these approaches may not be suitable for broader viral safety testing of biopharmaceutical processes, although consideration should still be given to the sensitivity of prion detection in biologicals by this approach.

10.2.4.3 Some General Issues for Emerging Detection Methods

An important aspect of analytical validation is the demonstration that the method is suitable for its intended purpose. Here we focus on a brief characterization of features we consider important in determining “fit” for biopharmaceutical applications of the nucleic acid-based detection technologies. Most important of these features is clarifying the difference between a method designed to detect and one designed to identify in the context of adventitious agent testing. Both are important, but detection is paramount for a limit test for impurities.

10.2.4.3.1 Detection Versus Identification

Especially in the context of the emerging methods, the difference between identifying and detecting adventitious viruses needs to be clarified. The performance parameters that must be characterized during validation differ between an identity test and a limit test (for detection of impurities) [ICH Q2(R1) Validation of Analytical Procedures] (ICH 1996). An identification test must be specific, i.e., capable of distinguishing related viruses at a definable level of difference. But any identification assay also has an inherent and definable level of sensitivity, though when an assay is used exclusively for identification, its sensitivity has not traditionally been required to be defined. In fact, identifications are typically performed only after isolating or significantly enriching the particular contaminant.

In sharp contrast, sensitivity is the critical attribute of a detection assay (limit test for impurities), although specificity also needs to be defined. As a result, assays to detect viral agents have historically tended to evaluate relatively larger volumes or amounts of test article to achieve greater sensitivity. A further complication for establishing comparability of new methods is that detection assays have historically been calibrated in functional units (infectious units per volume of virus preparation based on a cell culture or animal-based infection system). Functional units typically vary from one infection system to another, and units do not necessarily equate with (though should not exceed) total viral particles or viral genomes. Specificity is also a required attribute of a detection (limit) assay, and is arguably of greater importance for novel assays where detection is based on recognition of a specific nucleotide sequence or protein “signature.”

Thus, when considering whether a novel method is suitable to replace or supplement an existing detection assay for adventitious agents, the sensitivity of the novel method needs to be considered. Some of the new methods that are inherently capable of identification also lend themselves to alternative sample preparation methods that can enhance sensitivity, making them arguably suitable as detection methods.

10.2.4.3.2 Preparation of Nucleic Acids for the Readout

Viral nucleic acids may be composed of DNA or RNA. Starting from the simplest approach, total nucleic acids from a sample can be isolated and introduced either directly or after amplification steps. If nucleic acids from production cells are extracted, then cellular nucleic acids (genomic DNA, ribosomal RNA, cellular transcripts) could predominate and potentially limit the amount of cell equivalents or sample volumetric equivalents that can be introduced into subsequent reactions. On the other hand, cellular DNA allows detection of proviral sequences, which may be desirable. Total nucleic acids, or total RNA or total DNA, can represent viral genomes or potential transcripts (in the case of RNA) that are being produced within the cell—whether or not encapsidated. Messenger RNA (transcripts) reveals viral gene expression within a cell, suggesting active infection. Enrichment for mRNA (albeit both cellular and some viral) can be accomplished by annealing to beads coated with short poly-T sequences, since most mRNAs are polyadenylated at the 3′ end. However, some virus families do not polyadenylate mRNA, so would not be recovered by this approach (notably Flaviviruses, Reoviruses, Bunyaviruses). Where RNA is isolated, it is converted to cDNA prior to hybridizations or amplification steps. Total nucleic acids are well suited to detection of known virus sequences by amplification using specific or degenerate primers; random amplification may be less useful, since all nonviral sequences would also be amplified. See Fig. 10.1.

Fig. 10.1 Sample selection and preparation determine what can be detected

Total nucleic acid recovery from samples complicates how sensitivity is defined and demonstrated. Such methods have the ability to recover nucleic acids from intact viral particles, cell genomes, and transcripts. Spike recovery experiments to establish sensitivity would arguably need to address each of these possibilities, though perhaps a worst case could be used in routine testing (for instance, an RNA virus spike since it would reflect release from viral particle as well as recovery of less stable RNA).

A common critique of total nucleic acid extraction is that, during sample preparation, it can recover nucleic acid remnants remaining after inactivation treatments of medium components, raw materials, and other reagents. Therefore, a complementary approach is extraction from intact viral particles, since intact viral particles with intact genomes represent potentially infectious agents. Extraction of nuclease-protected DNA and RNA recovers viral genomic nucleic acids, but could also recover some histone-protected cellular chromatin and/or free nucleic acids that remain because nuclease activity declined or because their concentration fell below the affinity of the nuclease enzymes. These nuclease-resistant nucleic acids from cells, culture supernatants, or raw material fluids are well suited to random-primed amplification steps.

Additional treatments, such as ultracentrifugation or ultrafiltration, can be applied to concentrate potential viral particles as a means of enhancing sensitivity. Where used, the efficacy of these methods for a range of viral particles needs to be demonstrated.

10.2.4.3.3 Database

The nucleic acid- and proteomic-based methods all rely on the arguably reasonable assumption that the next unknown virus will share at least some sequence similarities with known viruses for which sequences exist in the accessible databases. Coverage of sequences for a given virus, level of annotation, and curation vary in the public databases (EMBL in Europe, GenBank in USA, DDBJ in Japan). Curated reference genome databases (subsets of the public databases) provide reliable benchmarks, but do not necessarily represent the genetic diversity that might be available from partial sequences present in public databases. Private databases might be held within organizations that develop technologies, perform epidemiological surveys, or provide testing services. Such private databases might include data from public databases. Importantly, one must keep in mind that databases are not static because new sequences are added on an ongoing basis. One dilemma that the scientific and regulated testing community will need to address is whether or how to incorporate novel contigs assembled from next generation sequencing data, without viral isolation, into databases.

These critical characteristics of databases present a significant challenge not only to technology developers but also consumers of services based on information in the databases, and to regulators evaluating the data generated. The dynamic nature of databases makes PCR primer/probe design, array probe design, and any sequence searches subject to the version of database used at that point in time. Constant updating also presents a challenge for validation of assays and the frequency and timing of revalidation.

To the extent that private databases are generated and used for development of or application to adventitious agent detection technologies, there will be a need to scientifically validate them, and establish versioning and change control mechanisms for routine updates. Of course, not only must the databases be kept current, but methods based on them (for instance probes, PCR primers) must be re-evaluated to assure that they are current. When updates to databases render previous assays or searches out of date, users of the information may need to be informed of the update and the implications of changes. Users of these services might need to develop policies and procedures to determine when and how reanalysis is performed. Consequences of retesting could range from expenditures of time and resources investigating signals that turn out not to be true risks (and possible clinical or market actions in the meantime), to detection and identification of previously unrecognized infectious contaminations that tip the risk/benefit balance of the vaccine. Reanalysis of released materials goes against current quality assurance principles and so policy consideration needs to be given to this dilemma.
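
One simple, generic element of such versioning and change control (a sketch of one possible practice, not a regulatory expectation) is to record a content fingerprint and timestamp for the exact database file used in each analysis, so that any result can later be tied to, and if necessary re-evaluated against, the database version in force at the time. The file name below is hypothetical.

```python
import datetime
import hashlib
import json

def record_database_version(fasta_path: str, label: str) -> dict:
    """Compute a SHA-256 fingerprint of a reference database file and return a small
    record suitable for inclusion in an analysis report or audit trail."""
    digest = hashlib.sha256()
    with open(fasta_path, "rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return {
        "database_label": label,                      # e.g., an internal version name
        "sha256": digest.hexdigest(),                 # exact content fingerprint
        "recorded_utc": datetime.datetime.utcnow().isoformat(),
    }

# Hypothetical file; in practice this would be the curated viral reference set in use
print(json.dumps(record_database_version("viral_refs_v3.fasta", "viral_refs_v3"), indent=2))
```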

10.2.4.3.4 Sensitivity, False Positives, and False Negatives

Sensitivity viewed from the perspective of the readout alone might be described, for example, in copies per reaction for PCR- or hybridization-based assays. However, for biopharmaceutical applications, sensitivity needs to be interpreted in the context of the test article, which could be a volume of fluids or whole culture, a pellet of cells from a culture, or a number of doses of the final product. Translating sensitivity back to the test article helps establish a bridge to the conventional methods. But determining what measure of sensitivity to apply for novel methods is not simple. Importantly, we must acknowledge that the readouts are not directly equivalent between conventional methods and novel, especially molecular, methods. The following issues complicate equivalency arguments and influence how potentially positive and negative signals must be interpreted:

  • not all viral genome copies are necessarily associated with viral particles, particularly in materials that have been subjected to inactivation or sterilization procedures

  • not all viral particles contain viral genomes, although those that do not would not be infectious (but could complicate protein-based detection methods)

  • not all viral particles with genomes are necessarily infectious

  • some, but not all, viral genomes (if complete or largely so) in the absence of viral particles can be infectious or potentially oncogenic

  • transcription of limited sets of viral genes does not necessarily reflect productive infection, although this signal might indicate an abortive infection, which could be of regulatory concern

  • some reported viral sequences in public databases also contain nonviral sequences

It might appear that molecular methods are fraught with potential traps. However, this is not unique. Similar complications exist with conventional methods:

  • infectivity will likely vary with cell substrate, method of cultivation, and even method and route of inoculation in vivo, owing to varying tropism

  • an infectious unit is not necessarily equivalent to a single intact viral particle with an intact genome

  • viruses adapted to a cultivation system (for example, the manufacturing culture) could be easier to detect in cell culture-based infectivity assays than those present in an animal-derived raw material, which have not been exposed to in vitro cultivation

  • infectivity in vitro might not reflect infectivity in vivo (either in assays, or as risk to the human receiving the medicine)

  • infectivity may not equate to pathogenicity in the in vivo test systems or in human recipients of contaminated biologicals and may not equate to cytopathic effect in the tissue culture test systems used

Clearly, a complete and systematic validation of the sensitivity of conventional methods for all potential culture-adapted and wild viruses simply cannot be done. Rather, judgments must be made for what is reasonable, practical, and represents scientific best practice to assure biosafety. For instance, the sensitivity of the novel methods can be characterized in their native units, such as genome copies, using appropriate spike controls as summarized in Fig. 10.2.

Fig. 10.2 Viral target determines the appropriate controls

Another approach might compare the earliest time of detection, or the lowest detectable concentration, between novel and conventional detection systems following infection of production or detection cells with panels of viruses of known infectious titer.

Thus, we argue that a reasonable starting point for establishing the sensitivity of novel methods can be defined according to a few simple principles (see Fig. 10.2):

  • If the novel method purports to detect viral nucleic acids from viral particles, then the effectiveness of the entire procedure must be demonstrated, starting with virus spikes representing both RNA and DNA genomes and various particle structures.

  • If the novel method purports to detect proviral or latent viral DNA from preparations of total cellular DNA, then the effectiveness of the procedure for detecting levels of spiked viral or proviral nucleic acids must be demonstrated. Whether the spikes should be in the same form as the DNA intended to be detected (e.g., integrated into cellular DNA in the case of proviruses) needs consideration.

  • If the novel method purports to detect viral mRNA (transcripts), then the minimum detectable level of spiked RNA sequences must be demonstrated, bearing in mind potential bias that could be introduced by some mRNA purification methods.

  • Detection of an absolute physical attribute of a virus (like a sequence) can be related to a relative measurement of potential infectious virus or virus risk by considering the totality of data—breadth and depth of sequence detected, identity, pattern of detection in the various controls (such as the corresponding medium or time-zero samples), and evidence of amplification when inoculated into uninfected cultures, for instance. Unfortunately, there is no simple relationship applicable to all viruses, or even all preparations of a single virus, that allows us to say that a certain amount of gene copies equates to a certain amount of infectious virus or virus risk. While one could argue that the worst case assumption is one gene copy per infectious unit, this is not the reality for most viruses.

Comparison of novel and conventional methods might be approached in the following ways, or by variations on them (a minimal computational sketch follows the list):

  • Spike relevant test article matrices with infectious virus standards and determine the lowest concentrations (i.e., highest dilutions) that can be detected in either production or detection assay cell systems. Virus stocks calibrated by both infectivity and genomic methods would be useful, as they help define comparability from batch to batch of the reference standard, and they help evaluate the molecular method in units relevant to the performance of those methods.

  • Sample infected cultures through a timecourse, or use mixtures of infected and uninfected cultures resulting in differing ratios of infectious virus to background material, and determine the earliest timepoints and/or highest dilutions at which the adventitious agent is reliably detected in each method. This approach does not necessarily require calibration of the viral stock by both methods, although that calibration would still be useful.
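
As a minimal illustration of the first bullet, the sketch below (with invented replicate numbers and spike levels, not data from any actual study) takes hit rates from replicate spike-recovery experiments at a series of virus concentrations and reports the lowest spike level detected in at least 95 % of replicates, one common way of expressing a limit of detection.

```python
def limit_of_detection(results, required_hit_rate=0.95):
    """results: dict mapping spike level (e.g., genome copies or TCID50 per mL of test
    article matrix) to a list of booleans, one per replicate, True if the spike was
    detected. Returns the lowest level whose observed hit rate meets `required_hit_rate`,
    or None if no tested level qualifies."""
    qualifying = [
        level for level, hits in results.items()
        if sum(hits) / len(hits) >= required_hit_rate
    ]
    return min(qualifying) if qualifying else None

# Hypothetical spike-recovery data: 20 replicates per level, levels in copies per mL
spike_results = {
    10:     [True] * 6 + [False] * 14,
    100:    [True] * 17 + [False] * 3,
    1000:   [True] * 19 + [False] * 1,
    10000:  [True] * 20,
}
print(limit_of_detection(spike_results))  # -> 1000 under this illustrative data set
```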

Closely related to method sensitivity is the concept of a false negative result. Once the limit of detection for a method is defined, a false negative result is one for which a true contaminating agent was present at levels that should have been detectable, but went undetected. False negative results suggest a systematic error that prevents the detection of agents that could be present. Extra controls may be necessary for some time to establish reliability of all steps in the methods across changes in analysts, reagents, kits, test article lots, as well as database and software versions (the same could be accomplished through stringent change control procedures). Of course, the objective of these controls is to demonstrate lack of inhibition of spike recovery at or near the limit of detection of the method.

False positive signals can lead to unwarranted effort in follow-up investigations. There are at least three types of potential “false positives”: those due to lack of specificity, those due to contaminated reagents but unrelated to the test article, and those that otherwise do not support a compelling assertion of the presence of virus. The last of these is considered further in Sect. “Evaluating Positive Signals.” A failure of specificity can result in a signal being interpreted to represent a specific virus or virus sequence when in fact it does not. Such signals can occur in the new detection systems by several mechanisms, and each of these needs to be controlled for the systems to be reliable. Targeted primers or probes may not be designed with the necessary specificity, or reaction conditions that affect specific annealing might not be well controlled. There may be cellular analogs to some viral sequences. The accessions used in defining primers and probes, or in evaluating sequence reads, may themselves be contaminated with cellular or vector sequences, especially if partial viral sequences or noncurated full genomes are used. There may also be considerable similarity among accessions in some sequences, which can result in signal for many accessions that do not represent the best identification. Microarray and PCR/mass spectrometry software incorporate complex algorithms to determine how much signal (depth and breadth, so to speak) is needed to assert a compelling positive detection. Such algorithms are still in development for sequencing-based systems. Likely, as the ability to assemble short sequence reads into longer contigs improves, the confidence in positive hits and best hits will also improve. Reagents themselves can contribute signals due to residual nucleic acids from their preparation, notably even the silica used in extraction kits (Smuts et al. 2014). But inevitably, some positive signals cannot be easily ascribed to errors, lack of specificity, or reagents. We differentiate these kinds of positive signals and explore some principles for their further evaluation in Sect. “Evaluating Positive Signals” below.

10.2.4.3.5 Evaluating Positive Signals

Novel methods are susceptible to false positive signals in some interesting ways, beyond ordinary cross-contaminations. As a result, study design may be very important for interpretation of signals from molecular-based novel methods. Understanding the landscape of typical contaminants will help discriminate unusual ones; one means of doing this is to include various control samples, among them several negative controls. Understanding the manufacturing process for the vaccine and its various inputs is also critical to evaluating signals, for instance where process interventions fully inactivate viruses but do not necessarily remove the viral sequences. Interpreting signals across multiple assays (for instance, total RNA as well as viral particle preparations) or even multiple samples can also help reveal viral amplification. For instance, an increase in viral signal with time in culture could strongly suggest productive infection. Positive results in transcriptome studies would be suggestive of infection and gene expression, even if not direct evidence of viral replication (especially in the case of latency transcripts). Mass balance estimates based on input and output copies can be useful for manufacturing cultures exposed to viral sequences and/or particles via raw materials of animal origin.

The observation of porcine circovirus sequences in a rotavirus vaccine by next generation sequencing, and their confirmation by microarray, were strongly suggestive of infection, since there was considerable depth as well as complete breadth of viral genome coverage. But these methods, as applied, did not completely prove the presence of infectious virus in the vaccine. Proof was gained by evaluating predecessor materials and process inputs, and by establishing assays for infectious porcine circovirus (http://www.fda.gov/AdvisoryCommittees/CommitteesMeetingMaterials/BloodVaccinesandOtherBiologics/VaccinesandRelatedBiologicalProductsAdvisoryCommittee/ucm197728.htm Accessed 13 Feb 2014). Recently, the World Health Organization reviewed procedures by which regulatory agencies should evaluate positive signals in molecular (or any new) assays for adventitious viruses in biological products. The recommendations were adopted by the WHO Expert Committee on Biological Standardization in October 2014 and are anticipated to be published in 2015.

10.2.4.4 Implementing Novel Readouts

This section addresses the ways in which novel readouts might be incorporated into quality control testing for adventitious agents, recognizing that many of these methods are already being used as part of viral risk assessment or in investigations. The options are relatively simple: readouts can be applied directly to samples of interest without opportunity for biological amplification in production cells or analytical indicator cells or animals, or they may be applied after these opportunities for biological amplification. Table 10.2 illustrates the potential applications of novel methods in relation to conventional methods, and the potentially broader utility of the novel methods (left side of Table 10.2). Importantly, the various readouts do not necessarily reflect the same biological properties, and thus the same relevance. Detection of nucleic acids may reflect the presence of an infectious virus, or merely remnants of inactivated ones. Thus, the biological relevance of particular readouts is determined in large part by how, and to what, the novel detection method is applied.

Table 10.2 Sample and readout compatibility

We will refer to direct tests as those that are applied independently, without a biological assay culture system, and hybrid tests as those with an initial biological amplification step. The suitability of any given readout, especially for direct testing, depends on sensitivity, sample preparation, and susceptibility to interference and false positives. For instance, immunofluorescence is not applied as a direct test but rather only after inoculation of cell cultures, owing to the need to biologically amplify and spatially concentrate signal for detection by microscopic examination. Conceptually, hemagglutination could be applied directly to test articles such as liquid raw materials without previous biological amplification in cells or animals, but to our knowledge it is not, undoubtedly owing to inherent low specificity, interference, and/or insensitivity.

Adoption of a novel readout as a direct test to replace any conventional testing will likely require a very strong case for suitability (equivalent or better than existing methods, or at least arguably providing an acceptable safety margin where direct equivalency is not readily interpretable). Adoption of a novel readout as part of a hybrid assay (after biological amplification in analytical indicator cells or animals) will likely require a very strong business case to justify the added expense in addition to the scientific case for comparability (equivalent or better than existing methods).

10.2.4.4.1 Direct Testing with the Novel Readout

Direct testing of raw materials, media, manufacturing cultures, or fluids presents challenges of biological relevance and sensitivity. Detection of signal by most novel readouts does not necessarily prove the presence of an infectious or relevant adventitious agent, but rather would be the starting point of an investigation into the relevance of the signal. Given the high probability of detecting signal(s) of animal viruses in animal-derived raw materials even after they have been subjected to inactivation procedures such as gamma irradiation, quality control or release tests representing biological function might seem most appropriate. However, direct testing with novel readouts could increase understanding of the landscape of potential agents to be considered as potential safety risks, and for which suitable assays for biological function might be needed. Few reports exist in the scientific literature of systematic surveys of animal-derived raw materials by the novel readouts for potential agents of concern, though it is likely that companies and technology developers are privately pursuing this objective.

Additionally, anything less than 100 % sampling and testing of a raw material is inevitably insufficient to assert that the material is absolutely free of detectable contaminants. Current testing paradigms rely on verification tests of animal-derived components coupled with validated processing of those components to reduce risk. It is also likely that improved sample preparation workflows will be needed to achieve meaningful levels of sensitivity for direct tests by the novel readouts. But most compelling is the increased breadth of detection that the novel readouts promise compared with the conventional tests.
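
The sampling limitation can be expressed quantitatively with a standard Poisson argument; the sketch below (with illustrative numbers only) gives the probability that a randomly distributed contaminant at a given concentration is entirely absent from the volume actually tested, which is why validated inactivation or clearance of the component remains essential.

```python
import math

def probability_of_missing(concentration_per_ml: float, tested_volume_ml: float) -> float:
    """Assuming contaminating particles are randomly (Poisson) distributed in the bulk,
    the probability that the tested aliquot contains none of them is exp(-c * v)."""
    return math.exp(-concentration_per_ml * tested_volume_ml)

# Illustrative: a contaminant present at 0.01 infectious units/mL of a raw material,
# with 10, 100, or 500 mL of the lot actually sampled for testing
for volume in (10, 100, 500):
    print(volume, "mL tested -> P(miss) =", round(probability_of_missing(0.01, volume), 3))
```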

Direct testing with the novel readouts could also be used to support in-process decision-making or to replace conventional tests on production cultures and harvests. The wider breadth of detection would arguably provide greater assurance that unexpected agents have not propagated in any given manufacturing culture. Appropriate controls, such as uninoculated complete medium or a “time zero” culture sample, would be essential to demonstrate that detected signals actually increase during the culture step (and the signals should have been shown previously to be undetectable in the cell substrate). We address additional nuances of the sensitivity of such tests in Sect. 10.3, in the context of the viral safety margin.
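
A minimal sketch of how such controls might feed into interpretation follows (the fold-change threshold and copy numbers are hypothetical, and any real decision rule would need formal qualification): the harvest signal is compared against the time-zero and medium-only controls, and the culture is flagged only if the signal has increased well beyond what carry-over of input nucleic acid could explain.

```python
def flag_amplification(time_zero_copies, harvest_copies, medium_control_copies,
                       min_fold_increase=100.0):
    """Return True if the harvest signal exceeds the higher of the two controls by at
    least `min_fold_increase`, consistent with replication during the culture step
    rather than carry-over of residual nucleic acid from the inputs."""
    baseline = max(time_zero_copies, medium_control_copies, 1.0)  # avoid divide-by-zero
    return harvest_copies / baseline >= min_fold_increase

# Hypothetical qPCR-style results in genome copies per mL of culture fluid
print(flag_amplification(time_zero_copies=200, harvest_copies=5e6,
                         medium_control_copies=150))  # True: ~25,000-fold increase
print(flag_amplification(time_zero_copies=200, harvest_copies=3e3,
                         medium_control_copies=150))  # False: within carry-over range
```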

10.2.4.4.2 Hybrid Test Using Novel Readout

Novel readouts could be effectively applied in hybrid assays, just like conventional readouts. As used here, hybrid assays are those in which the test article has been exposed to a conventional or nonconventional analytical cell or animal system in which biological amplification of the agent could occur, and a separate nonconventional readout is applied to detect the agent. A conventional cell culture-based assay might be validated to detect “100” infectious units of a virus (whether defined as TCID50 or PFU). But the readouts typically do not actually detect the initial 100 units, but rather the effects of virus growth started with those 100 infectious units. The biological test system facilitates amplification of the virus inoculum, initially undetectable in the conventional readout, to levels of virus or impact on infected cells that are readily detected by the conventional readout. The same would be true of a novel readout. In fact, detection of the increase in genome copies of porcine circovirus 1, which did not generate CPE , was key to its detection in an infectivity assay (presentation by Krause, http://www.fda.gov/AdvisoryCommittees/CommitteesMeetingMaterials/BloodVaccinesandOtherBiologics/VaccinesandRelatedBiologicalProductsAdvisoryCommittee/ucm211828.htm Accessed 13 Feb 2014). Hybrid approaches have also been proposed for mycoplasma testing (Chang et al. 2006; Kong et al. 2007). Thus, novel readouts could be applied to analytical cultures to enhance detectability of viruses and other organisms.

One consideration in validating a hybrid test is that only the novel readout would require validation, since the biological amplification step is performed exactly as in the compendial methods (which are not validated, and arguably cannot truly be validated). This would not necessarily be the case for a nonstandard biological amplification method, but should be a suitable approach for the compendial methods.

The challenge with hybrid assays is that they incur the logistics and cost of both the biological amplification step and the molecular readout. It is perhaps most logical to develop a hybrid test approach only once it is concluded that the direct test approach is inadequate.

In the next section, we consider a rational strategy based on assessment and selection of inputs, design of manufacturing processes, and testing. These steps illuminate how test sensitivity fits into the overarching viral safety margin, and thus how it supports the scientific and business case for selecting a novel testing strategy.

10.3 Principles of Rational and Scientifically Based Testing Strategies

Viral safety assurance is best described as a confidence-building exercise with multiple contributing factors rather than as a quantitative measure of any one component. Nonetheless, selecting testing strategies to assure viral safety inherently requires that choices be made about what to test for (breadth) and how sensitive the tests should be (a combination of inherent method sensitivity, any potential biological amplification, and sample preparation and size). Viral risk assessments, process knowledge (including inputs), and testing methods each inform, and are informed by, the others, resulting in a critical triad on which viral safety is established (see Fig. 10.3).

Fig. 10.3 Contributions to viral safety assurance

A rational strategy to assure biosafety therefore incorporates these three elements: risk assessment, process knowledge (including inputs), and the analytical methods used to detect potential agents. In the following sections, we discuss risk assessment and process choices as a means of building context around the viral safety margin, and how the analytics contribute to it. We conclude by reflecting on the regulatory and business cases for adopting new analytical approaches.

10.3.1 Elements of Risk Assessment and How Process Choices Mitigate Viral Safety Risk

Although there is a tendency to want to take a checklist approach to testing for biosafety in vaccines, and compendia and regulations support this inclination, such an approach is neither suitable nor scientifically supportable. Which agents may be adventitious in a particular vaccine production system or platform technology depends entirely on a large number of factors, and these factors must be considered rationally in deciding testing strategies. In addition, particularly in an era of Quality by Design (QbD), an assessment of risks should be undertaken to guide this decision-making process. Factors that should weigh into the considerations of testing strategies include, but are not limited to, those found in Table 10.3.

Table 10.3 Factors relevant to choosing a viral safety testing strategy

In addition to the factors in Table 10.3, the risk assessment process needs to take into account the likelihood that an agent is present, the likelihood of detecting it if present, and the severity of the impact if it goes undetected. Highly pathogenic viruses, even if unlikely to be present, could be of higher concern than nonpathogenic agents, even if the latter are likely to be present. Furthermore, the detectability of a particular contaminant must also be considered, with agents unlikely to be readily detected potentially being of higher concern than those that are readily detectable. This aspect is discussed further in the following section.

10.3.2 Testing in the Context of Different Risk Scenarios

We have already alluded to the gap in the breadth of detection of conventional methods: agents that were not expected, and for which tests were not designed, could be present in cultures. The resolution to this gap is to incorporate assays with broader detection capability, either to replace or to supplement existing methods.

We have also reflected on the existing expectations for sensitivity, and the relative lack of an enduring, well-described rationale for an acceptable safety margin for viral vaccines. By viral safety margin, we mean the excess capacity of a process for assuring that any potential contaminant is removed, inactivated, or simply not present beyond what was present originally (if any). Considering viral clearance validation as an example: if an endogenous viral contaminant is present at such a level that virtually every dose of vaccine produced could contain particles, then viral clearance would be necessary to reduce the virus particles to a level at which, statistically, only a vanishingly small number of doses in a whole lot could potentially be contaminated. Often the regulatory expectation is that this margin of safety reduces the probability of a dose containing a particle to less than one in a million, as in the illustration provided in ICH Q5A (ICH 1999). This excess capacity of a process to clear viruses that might be present (e.g., residual live virus in the case of inactivated viral vaccines, or endogenous viral particles in the case of recombinant vaccines made in rodent cells) is the viral safety margin. Viral clearance validation, or validation of inactivation procedures for inactivated vaccines, is intended to establish such viral safety margins. We find this concept useful in thinking about the sensitivity needed in viral detection methods to ensure freedom from adventitious agents, and in considering whether conventional or novel methods can attain it, as discussed below.

Models can help build some of the necessary quantitative context, and would take into account the load from materials and even previous steps, ability to replicate in the manufacturing system, ability to be cleared or inactivated, and proportion of the culture used to make a final dose. Output of a model could include doses of vaccine per infectious unit of the potential virus with specified growth characteristics. An intermediate value in the calculation, infectious units/mL of culture, could be compared with the known sensitivities of the conventional detection method as well as the novel method, allowing some visibility into whether the limits of detection or breadth of detection of either analytical method really provides a meaningful viral safety limit compared with that contributed by the manufacturing process and controls. The challenges with such a modeling approach become obvious—many values are required for which solid experimental data may not be available. Clearly, recalibration would be necessary, requiring another assumption, to convert from infectious units to viral genome copy equivalents to enable these comparisons. On the other hand, simply wrestling with these gaps in knowledge could lead to hypotheses for experiments or alternative approaches to support a more scientific rationale for the viral safety margin, as well as inform the risk assessment, thus applying the principles of QbD . Others have offered approaches to viral risk assessments that incorporate some of these features (Gregerson 2008a, b; Tagmyer 2012), although they do not focus on the quantitative evaluation of alternative detection methods or suggest which methods might be best to use as an outcome of the assessment process.

Some quantitative aspects of such a model, e.g., the detectability of an adventitious agent introduced by a raw material at the cell culture stage of production, would require actual data or valid assumptions. These would include, but not necessarily be limited to: the volume of the raw material and of the culture to which it is added; the concentration (titer) of the agent (if nothing is detected by testing the bulk raw material, then assume the LOD of the detection method); the log reduction value (LRV), based on validation data, of any inactivation method applied to the raw material before it is introduced into the cell culture; the agent's ability to replicate in the culture and its burst size or amplification factor; the length of the culture period (how many replication cycles could occur) and the length of the agent's replication cycle; the LRV afforded by any handling or storage of specimens taken for adventitious agent testing of the culture, or by downstream processing before specimen collection occurs; the volume of culture that results in a dose of vaccine; the LODs of the detection methods (conventional and novel); the viral safety margin one is trying to achieve; and so forth.
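A minimal sketch of how a few of these inputs might be strung together follows; every parameter is an assumption made up for illustration, not a value from any real raw material, cell substrate, or assay:

```python
# Illustrative forward model of an agent introduced by a raw material into a cell culture.
# Every parameter is a made-up assumption; the structure, not the numbers, is the point.
raw_material_titer_iu_per_ml = 1e2   # taken as the raw-material assay LOD, since nothing was detected
raw_material_volume_ml = 50.0        # volume of raw material added to the culture
raw_material_lrv = 4.0               # log10 reduction from an inactivation step applied to the raw material
culture_volume_ml = 2e5              # bioreactor working volume
replication_cycles = 6               # culture duration divided by the agent's replication-cycle length
amplification_per_cycle = 10.0       # assumed burst/amplification factor per cycle
culture_ml_per_dose = 0.5            # volume of culture that ends up in one dose

starting_iu = raw_material_titer_iu_per_ml * raw_material_volume_ml / 10**raw_material_lrv
harvest_iu = starting_iu * amplification_per_cycle**replication_cycles
harvest_titer_iu_per_ml = harvest_iu / culture_volume_ml
iu_per_dose_before_clearance = harvest_titer_iu_per_ml * culture_ml_per_dose

print(f"Modeled harvest titer: {harvest_titer_iu_per_ml:.2e} IU/mL")
print(f"Modeled load per dose before downstream clearance: {iu_per_dose_before_clearance:.2e} IU")
```

The modeled harvest titer can then be compared with assay LODs, as in the earlier sketch, and the per-dose load can be weighed against the clearance that the downstream process has been validated to provide.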

We are confident that mathematical models will demonstrate that testing alone does not establish the viral safety margin, and that a broader range of detectability is far more important in assuring the absence of a catastrophic breach in the manufacturing process than rigid adherence to arbitrary and nonstandardized approaches to sensitivity based on conventional tests. Only in this way can a rational and scientifically based strategy emerge, rather than one that simply adds each new test or technology to an ever-increasing list.

It is problematic to continue adding to the ever-growing list of tests for a number of reasons. The amount of sample needed for QC testing and reserve specimens would keep increasing, and sampling would often become more complicated, because different tests require different materials (e.g., cells, fluids) to be sampled, processed, handled, and stored. The approach becomes fiscally unsustainable when manufacturing must be scaled up simply to accommodate QC samples (as is sometimes the case for Phase 1 clinical lots), in addition to the costs of developing, validating, and conducting all of the required testing. The more tests that are run, the more likely it is that a false positive result will be observed; if each test carries a 5 % false-positive rate (e.g., acceptance criteria set at a 95 % confidence level), then across a panel of tests the chance of at least one spurious positive on a truly clean lot rises quickly, as the brief illustration below shows. Finally, and importantly, discordant results from tests that measure the same parameter (i.e., detection of viral contaminants) could become a QA/QC quagmire. Manufacturers would need to prospectively determine an algorithm not only for validity criteria within a single test, but also for which test to “believe,” or for what testing strategy would be undertaken to resolve discordant results between tests of the same measure. And regulators, as well as the public, could lose confidence upon seeing such discordant results, even when they are carefully investigated and adjudicated by the manufacturer using such prospectively defined algorithms. Thus, we propose that a new paradigm is needed, based on a broader platform for viral safety assurance that incorporates viral risk assessment, process and input controls, as well as improved analytics.
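The multiplicity effect can be made concrete with a hypothetical panel; the 5 % per-test false-positive rate below is an assumption used only for illustration:

```python
# Probability of at least one false positive across a panel of independent tests
# on a truly clean lot, assuming each test has a 5 % false-positive rate (illustrative).
false_positive_rate = 0.05
for n_tests in (1, 5, 10, 20):
    p_at_least_one = 1 - (1 - false_positive_rate) ** n_tests
    print(f"{n_tests:2d} tests -> P(at least one false positive) = {p_at_least_one:.0%}")
```

With 20 such tests, roughly two of every three clean lots would be expected to trigger at least one investigation, which is precisely the QA/QC burden described above.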

10.3.3 Making the Regulatory and Business Case for New Analytical Approaches

Both the regulatory and the business case for adopting novel methods rest, arguably, on the perceived gap in safety assurance. If the gap is perceived as large, then the investment in resources for developing, validating, implementing, and running novel methods is relatively easy to defend.

Currently, a gap in the viral coverage of existing testing is recognized, particularly in the case of novel cell substrates and biologically derived raw materials. Emerging methods have the potential not only to close the gap for unexpected viruses, but arguably also to provide coverage for the expected viruses. The challenge for manufacturers, contract testing laboratories, and regulatory bodies will be to assess the credibility of negative results (that is, how much assurance of absence of contamination they really provide), since positive results will presumably be evaluated with orthogonal confirmatory approaches. Studies designed to allow careful interpretation of negative results necessarily include sets of controls that increase the cost, sometimes the complexity of sampling, and most assuredly the complexity of data interpretation and of establishing specifications. Specifications themselves may have to allow certain levels of signal that can, with an accepted level of confidence, be interpreted as noninformative or as noncompelling indicators of contamination.

Biopharmaceutical manufacturers that use contract services for novel methods face the challenge of potentially lacking in-house technical expertise on the methods while still owning the liability for oversights or for incomplete or incorrect analysis. Vaccine developers also incur risks to development program timelines while investigating false positive results, the risk of unwarranted comfort in the face of false negative results, and perhaps uneven regulatory expectations (e.g., if one regulatory region embraces particular new methods more readily or more rapidly than others, as has been seen with PCR for mycoplasma detection). Importantly, as both the technology landscape and the perceived risks evolve over time, biopharmaceutical manufacturers may be faced with significant “change control” issues, for instance those presented by updated databases or improved detection limits. The business risks for biopharmaceutical projects in early development are also quantitatively different from those for legacy products with strong safety records (which often supply essential public health needs, sometimes as sole sources).

Businesses that develop and provide these services face the challenges of a rapidly evolving technology landscape, of acceptance and understanding by customers and regulators, of sustaining focus over timescales relevant to biopharmaceutical development and licensure, and, undoubtedly, of intense cost pressure.

Businesses and regulatory agencies face a shared dilemma when incorporating novel approaches for legacy products: neither the methods nor the landscape of potential signals is yet fully understood. Prudent review of method development, surveys of materials, and robust courses of action for investigating potential positive hits in NAT-based tests will be needed to avoid the risk that the public might lose access to, or confidence in, critical disease-preventing or disease-treating vaccines.

10.4 Summary

In summary, this chapter reviews the principles by which the current, routine tests detect adventitious agents, and how novel and emerging methods differ in their detection principles. These differences may permit novel methods to supplement, refine, or replace the routine methods. We have suggested a framework for risk assessment to assure biosafety in vaccines, and proposed quantitative modeling to help crystallize thinking about the place of testing, whether routine or novel, in this assurance. We assert that testing for adventitious agents should not be the sole basis on which product biosafety is assured. Appropriate sourcing and quality control of raw and starting materials; adherence to the principles of Good Manufacturing Practices, including environmental and personnel monitoring and process validation; and, finally, testing as verification together constitute the package needed for maximal assurance of biosafety. Thus, a pathway forward exists to a new paradigm for adventitious agent testing, in which a broader array of potential adventitious agents might be detectable, with adequate sensitivity to verify that there has been no catastrophic breach, in the context of the overall process, its design, and adherence to cGMP. Furthermore, it is our hope that we may be able to implement the 3 Rs policy to reduce, replace, and/or refine the use of animals in product safety testing while at the same time providing greater assurance of the biosafety of vaccines.