Discovery of Variants Underlying Host Susceptibility to Virus Infection Using Whole-Exome Sequencing
The clinical course of any viral infection greatly differs in individuals. This variation results from various viral, host, and environmental factors. The identification of host genetic factors influencing inter-individual variation in susceptibility to several pathogenic viruses has tremendously increased our understanding of the mechanisms and pathways required for immunity. Next-generation sequencing of whole exomes represents a powerful tool in biomedical research. In this chapter, we briefly introduce whole-exome sequencing in the context of genetic approaches to identify host susceptibility genes to viral infections. We then describe general aspects of the workflow for whole-exome sequence analysis together with the tools and online resources that can be used to identify and annotate variant calls, and then prioritize them for their potential association to phenotypes of interest.
Key words: Host genetics, Antiviral immunity, Exome, Whole-exome sequencing, Sequence alignment, Read depth, Variant calling, Variant annotation, Gene annotation
1.1 Value and Genetic Approaches to Identify Host Susceptibility Genes to Virus Infection
A characteristic feature of human infections, including virus infections, is that only a proportion of exposed individuals develop clinical disease. Even during the 1918 influenza pandemic, the more recent human immunodeficiency virus (HIV) epidemic, or the severe acute respiratory syndrome coronavirus (SARS-CoV) outbreak, only a proportion of infected individuals succumbed to infection [1, 2]. Conversely, widespread pathogens that are innocuous for most of the population, such as herpes simplex virus type 1 (HSV-1), can be fatal to a very few. It is now well established that host genetic variation is an important component of the varied onset, severity, and outcome of infectious disease. Such data have provided important insights into the pathogenesis of virus infections, shedding light on the antiviral mechanisms required for host defense.
Definition of terms (in alphabetic order)
Haplotype
A set of alleles that commonly segregate together, defined as a region of extended linkage disequilibrium, which in humans is often up to 100 kb in length.
Indel
An insertion or deletion in a genome; the second most common type of variation after SNPs.
Minor allele frequency (MAF)
The frequency at which the second most common allele occurs in a population.
Penetrance
The proportion of individuals with a mutation or risk variant who have the disease. Penetrance is incomplete when some individuals carrying pathogenic mutations manifest no disease phenotype.
Rare variant
An allele present with MAF <1% (PMID: 19293820).
SNP
Single nucleotide polymorphism. Variation of a single nucleotide base, with the minor allele present in at least 1% of alleles in the population.
SNV
Single nucleotide variant. Minor allele frequency undefined.
The sequencing of the human genome and the international HapMap project [9, 10, 11] led the way to Genome Wide Association Studies (GWAS). This approach does not require a prior hypothesis. Using large, well-characterized cohorts of cases and controls, the whole genome is interrogated with a large set of genetic variants to detect possible associations between a variant and the disease trait. One of the most remarkable successes of GWAS in infectious diseases was the identification of IFNL3 variants associated with the clearance of hepatitis C virus (HCV) following treatment (ribavirin and IFN-α) [13, 14, 15] or with spontaneous HCV clearance [16, 17], highlighting the importance of IFN-λ3 signaling in innate control of HCV.
GWAS applied to other viral infections have confirmed a major role for HLA genes in host susceptibility to HIV, dengue, and hepatitis B viruses and have identified several new risk loci [19, 20, 21]. However, except for the HCV loci mentioned above, non-HLA loci often span numerous linked genes and have modest effect sizes, complicating the identification of the causal variant. Interestingly, these loci seem to behave in a pathogen-specific fashion, possibly delineating host-pathogen interactions that are specific to a given virus infection.
1.2 Power and Constraints of Whole-Exome Sequencing
In the past few years, the advent of next-generation sequencing (NGS) technologies, such as whole-exome sequencing (WES), has revolutionized the biomedical field, including the discovery of many new mutations in patients with unexplained infections often seen at the immunodeficiency clinic [22, 23, 24]. WES provides a one-step, simultaneous interrogation of virtually all exonic and adjacent intronic sequences, which has been remarkably successful both in a diagnostic setting (clinical exome sequencing) and as a discovery tool (research exome sequencing) [25, 26].
These studies have been most effective for the discovery of rare, high-penetrance protein-coding variants underlying presumed monogenic disorders. A recent report counted that, of about 300 primary immunodeficiencies characterized at the single-gene level, close to one-third have been identified by NGS in the past 5 years. WES discoveries have provided fresh insights into the mechanisms that control the development, function, and regulation of immune cells during the response to infection (recently reviewed in [26, 28]). Notably, they have highlighted (1) pathways that are required for general protection against infection, generally involving a genetic block in the T/B-lymphocyte differentiation program or resulting in the absence of specific immune cells, and (2) pathways that are required for the response to narrow groups of pathogens, somewhat reminiscent of the infection-specific risk loci mapped by GWAS. An example of the latter was the discovery of compound heterozygous mutations in IRF7 in a child suffering from life-threatening influenza. Each parent was heterozygous for a single mutated allele, indicating autosomal-recessive segregation of the IRF7 deficiency. Detailed biochemical analysis indicated that both alleles were loss-of-function mutations, consistent with the mode of inheritance. Mechanistically, IRF7 deficiency was linked to both a lack of IFN-α production in the patient's plasmacytoid dendritic cells challenged with influenza virus and a lack of intrinsic antiviral immunity in patient-specific fibroblasts and pulmonary epithelial cells derived from induced pluripotent stem cells (iPSC). This study represented the first demonstration of a genetic cause for severe influenza in humans and may well pave the way for the discovery of other influenza susceptibility genes in the IRF7 pathway, akin to the mutations in the TLR3 pathway underlying HSE.
The example above illustrates critical requirements for the successful application of WES, including variant prioritization and variant validation. The study design requires a substantial body of previous knowledge about the phenotype, including its prevalence in the general population and its penetrance, to help in surmising the mode of inheritance [27, 30]. This will dictate the selection of samples (see Note 1). For situations in which there is a single affected case and no family history, sequencing the unaffected parents (as for the IRF7 deficiency) permits efficient discovery of de novo mutations and compound heterozygous genotypes. The availability of multiple families with very similar clinical phenotypes substantially increases the power for gene discovery.
However, prioritization of disease-causing variants by WES remains one of the main challenges due to the sheer number of variants found in individual exomes. The exome has traditionally been defined as the sequence encompassing all exons of protein-coding genes in the genome and covers between 1 and 2% of the genome [31, 32, 33]. Yet this portion houses 85% of the known disease-causing variants [34, 35]. An individual exome typically harbors thousands of variants, relative to a reference genome, that are predicted to lead to nonsynonymous amino acid substitutions, alterations of conserved splice-site residues, or small insertions or deletions. As presented below, various methods exist to identify which variants deleteriously affect the function of individual proteins. However, each genome is thought to harbor about 100 genuine loss-of-function variants, with about 20 genes completely inactivated [36, 37]. Hence, rigorous criteria, including the absence of the candidate variant genotype in individuals without the clinical phenotype together with robust experimental validation, have been proposed to validate disease-causing variants. Whereas study design and experimental approaches need to be developed on a case-by-case basis, below we present the reagents and methodology for the discovery and validation of candidate genetic variants in a typical exome-sequencing pipeline.
In addition to DNA samples from cases, their families, and the appropriate controls, the materials required for WES are a well-annotated reference genome, whole-exome capture DNA libraries, and computing facilities.
2.1 Annotated Reference Genome
The human reference assembly defines a standard upon which other whole-genome studies are based. The latest build of the human reference genome provided by the Genome Reference Consortium comprises ~3 × 10⁹ bases of coding and noncoding sequence. The exome is defined as all the exons of the ~20,000 protein-coding genes in the human genome plus all the exons pertaining to microRNA, small nucleolar RNA, and large intergenic noncoding RNA genes. This information is not static, and projects such as GENCODE and RefSeq continue to provide comprehensive annotation of both protein-coding genes and noncoding transcripts. The latest assembly of the human reference genome (GRCh38) can be accessed via the European Bioinformatics Institute and the Wellcome Trust Sanger Institute (Ensembl) or the University of California Santa Cruz (UCSC) genome browsers.
2.2 Whole-Exome Capture Library
Exome capture essentially consists of fragmenting a DNA sample and hybridizing the fragments to complementary oligonucleotide baits whose sequences have been designed to match exonic regions. After binding to the genomic DNA, these probes are pulled down and the captured fragments are PCR-amplified through the addition of adapters, allowing exonic regions to be selectively sequenced. The most common and efficient strategies are the in-solution capture methods offered by Roche/NimbleGen's SeqCap EZ Human Exome Library and Agilent's SureSelect Human All Exon. Several publications have compared the specificity and sensitivity of these platforms [44, 45, 46]. NimbleGen's kit has the greatest bait density of any of the platforms and uses short (55–105 bp), overlapping baits to cover the target region. This approach has been found to cover the target region efficiently, detect variants sensitively, and maintain a high level of specificity. Indeed, NimbleGen's kit shows fewer off-target reads than other platforms. Importantly, this bait design has been found to show greater genotype sensitivity than the other platforms in difficult-to-sequence regions, such as areas of high GC content. Agilent's kit is the only platform to use RNA probes. The baits are longer than those used in NimbleGen's platform (114–126 bp), and the corresponding target sequences are adjacent to one another rather than overlapping. This design has been found to be good at identifying insertions and deletions (indels), because longer baits can tolerate larger mismatches.
2.3 High-Performance Computing Facility/Network for Data Storage and Maintenance of Pipelines for WES Analysis
Commonly used tools and weblinks for whole-exome sequence data analysis pipeline
Short read mapping
Manipulate NGS data (mark duplicates, merge files)
Variant annotation: (1) Coding effect predictions
Variant annotation: (2) Conservation
Variant annotation: (3) Gene-level
Variant annotation: (4) Integrative
3.1 Raw Data Quality Control (QC) and Preprocessing
Description of commonly used file formats in WES workflows
FASTQ file (.fastq)
Text file that stores nucleotide sequences and quality scores for downstream analysis. A FASTQ record typically consists of four lines: (1) a sequence identifier beginning with "@"; (2) the nucleotide sequence of the read (ACGT); (3) a separator line beginning with "+"; (4) the quality scores of the corresponding read, encoded as ASCII characters.
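The four-line record layout and the ASCII quality encoding described above can be sketched in a few lines of Python; the record below is a fabricated example using the common Phred+33 (Illumina 1.8+) convention:

```python
# Minimal sketch: parse one FASTQ record and decode its Phred+33 quality
# scores. The read name and sequence below are made up for illustration.
record = [
    "@read001",            # (1) sequence identifier, begins with "@"
    "GATTACAGATTACA",      # (2) nucleotide sequence of the read
    "+",                   # (3) separator line, begins with "+"
    "IIIIIHHHHGGFF#",      # (4) ASCII-encoded per-base quality scores
]

name, seq, _, qual = record
# Phred+33 encoding: quality = ASCII code - 33
phred = [ord(c) - 33 for c in qual]
# Probability that a base call is wrong: P = 10^(-Q/10)
error_probs = [10 ** (-q / 10) for q in phred]

print(phred[0])    # 'I' -> 40, i.e., a 1-in-10,000 error probability
print(phred[-1])   # '#' -> 2, a very low-confidence call
```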
Sequence alignment/map (SAM) file (.sam)
Text file that stores the alignment information of short reads against the reference genome. A SAM file contains header lines beginning with "@" followed by one line per sequence alignment.
Binary alignment/map (BAM) file (.bam)
Binary file (stored in a format that is only computer-readable) containing the same information as the SAM file, compressed to reduce disk storage space and increase performance.
Browser extensible data (BED) file (.bed)
Tab-delimited text file that consists of several lines each representing a single genomic region, such as an exon. BED files provide the coordinates of those regions including chromosome, start and end positions, and additional fields can be added.
Variant call format (VCF) file (.vcf)
Text file containing meta-information lines (i.e., file format, date, or other information about the overall experiment), a header line naming the columns (chromosome, position, ID, reference allele, alternative allele, quality, filter, info), and then data lines, each containing information about a position in the genome. It is a standardized text file format for representing SNP, indel, and structural variation calls.
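The column layout of a VCF data line can be illustrated with a short parsing sketch; the line below is fabricated, not real patient data:

```python
# Minimal sketch: split one VCF data line into its eight standard columns.
# All values are invented for illustration.
vcf_line = "chr1\t12345\trs0000001\tATCT\tA\t50\tPASS\tDP=100;AF=0.002"

chrom, pos, vid, ref, alt, qual, filt, info = vcf_line.split("\t")
variant = {
    "chrom": chrom,
    "pos": int(pos),
    "id": vid,
    "ref": ref,
    "alt": alt,
    "qual": float(qual),
    "filter": filt,
    # INFO is a semicolon-separated list of key=value pairs
    "info": dict(kv.split("=") for kv in info.split(";")),
}

# A REF allele longer than the ALT allele denotes a deletion
is_deletion = len(ref) > len(alt)
print(variant["pos"], is_deletion)  # 12345 True
```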
3.2 Sequence Alignment Mapping
After raw data QC and preprocessing, the next step is to map the reads to the reference genome. This is arguably the most crucial and most time-consuming operation of most WES analysis pipelines. The computational challenge resides in finding an alignment algorithm that is tolerant to imperfect matches, where genomic variations may occur, while being able to align millions of reads at a reasonable speed. To achieve high speed, most alignment algorithms are based on an effective compression algorithm, the Burrows–Wheeler Transformation (BWT). Many short-read aligners have been developed using this method: Bowtie, BFAST, MOSAIK, and BWA. They vary considerably in speed and accuracy, which is likely to affect the identification of structural variations and influence variant calling. BWA is the most common choice for WES alignment. It allows gapped alignment while using very little memory, performs separate alignments for each read of a pair in multi-threaded execution, and unifies the results in a single mapping file in the Sequence Alignment/Map (SAM) format.
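To give a feel for the transformation underlying these aligners, the following is a naive, illustrative sketch of the BWT itself; production aligners such as BWA build an FM-index over this transform rather than sorting full rotations as done here:

```python
# Illustrative sketch of the Burrows-Wheeler Transformation. This naive
# O(n^2 log n) rotation sort is for teaching only; real aligners use
# suffix arrays and FM-indexes for speed and memory efficiency.
def bwt(text: str, sentinel: str = "$") -> str:
    """Return the BWT of `text`: the last column of its sorted rotations."""
    s = text + sentinel  # sentinel marks the end of the string
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

# The transform groups identical characters together, which is what makes
# the indexed text both compressible and efficiently searchable.
print(bwt("banana"))  # "annb$aa"
```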
3.3 Post-Alignment Processing
To enhance the quality of the alignments for more accurate variant detection, the pipeline carries out three "cleanup" procedures: read duplicate removal, base quality score recalibration (BQSR), and indel realignment. A final step provides important metrics to assess the quality of the data.
3.3.1 Read Duplicate Removal
Many of the reads from massively parallel sequencing instruments are identical, with the same sequence, start site, and orientation, indicating PCR artifacts. These duplicates may bias the estimation of variant allele frequencies, so it is advisable to remove them prior to variant calling. Programs such as the rmdup function from SAMtools or MarkDuplicates from Picard Tools apply fragment-based duplicate identification and assign unique identifiers to each read group, i.e., the set of reads generated from a single run of an instrument. This minimizes experimental noise, reduces the number of false calls, and improves the accuracy of variant discovery.
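The core idea of fragment-based duplicate marking can be sketched as follows; this is a simplification of what Picard MarkDuplicates does (which also considers mate positions and library information), with fabricated read tuples:

```python
# Minimal sketch of duplicate marking: reads sharing chromosome, start
# position, and strand are treated as PCR duplicates, and only the read
# with the highest summed base quality is kept. Data are fabricated.

# (read_name, chrom, start, strand, sum_of_base_qualities)
reads = [
    ("r1", "chr1", 1000, "+", 350),
    ("r2", "chr1", 1000, "+", 410),  # duplicate of r1, better quality
    ("r3", "chr1", 1000, "-", 300),  # other strand: not a duplicate
    ("r4", "chr2", 5000, "+", 390),
]

best = {}
for name, chrom, start, strand, qual in reads:
    key = (chrom, start, strand)
    if key not in best or qual > best[key][4]:
        best[key] = (name, chrom, start, strand, qual)

kept = sorted(r[0] for r in best.values())
print(kept)  # ['r2', 'r3', 'r4']
```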
3.3.2 Indel Re-Alignment
Small insertions or deletions (indels) in coding regions have been strongly associated with human diseases, but accurate indel calling remains difficult [60, 61]. Local realignment around indels is therefore an important step. This process searches for a consensus alignment among all the reads spanning a deletion, an insertion, or both, (1) to improve indel detection sensitivity and accuracy, and (2) to reduce false variant calls due to misalignment of the flanking bases. The alignment is improved by considering the full set of reads in their local context. The HaplotypeCaller program from GATK offers an efficient solution to indel detection by generating a local de novo assembly of the aligned reads prior to indel calling. As presented in Subheading 4, HaplotypeCaller is capable of calling SNVs and indels simultaneously, which improves indel detection while producing more accurate variant calls.
The per-base quality scores (Phred scores), which convey the probability that a called base in a read is the true sequenced base, are often inaccurate and co-vary with features such as sequencing technology, machine cycle, and sequence context. These inaccurate quality scores propagate into faulty SNP discovery. BQSR is a process in which machine-learning tools are applied to model these errors empirically and adjust the quality scores accordingly. One of the most commonly used BQSR programs is BaseRecalibrator from the GATK suite, which takes alignment files as input and calculates a recalibrated quality score for each base to be used in variant calling. Recalibrated scores better reflect the empirical probability of mismatches to the reference genome and thereby provide more accurate quality scores [48, 62].
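The essence of recalibration, comparing reported quality against the empirically observed mismatch rate, can be sketched as below; GATK's BaseRecalibrator models many more covariates (machine cycle, sequence context), and the tallies here are fabricated:

```python
# Minimal sketch of the idea behind BQSR: for each reported quality bin,
# compute the empirical Phred quality from the observed mismatch rate
# against the reference. Tallies are invented for illustration.
import math

# reported quality -> (bases observed, mismatches to reference)
tallies = {30: (1_000_000, 2_000), 40: (1_000_000, 400)}

def empirical_quality(observed: int, mismatches: int) -> float:
    """Phred-scale the observed error rate: Q = -10 * log10(errors/bases)."""
    return -10 * math.log10(mismatches / observed)

for reported, (bases, errs) in tallies.items():
    print(reported, round(empirical_quality(bases, errs), 1))
# The Q30 bin shows an observed error rate of 2e-3, i.e., empirical Q ~27,
# so scores in that bin would be adjusted downward during recalibration.
```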
Biases in sample preparation, sequencing, genomic alignment, and assembly can result in genomic regions lacking coverage (i.e., gaps) or in regions with much higher coverage than theoretically expected. Hence, to evaluate the quality of the data for discovering variants with reasonable confidence, two important metrics are the breadth and the depth of coverage of the target genome. Breadth of coverage denotes the percentage of bases that are sequenced a given number of times. Depth of coverage represents the number of reads that align at a given position and is often quoted as the average raw or aligned read depth. For example, a genome sequencing study may sequence a genome to 50× average depth and achieve 95% breadth of coverage of the reference genome at a minimum depth of ten reads. The flagstat command from SAMtools or DepthOfCoverage from GATK [48, 62] reports the fraction of reads that successfully mapped to the reference, with counts and percentages of mapped and unmapped reads.
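The distinction between the two metrics can be made concrete with a small sketch over fabricated per-base depths for a toy target region:

```python
# Minimal sketch of depth vs. breadth of coverage, computed from
# fabricated per-base read depths over a 10-base target region.
depths = [0, 5, 12, 30, 30, 55, 60, 48, 10, 0]  # one value per base

# Average depth: mean number of reads covering each base
mean_depth = sum(depths) / len(depths)

# Breadth at a minimum depth of 10 reads: fraction of bases covered >= 10x
breadth_10x = sum(d >= 10 for d in depths) / len(depths)

print(mean_depth)   # 25.0
print(breadth_10x)  # 0.7
```

Note how the region can have a respectable average depth while gaps (the zero-depth bases) still reduce the breadth, which is why both metrics are reported.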
3.4 Variant Analysis
Databases of human genetic variation
Weblink and description
Combined annotation dependent depletion database (CADD)
Catalog of precomputed scores for all possible SNVs and small indels of the reference genome and the 1000 Genomes Project, obtained by combining 63 annotations (e.g., SIFT, GERP, others) through a machine-learning framework.
Single nucleotide polymorphism database (dbSNP)
Broad collection of SNPs and Indels submitted by investigators worldwide and curated by NCBI.
Human gene mutation database (HGMD)
A catalog of all published gene lesions responsible for human inherited disease.
Exome aggregation consortium (ExAC)
Catalogue of exome variation in 60,706 individuals, some with adult-onset diseases (type 2 diabetes, schizophrenia); patients presenting severe pediatric diseases have been excluded.
1000 Genomes project
Catalogue of genome variation with at least 1% frequency in the population, based on whole-genome sequencing of 2,504 individuals from 26 populations (including study cohorts for adult-onset diseases).
NHLBI exome sequencing project (ESP6500)
Catalogue of variation within 6,500 exomes from well-phenotyped populations from various projects, e.g., the Severe Asthma Research Project, the Pulmonary Arterial Hypertension population, the Acute Lung Injury cohort, and the Cystic Fibrosis cohort.
3.4.1 Variant Calling
Variant calling entails identifying the sites in a sample that statistically differ from the reference genomic sequence. Single nucleotide polymorphisms (SNPs) and indels are detected where the reads collectively provide evidence of variation (see Note 2). As with alignment tools, several open-source tools are available to identify a high-quality set of variants in WES projects. SAMtools and GATK HaplotypeCaller [48, 62] are widely used in genomic variant analyses. HaplotypeCaller has been found to have high sensitivity for SNP detection and to outperform other pipelines for indels [50, 63]. HaplotypeCaller runs a "reading window" along the reference genome, comparing the reference to the sequenced reads and counting mismatches and indels. These variations from the reference are used as a measure of entropy, or disorder, in the read data. If the level of entropy within the reading window surpasses a cutoff score (the default value can be changed), the window is marked as an active region, which is inspected to generate the plausible haplotypes. HaplotypeCaller then uses a Bayesian statistical model to calculate the probability of each genotype, estimating the accuracy of the call with a Phred-like quality score. The results are reported in a standard Variant Call Format (VCF) file.
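The probabilistic genotype call at a single site can be sketched with a toy likelihood model; note this treats bases as independent with a uniform error rate, whereas HaplotypeCaller actually evaluates assembled haplotypes, and the pileup below is fabricated:

```python
# Toy sketch of a Bayesian diploid genotype call at one site: compare the
# likelihoods of ref/ref (0/0), ref/alt (0/1), and alt/alt (1/1) given a
# pileup of base calls with a uniform per-base error rate.
import math

def genotype_log_likelihoods(bases, alt, error=0.01):
    """Log-likelihood of each diploid genotype for the observed bases."""
    lls = {}
    for gt, p_alt in (("0/0", 0.0), ("0/1", 0.5), ("1/1", 1.0)):
        ll = 0.0
        for b in bases:
            # Probability of reading the alt base under this genotype,
            # allowing for sequencing error in either direction.
            p_read_alt = p_alt * (1 - error) + (1 - p_alt) * error
            ll += math.log(p_read_alt if b == alt else 1 - p_read_alt)
        lls[gt] = ll
    return lls

pileup = list("AAAAGGGGGG")  # 4 reference 'A', 6 alternate 'G' (fabricated)
lls = genotype_log_likelihoods(pileup, alt="G")
print(max(lls, key=lls.get))  # "0/1": a heterozygous call fits best
```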
3.4.2 Variant Annotation
Three major tools are used to classify variants functionally: SnpEff (SNP Effects), VEP (Variant Effect Predictor), and ANNOVAR (Annotate Variation) [66, 67]. SnpEff annotates variants based on their genomic location and predicts coding effects, as does VEP, a tool available from the Ensembl genome browser. Besides annotating the functional effects of variants with respect to genes, ANNOVAR has many additional functionalities, such as integrating information from up to 4000 different databases and external resources to annotate the variants. For SNPs, these include (1) calculating their predicted functional importance scores using SIFT (Sorting Intolerant From Tolerant) and PolyPhen2 (Polymorphism Phenotyping v2) and (2) reporting their conservation levels by PhyloP (Phylogenetic P-values) [70, 71] and GERP++ (Genomic Evolutionary Rate Profiling). The CADD (Combined Annotation Dependent Depletion) database is another useful external resource for predicting the deleteriousness of a variant. The CADD score combines information from several resources to score both protein-altering and regulatory variants.
New tools are being developed for variant annotation that consider gene-level metrics (e.g., conservation at the gene level, accumulation of mutational load) and provide more sensitive scoring of variants. GAVIN (Gene-Aware Variant INterpretation for medical sequencing) classifies variants as benign, pathogenic, or of uncertain significance. The MSC (Mutation Significance Cutoff) generates a quantitative, gene-specific phenotypic impact cutoff value above which a variant is considered pathogenic with a 98% true-positive detection rate.
To determine variant frequency, ANNOVAR links to external databases such as the dbSNP database [77, 78] or the Human Gene Mutation Database to identify the presence or absence of a variant (see Table 4 for commonly used databases of human genetic variation). Large-scale genomic studies such as the 1000 Genomes Project, the US National Institutes of Health–National Heart, Lung, and Blood Institute (NIH-NHLBI) ESP6500 exome-sequencing project, and the Exome Aggregation Consortium [37, 81] have catalogued sequence variants from thousands of exomes and genomes, which serve as a valuable resource for allele frequency estimations. These resources are integrated in ANNOVAR, which can retrieve the alternative allele frequency of newly discovered variants in a WES project. The GATK pipeline also integrates ANNOVAR as an external option for variant annotation and can use the tool VariantAnnotator, which offers additional features, such as gene set enrichment, for downstream analysis.
3.4.3 Variant Filtration
Low-quality variants include those with low coverage, low quality scores, or strand bias, as well as those mapping to low-complexity or incomplete regions of the reference genome. GATK uses machine-learning algorithms (VQSR, variant quality score recalibration) to learn from each dataset the annotation profiles of "good" and "bad" variants [48, 62]. The tool assigns scores (VQSLOD, variant quality score log-odds) that can be used to filter out "bad" variants. There is a tradeoff in this process: increasing specificity decreases the sensitivity of the filtering. VQSR can be applied to SNPs or indels. The availability of in-house databases of WES variants obtained with the same sequencing technology and analysis pipeline is recommended, in order to exclude variants resulting from systematic errors (see Note 3).
Under the assumption that common variants are less likely to cause disease than rare ones, it is important to set a minor allele frequency (MAF) threshold based on the disease model of the study. A variant with a MAF greater than 1% is regarded as common; the remainder are considered rare or private to the subject or the kindred studied. Setting the MAF threshold at 1%, as is commonly recommended, usually filters out over 70% of the variants.
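Once variants have been annotated with population frequencies, this filter reduces to a simple threshold; the gene names and MAF values below are fabricated placeholders for annotations that would come from resources such as ExAC, 1000 Genomes, or ESP6500:

```python
# Minimal sketch of MAF-based filtering for a rare-disease model: keep
# only variants whose population minor allele frequency is below 1%.
# All MAF values are invented for illustration.
variants = [
    {"gene": "GENE_A", "maf": 0.00004},
    {"gene": "GENE_B", "maf": 0.32},    # common variant, filtered out
    {"gene": "GENE_C", "maf": 0.002},
    {"gene": "GENE_D", "maf": 0.05},    # common variant, filtered out
]

MAF_THRESHOLD = 0.01  # 1% cutoff, per the disease model above
rare = [v["gene"] for v in variants if v["maf"] < MAF_THRESHOLD]
print(rare)  # ['GENE_A', 'GENE_C']
```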
3.4.4 Variant Prioritization
A deep knowledge of the clinical and cellular phenotype and of the prevalence of the trait in the general population, together with an understanding of the familial segregation, is essential for the prioritization of gene variants. For example, a recessively inherited disease variant is likely homozygous, whereas a dominant disease variant is heterozygous. In general, a dominant allele should be absent from variant databases based on healthy controls, or exceedingly rare to allow for reduced penetrance. However, there can be exceptions to these rules. For instance, recessive disease variants can be compound heterozygous. In a cohort, the search for either identical variants or additional rare variants in the same gene can further strengthen the evidence for causality. Variants found in a gene in which other variants have already been associated with a certain phenotype are more likely to be associated with the same phenotype, although this is not always the case.
Segregation of the variant with disease status is another key criterion for variant prioritization. This requires appropriate WES control data obtained with the same method from healthy subjects, ideally of the same ethnic origin as the patients. In case of complete penetrance, the candidate disease-causing variants found in patients cannot be present in unaffected subjects. In case of incomplete penetrance, the situation is more complex because these hypothetical disease-causing variants can also be present in asymptomatic subjects, including unaffected subjects of the same pedigree.
At the gene level, it is reasonable to first review variants found in genes that participate in pathways related to the phenotype. This is also true when a phenotypically similar disease exists and the related pathways are known. The Human Gene Connectome (HGC) ranks genes by their biological distance to core genes (known to be associated with the phenotype) and provides the distances and all possible biological connections between all pairs of human genes based on protein–protein interaction prediction [74, 84]. Genes can be mapped online to KEGG (Kyoto Encyclopedia of Genes and Genomes) pathways or REACTOME pathways. It is also useful to find information about knockout phenotypes of candidate genes. For this, the Mouse Genome Informatics database enables queries for human–mouse disease and MPO (Mammalian Phenotype Ontology) connections using gene symbols as input. Expression of a candidate gene in the tissues or organs of interest is an important criterion for prioritization. GEO (Gene Expression Omnibus) profiles, the Expression Atlas, and the BioGPS gene annotation portal are excellent resources for this purpose. Knowledge about protein structure, function, and interactions can also help rank candidate genes. UniProtKB (the UniProt knowledgebase) collects information from several databases, including curated protein sequences and structures with links to annotations of genomic variants. The STRING database and associated search tools are powerful resources for identifying interacting partners of a candidate gene's product or for identifying interactions between the products of a set of genes that bear functional variants. The ToppGene and GeneMania web portals are other resources that perform candidate gene prioritization based on the interactome.
3.4.5 Variant Validation
With all the tools available and new ones emerging monthly, variant filtration and prioritization are becoming increasingly automated. A similar trend is observed in other parts of variant analysis, such as detection and annotation. Regardless, a deep understanding of the biological questions being asked and of the etiology of the disease being studied remains crucial for choosing the tools and parameters that best suit a study.
Ultimately, variant validation requires experimental confirmation at the level of the protein, the cell, and, if possible, an animal model to establish causality. This necessitates solid knowledge of the physiology and pathology of the phenotype under study for the design of appropriate experiments relevant to the nature of the protein. Recent breakthroughs in the genetic manipulation of human induced pluripotent stem cells and CRISPR genome-editing tools [96, 97] permit establishing the causal relationship between a candidate genotype and the clinical phenotype in relevant cell types or organoids representing relevant tissues, even for isolated cases.
Broadly, the mode of inheritance can be recessive, dominant, or X-linked. Recessive mutations are easier to identify by filtering for homozygosity, or compound heterozygous mutations. Dominant inherited mutations will be either inherited from one of the parents or be de novo mutations, in both cases dominant mutations should be absent in unaffected family members or matched unrelated controls.
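These inheritance filters map directly onto trio genotypes; the sketch below uses fabricated variant records, with genotypes coded as the number of alternate alleles (0, 1, or 2):

```python
# Minimal sketch of inheritance-based filtering in a family trio.
# Genotypes: 0 = hom-ref, 1 = het, 2 = hom-alt. Records are fabricated.
trio = [
    # (variant_id, child, father, mother)
    ("v1", 2, 1, 1),  # homozygous in child, both parents carriers: recessive
    ("v2", 1, 0, 0),  # het in child, absent in both parents: de novo
    ("v3", 1, 1, 0),  # inherited het: excluded under both models
]

recessive = [v for v, c, f, m in trio if c == 2 and f == 1 and m == 1]
de_novo   = [v for v, c, f, m in trio if c >= 1 and f == 0 and m == 0]

print(recessive)  # ['v1']
print(de_novo)    # ['v2']
```

A fuller filter would also screen for compound heterozygosity, i.e., two different heterozygous variants in the same gene, one inherited from each parent.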
Joint application of variant-calling software to multiple samples is recommended to reduce false-positive variants. Variant calling in regions with fewer reads can also be improved by utilizing reads from multiple samples concurrently. This increases the confidence in any given variant and makes allele bias and strand bias much easier to assess.
The evaluation of family trios can also eliminate low-quality variants, as the majority of variants detected in the child but absent from the parents most likely result from sequence artifacts. Moreover, the accuracy of error detection and variant identification increases with the number of relatives and generations sequenced per family.
- 4. Dean M et al (1996) Genetic restriction of HIV-1 infection and progression to AIDS by a deletion allele of the CKR5 structural gene. Hemophilia Growth and Development Study, Multicenter AIDS Cohort Study, Multicenter Hemophilia Cohort Study, San Francisco City Cohort, ALIVE Study. Science 273(5283):1856–1862
- 39. Genome Reference Consortium (2017) https://www.ncbi.nlm.nih.gov/grc/human
- 52. Andrews S (2010) FastQC: a quality control tool for high throughput sequence data. Available online at: http://www.bioinformatics.babraham.ac.uk/projects/fastqc
- 53. Burrows M, Wheeler DJ (1994) A block-sorting lossless data compression algorithm. Technical report 124, Digital Equipment Corporation, Palo Alto, California
- 62. Van der Auwera GA et al (2013) From FastQ data to high confidence variant calls: the genome analysis toolkit best practices pipeline. Curr Protoc Bioinformatics 43:11.10.1–11.10.33
- 70. Siepel A, Pollard KS, Haussler D (2006) New methods for detecting lineage-specific selection. In: Apostolico A, Guerra C, Istrail S, Pevzner P, Waterman M (eds) Proceedings of the 10th international conference on research in computational molecular biology. Springer, Germany, pp 190–205