Introduction

Pharmacogenomics (PGx) has much potential in cardiovascular medicine, with falling sequencing costs promising further translation to clinical care. Personalised medicine, in the context of PGx, refers to the increasing ability to stratify patients by genetic information so that each patient receives the drugs, and dosages, to which they will be optimally responsive, at the optimal time, while minimising the risk of adverse drug reactions. PGx can further inform the optimal starting dose of a drug and guide safe up-titration.

While much work has been undertaken in PGx and there is sufficient evidence to recommend changing clinical prescribing practice for several cardiovascular drugs (most prominently warfarin, clopidogrel, and statins), uptake remains largely confined to academic centres, and professional bodies within cardiovascular medicine have not issued a firm position [1]. In addition to the clinical prescribing actions consistently recommended by various PGx guideline groups, there are other areas in which the PGx evidence base is at an earlier stage of development but shows enormous promise. One example is cardio-oncology, where genomic variants have been associated with risk of chemotherapy-induced cardiotoxicity [2]. Another is drug-induced long QT, a rare but potentially lethal arrhythmic response to a wide array of medications.

However, to deliver such advances to the bedside and use contemporary scientific gains to improve patient outcomes, further work is needed. This includes building on the scientific evidence base and advancing PGx use in rational drug design and clinical trials, as well as exploiting newer tools such as polygenic risk scores and Mendelian randomisation (Fig. 1). Implementation will also require additional technologic infrastructure. Academia, direct-to-consumer companies, healthcare systems, and industry all have major roles to play if we are to fulfil the long overdue promise of bringing pharmacogenomics into the clinical realm. Ethical issues must also be addressed, and comprehensive economic assessments will inform ethical allocation of resources.

Fig. 1 Overview of PGx pillars and tools in supporting translation to clinical prescribing

Pharmacogenomics in Clinical Cardiovascular Medicine

Intensive and collaborative research, particularly following completion of the Human Genome Project in 2003, has markedly increased our insight into and understanding of pharmacogenomics. To date, the US Food and Drug Administration (FDA)-approved product labels of 298 drugs contain genetic information [3]. The majority of these drugs are indicated in oncology (n = 102) and include both somatic and germline pharmacogenomic biomarkers, followed by psychiatry (n = 35), infectious diseases (n = 30), neurology (n = 27), anaesthesia (n = 16), and cardiology (n = 15, not including warfarin) [3]. Moreover, guideline-writing committees have to date assessed the clinical relevance of pharmacogenomics for 123 drugs, with therapeutic recommendations developed for just over 80 of these drugs and collated in the PharmGKB repository [4]. These assessments have generally focused on well-researched drug-gene pairs and often derive therapeutic recommendations to guide clinical prescribing from evaluating the body of evidence, rather than a single pivotal trial.

The majority of these drug-gene clinical assessments have been undertaken by the Royal Dutch Association for the Advancement of Pharmacy-Pharmacogenetics Working Group (DPWG) and the Clinical Pharmacogenetics Implementation Consortium (CPIC), although the Canadian Pharmacogenomics Network for Drug Safety (CPNDS) and other professional societies also contribute [4]. Several drug-gene pairs have overlapping guidance produced by more than one society, although, on the whole, a high rate of concordance exists between DPWG and CPIC published recommendations [5]. Of the assessed drugs, pharmacogenomic therapeutic recommendations are currently available for 12 cardiovascular drugs (Table 1). It is clear that several of these drugs are frequently prescribed in multiple settings including primary care, as well as by cardiology services, and so are relevant to a broad range of physicians.

Table 1 Cardiovascular drug-gene pairs with actionable therapeutic recommendations in a pharmacogenomics guideline

Established Cardiovascular Drug-Gene Associations

The evidence base underpinning drug-gene associations varies but, for cardiology, is arguably of highest quality for warfarin, clopidogrel, and simvastatin. The pharmacogenomics of these drugs is therefore briefly reviewed below; for further information, the reader is referred to their CPIC guidelines [8,9,10].

The main variants associated with warfarin response are rs9923231, a reduction-of-expression variant in VKORC1 (vitamin K epoxide reductase complex subunit 1), which encodes warfarin’s pharmacodynamic target; carriers require lower warfarin doses. Reduction-of-function (ROF) variants in cytochrome P450 2C9 (CYP2C9, e.g. *2, *3, *5, *6, *8, *11) decrease the metabolism of the more potent S-warfarin enantiomer, also leading to lower dose requirements, while rs2108622 (*3) in CYP4F2 decreases vitamin K metabolism, resulting in higher levels of reduced (active) vitamin K and so higher warfarin dose requirements [8, 11, 12]. Most warfarin pharmacogenomics trials have tested the impact of a genotype-informed warfarin dosing algorithm on time in the therapeutic international normalised ratio (INR) range [13]. However, GIFT, the largest warfarin pharmacogenomics randomised controlled trial (RCT, n = 1597 analysed) to date, investigated a clinical composite outcome of INR ≥ 4, major bleeding, death, or venous thromboembolism, and found a significant decrease in the primary endpoint in the genotyped arm relative to the clinically guided arm (10.8% vs 14.7%, p = 0.02) [14].
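To illustrate the general shape of such genotype-informed dosing algorithms, the sketch below shows a regression-style dose predictor. All coefficients and covariates are hypothetical placeholders for exposition only, not the published IWPC or GIFT model values; published algorithms are validated linear models, often fitted on a square-root dose scale, and must not be inferred from this sketch.

```python
# Illustrative structure of a genotype-informed warfarin dosing algorithm.
# All coefficients are PLACEHOLDERS, not published values; real algorithms
# are validated regression models, typically predicting sqrt(weekly dose).

def predicted_weekly_dose_mg(age_decades, height_cm, weight_kg,
                             vkorc1_rs9923231_alleles,  # 0-2 copies of the ROE variant
                             cyp2c9_rof_alleles,        # 0-2 reduction-of-function alleles
                             cyp4f2_v433m_alleles):     # 0-2 copies of CYP4F2 *3
    sqrt_dose = (
        5.6                                   # hypothetical intercept
        - 0.26 * age_decades                  # dose requirement falls with age
        + 0.012 * height_cm
        + 0.011 * weight_kg
        - 0.87 * vkorc1_rs9923231_alleles     # reduced VKORC1 expression -> lower dose
        - 0.70 * cyp2c9_rof_alleles           # slower S-warfarin clearance -> lower dose
        + 0.23 * cyp4f2_v433m_alleles         # faster vitamin K turnover -> higher dose
    )
    return max(sqrt_dose, 0.0) ** 2           # back-transform to mg/week

# Example: 70-year-old (7 decades), VKORC1 homozygote, one CYP2C9 ROF allele
print(round(predicted_weekly_dose_mg(7, 170, 75, 2, 1, 0), 1))
```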

Real-world implementation in anticoagulation clinics has shown warfarin pharmacogenomics to be feasible and beneficial [15]. Important considerations for warfarin pharmacogenomics include patient ethnicity, to avoid misclassification of CYP2C9 metaboliser status; utilisation of pre-emptive or point-of-care genotyping, as RCT protocols specified early use of genotyping algorithms (e.g. days 1–5 or 1–11 after starting warfarin); and the wider landscape of oral anticoagulation [13, 14, 16]. Clearly, the uptake of direct acting oral anticoagulants (DOACs) has correspondingly reduced the number of patients starting warfarin [17]. However, genetic analysis of ENGAGE AF-TIMI 48 trial data indicates that patients carrying warfarin-sensitising alleles (in VKORC1 and CYP2C9) have an increased risk of bleeding on warfarin compared to edoxaban, while bleeding risk does not differ between drugs in patients without these alleles [18]. This suggests that oral anticoagulation stratification based on VKORC1/CYP2C9 variants may provide an overarching anticoagulation prescribing strategy that could be clinically and cost-effective, but it requires prospective testing [19].

Clopidogrel pharmacogenomics focuses on ROF variants in CYP2C19 (e.g. *2, *3) that reduce its biotransformation from prodrug into its active thiol metabolite, leaving increased residual platelet reactivity [20]. The clinical indication for antiplatelet therapy appears important for the clopidogrel-CYP2C19 pair, with greater utility considered for percutaneous coronary intervention (PCI), particularly after an acute coronary syndrome, given the higher baseline risk of major adverse cardiovascular events (MACE), including stent thrombosis, in these settings compared to lower risk indications [9, 21]. Two recent RCTs, POPular Genetics and TAILOR-PCI, have been reported. POPular Genetics showed that CYP2C19-informed antiplatelet stratification (CYP2C19 ROF carriers received ticagrelor or prasugrel, and non-carriers clopidogrel) was non-inferior to standard treatment with ticagrelor/prasugrel for net adverse events (p < 0.001 for non-inferiority) and reduced bleeding (p = 0.04) [22]. However, TAILOR-PCI narrowly missed its primary MACE endpoint comparing CYP2C19-informed antiplatelet stratification to standard care with clopidogrel (4.0% vs 5.9%, p = 0.06), but did observe that genotyping reduced both MACE when multiple events per patient were considered (p = 0.01) and, in post hoc analysis, MACE within the first 3 months (p = 0.001) [23]. Multisite assessment has demonstrated overall feasibility of implementing CYP2C19 testing and reported higher MACE in CYP2C19 ROF carriers prescribed clopidogrel versus alternative therapy [24, 25].
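The stratification rule tested in these trials is straightforward to express. Below is a minimal sketch of the carrier/non-carrier logic described above, with diplotype handling deliberately simplified and only the ROF alleles named in the text considered; it is illustrative, not a clinical decision tool.

```python
# Minimal sketch of CYP2C19-guided antiplatelet selection as described in
# the text: ROF carriers receive an agent that does not depend on CYP2C19
# bioactivation; non-carriers receive clopidogrel. Simplified illustration.

CYP2C19_ROF_ALLELES = {"*2", "*3"}  # the ROF alleles named above

def select_p2y12_inhibitor(diplotype):
    """Map a CYP2C19 star-allele diplotype to a P2Y12 inhibitor choice."""
    if any(allele in CYP2C19_ROF_ALLELES for allele in diplotype):
        return "ticagrelor or prasugrel"  # bypasses CYP2C19 bioactivation
    return "clopidogrel"

print(select_p2y12_inhibitor(("*1", "*2")))   # ROF carrier -> alternative agent
print(select_p2y12_inhibitor(("*1", "*17")))  # non-carrier -> clopidogrel
```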

The rs4149056 (p.V174A) missense variant in solute carrier organic anion transporter family member 1B1 (SLCO1B1) reduces the intrinsic activity of its encoded hepatic influx transporter, OATP1B1. This variant has been correlated with increased statin exposure, particularly for simvastatin acid, and consistently associated with simvastatin muscle toxicity ranging from mild events to severe myopathy and rhabdomyolysis [26,27,28,29]. Although a strong pharmacokinetic association is recognised between atorvastatin and SLCO1B1 rs4149056, its influence on atorvastatin muscle toxicity remains less clear than for simvastatin [30, 31]. In contrast to warfarin and clopidogrel, no prospective statin pharmacogenomics RCT has yet been reported, although the I-PICC RCT is underway [32]. Arguments against clinical utility maintain that prescribing statins other than simvastatin, or merely starting with low-dose simvastatin, could obviate any advantage of genetic testing. This approach, however, may underuse simvastatin or result in reluctance to up-titrate it appropriately. Furthermore, as genetic information becomes increasingly accessible, this testing would likely be included in panel-based, rather than stand-alone, pre-emptive testing.

Emerging Cardiovascular Pharmacogenomics

Cardio-oncology is a relatively new but expanding cross-disciplinary subspecialty, reflecting the rapid development of novel therapeutics, the increasing proportion of patients with cancer surviving long term, and the growing recognition of chemotherapy-associated cardiotoxicity [33]. Notably, anthracyclines are among the most commonly used chemotherapeutics for paediatric and adult leukaemia, lymphoma, and certain solid organ malignancies (e.g. breast, lung cancer), yet their overall effectiveness is limited by anthracycline-induced cardiotoxicity (ACT). There is high interindividual variability in susceptibility to and severity of ACT, ranging from asymptomatic cardiac dysfunction in up to 57% of exposed patients to congestive heart failure in 16–20% [34]. The CPNDS has developed clinical guidance recommending testing of rs2229774 (S427L) in retinoic acid receptor gamma (RARG), rs7853758 (L461L) in SLC28A3, and rs17863783 in UGT1A6 in all paediatric cancer patients with an indication for doxorubicin or daunorubicin [34]. The underlying evidence is based on a limited number of observational genetic studies without functional validation, rather than RCTs, and is graded as moderate; suggested management recommendations for patients at high ACT risk include increased surveillance, liposomal anthracycline formulations, and alternative chemotherapy where possible [34,35,36,37].

There is also growing recognition that rare variants in cardiomyopathy-associated genes, particularly truncating variants in titin, are associated with increased risk of chemotherapy-associated cardiotoxicity, and especially ACT [38, 39]. Titin is an enormous protein, encoded by ~363 exons, expressed in cardiac and skeletal muscle [40]. Titin truncating variants (nonsense, frameshift, essential splice site) underlie 25% of familial cases of idiopathic dilated cardiomyopathy, but truncating variants affecting all transcripts are rare in the general population [40, 41]. The majority of dilated cardiomyopathy cases are inherited in an autosomal dominant manner; the exact pathogenesis of titin truncating variants is unknown, but a dominant negative mode of action, albeit not through a poisonous-protein mechanism, is thought more likely than haploinsufficiency [42, 43].

Polygenic Risk Scores—Practical Applications

Chronic cardiovascular disease presents a large burden to society [44]. Risk stratification is essential to determine where intervention is advisable, as well as the potential impact of different interventions. Current risk scores, which incorporate clinical and laboratory-based risk factors, can identify individuals at high risk who are suitable for selective preventative strategies, e.g. prescribing cholesterol-lowering medications to reduce coronary heart disease risk [45]. However, such clinical cardiovascular risk calculators fail to identify up to 40% of people who develop cardiovascular disease, and their utility is limited among young individuals [45, 46].

Cardiovascular disease is polygenic; therefore, a single variant is not informative for assessing disease risk. Instead, the genetic risk loading conferred by a combination of variants can identify those at high risk. This genetic risk is most often assessed through polygenic risk scores (PRSs): weighted sums of risk alleles. PRSs capturing this cumulative genetic burden have recently been shown to correlate with case status in coronary heart disease (CHD) [47].
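As a concrete illustration of this weighted-sum construction, the sketch below uses hypothetical variants, weights, and genotypes; real CAD scores use thousands to millions of variants, with weights derived from genome-wide association study summary statistics.

```python
# Minimal sketch of a PRS as a weighted sum of risk-allele dosages.
# Variant IDs, weights, and genotypes are hypothetical placeholders.

weights = {"rs111": 0.12, "rs222": 0.08, "rs333": -0.05}  # per-allele effects (log odds)
dosages = {"rs111": 2, "rs222": 1, "rs333": 0}            # risk alleles carried (0-2)

prs = sum(weights[snp] * dosages[snp] for snp in weights)
print(f"PRS = {prs:.2f}")  # interpreted relative to a population distribution,
                           # e.g. flagging the top percentiles as high risk
```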

This year, a number of studies on the role of a coronary artery disease (CAD) PRS for risk stratification have been published, with conflicting evidence [48,49,50,51]. Studies by Marston et al. and Elliott et al. addressed the clinical utility of large genome-wide polygenic risk scores incorporating more than 6 million single-nucleotide polymorphisms in CAD prediction, compared with clinical risk stratification tools [49, 50]. Both studies demonstrated a lack of clinical utility when polygenic risk scores were added to pooled cohort equations. They join several other published studies that examined polygenic risk scores for CAD across a broad range of population samples [47, 52,53,54]. Conversely, studies by Marston et al. and Damask et al., which analysed the role of a CAD PRS for stratification of patients with pre-existing CAD enrolled in trials of the PCSK9 inhibitors alirocumab and evolocumab, found that, among individuals in the placebo arms, a high CAD PRS was strongly associated with cardiovascular events, even after adjusting for baseline traditional risk factors [50, 51]. In the treatment arms, individuals at high genetic risk derived the greatest absolute and relative benefit from PCSK9 inhibitors.

These latter studies provide evidence that a CAD PRS predicts cardiovascular events even in the setting of pre-existing CAD, independent of traditional risk factors, and that clinical application of a CAD PRS in this setting could be used to titrate the intensity of low-density lipoprotein cholesterol-lowering therapy. A more recent study by Mars et al., which set out to test the utility of PRSs derived from large-scale genomic information for predicting first disease events in five diseases including CAD, showed that adding PRSs to clinical risk prediction revealed two patterns with implications for clinical utility [55]. Firstly, for early-onset CAD, the CAD PRS identified individuals missed by clinical risk scores, comprising 13% of the early-onset cases. Most cardiovascular risk calculators have been trained on data from middle-aged individuals, and their ability to identify people at risk of early-onset CAD is therefore limited. Improved identification of these high-risk individuals could allow for targeted preventive efforts; e.g. targeting cholesterol-lowering treatments or lifestyle modification may be particularly useful in individuals with a high CAD PRS. Secondly, the CAD PRS improved reclassification of some older individuals into a lower risk category. As age is an important risk driver in most cardiovascular risk calculators and can therefore lead to false positives in older age groups, CAD PRSs may reduce overestimation of risk and subsequent overtreatment, reducing polypharmacy.

Evidence regarding the use of PRSs in other fields of cardiovascular disease, such as long QT syndrome (LQTS), is also inconsistent; this condition highlights the tension between the predictive value of rare or monogenic variants and polygenic risk prediction in a phenotype that can be caused by both. Comprehensive genome-wide association analysis has not identified single common variants strongly associated with drug-induced Torsade de Pointes [56]. However, a polygenic risk score of 61 common variants associated with baseline QT interval has been associated with drug-induced QT prolongation and Torsade de Pointes [57]. In one of the largest and most comprehensive such studies of LQTS, Lahrouchi et al. pooled 1656 LQTS cases, finding that aggregate common variation is associated with LQTS and may define a distinct genetic subtype of LQTS [58]. The authors combined 68 primarily common variants previously associated with population QT variation into a PRS and found that a higher score was associated with LQTS in both European and Japanese ancestry cohorts. They also showed that genotype-negative individuals (i.e. without an identified rare LQTS variant) have a higher PRS than genotype-positive LQTS cases. This genotype-negative group had a robust phenotype (average QTc approximately 500 ms). These results suggest that a significant proportion of LQTS disease burden is explained by common variation, and that a higher burden of common QT interval-associated variants increases the risk of overt LQTS in patients considered genotype-negative. Conversely, a recent study by Turkowski et al. of 423 patients genotyped for 61 variants associated with LQTS found that variability in QTc was influenced most by the patient’s rare LQTS-causative variant rather than the PRS [59]. The PRS explained < 2% of QTc variability in this cohort.

The practical applications of polygenic risk information to stratified screening, or for guiding lifestyle and medical interventions in the clinical setting, remain to be defined in further studies and must be compared with existing gold standard care. The majority of the cardiovascular risk score literature is limited to European ancestry populations, and few researchers note the limitations this confers on clinical application to a broader and more diverse population (which could worsen health disparities) [51].

Cost-effectiveness

Cost-effectiveness analysis measures the efficacy of different means of achieving a given goal at the same cost. Cost-utility analysis can be understood as a more refined cost-effectiveness analysis, in which the benefit of an intervention is quantified by a multidimensional unit, such as the quality-adjusted life year (QALY), which estimates benefit as the number of years of perfect-quality health gained from an intervention. As QALYs are multidimensional, proposed PGx interventions can be compared both with one another and with different health interventions to understand the benefit to the populace per economic unit of an available budget.
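For illustration, the sketch below computes an incremental cost-effectiveness ratio (ICER), the standard summary statistic of such cost-utility comparisons; all figures are hypothetical.

```python
# Minimal sketch of a cost-utility comparison with hypothetical figures:
# the ICER divides the extra cost of PGx-guided prescribing by the extra
# QALYs it yields, for comparison against a willingness-to-pay threshold.

cost_standard, qaly_standard = 1_000.0, 8.00   # per patient, hypothetical
cost_pgx,      qaly_pgx      = 1_450.0, 8.03   # genotyping + downstream care

icer = (cost_pgx - cost_standard) / (qaly_pgx - qaly_standard)
print(f"ICER = {icer:,.0f} per QALY gained")   # 15,000 per QALY here

# Conventionally deemed cost-effective if the ICER falls below the payer's
# threshold (e.g. 20,000-30,000 GBP per QALY for NICE in the UK).
```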

Methodologic barriers to such studies in PGx include challenges in assessing the cost of the testing itself, the cost of integrating testing within the clinical pathway, assessment of benefits, assessment of utility, and comparison with other possible interventions. In addition to the difficulties of measuring the direct cost/benefit of genomic testing, the lack of uniform practice in incorporating these results into the clinical, diagnostic, and therapeutic pathway makes estimates of indirect costs highly variable [60]. Some of these costs could and should be quantified, such as the time in a clinical consultation needed to discuss genomic testing results, but others, such as the burden of variants of unknown significance on clinician resources and accountability, would be extremely difficult to quantify.

There is also the question of the cost of upskilling a clinical workforce that does not normally use genetic data so that it can interpret such data appropriately in this limited setting. Resources would need to be in place to support this work, as confusion can be anticipated; where this support would sit and what such a service would cost is unclear and cannot reasonably be estimated with certainty. There is also downstream cost in the intervention required for any incidentally discovered information. For example, if a potential genetic predisposition to drug-induced long QT syndrome, a condition associated with lethal arrhythmias, is detected, then, in addition to the prescribing implications, this person may appropriately be referred to an inherited channelopathy specialist and undergo further cost-generating investigations that would not have been undertaken in the absence of this test (and in the absence of phenotypic disease expression or family history) [61]. There may be long-term demonstrable economic benefits from these tests if early detection of individuals at risk modifies adverse outcomes, but these would also need to be quantified (and risk stratification and modification in such cases is highly debatable in the context of asymptomatic screening) [61]. Existing cost-effectiveness analyses are sensitive to the cost of genotyping, so cost-utility estimates may shift substantially and should be updated as the cost of genetic testing decreases [62]. Furthermore, health economic studies may not be generalisable outside the geographical context of the study, as they depend on allele frequencies in a given population, which can vary widely between nations [63].

While most PGx health economic studies to date have found pharmacogenomics to be cost-effective, with an increasing emphasis on cost-utility analysis, there is uncertainty in these estimates [64,65,66]. Health outcomes of a clinical intervention are difficult to measure robustly in the absence of a randomised controlled trial (RCT). Since very few RCTs have compared PGx-guided therapy to gold standard therapeutic intervention without PGx testing, there is uncertainty about clinical benefit, and this cascades down to further extrapolated analyses, including economic analyses [60, 65, 67]. Additionally, it is unclear how patients may react when confronted with their genetic information and how this may interface with lifestyle choices and medication compliance (adding uncertainty to the calculation of benefit) [68].

The role of the cost-utility estimate is to make healthcare spending decisions transparent and objective, quantifying the cost/benefit ratio in a way that allows comparison of different potential interventions, treating different conditions, in different ways, within the available monetary resources of the health system in question. This addresses the core ethical principle of distributional justice and is important because each spending decision carries an opportunity cost: the cost of not having spent the money on a different health intervention. Though some assumptions and uncertainties are unavoidable in health economic estimates, stating these factors and accounting for them in uniform ways across analyses can make cost-effectiveness comparisons more robust and will add transparency to resource allocation decisions, which promotes public trust.

Ethical Issues in Clinical Application

Though PGx holds the key to many proven and possible improvements in clinical care, as detailed above, ethical concerns remain that must be addressed prior to broad clinical application. These principally include confidentiality and data sharing, as well as the manifold concerns generated by the use of genetic tests for non-diagnostic purposes. While questions of confidentiality and data sharing can arguably be mediated by existing consensus on the use of genetic information, pharmacogenomics enters a new dimension: as a proposed population-level screening programme, likely delivered via panels of genes or single-nucleotide polymorphisms, it would produce a level of genetic identification within a population that has not arisen from other routine uses of genetic testing.

Furthermore, genetic testing in the context of PGx, rather than diagnostic testing, could be considered a screening test, so comparison with accepted criteria for diverse genetic and non-genetic screening tests is reasonable. The Wilson and Jungner World Health Organization (WHO) screening criteria were published more than 50 years ago but remain the touchstone for judging the appropriateness of diverse proposed population-wide screening measures [69]. It would be difficult to argue that PGx testing currently fulfils these original or modified WHO criteria [70], specifically encountering problems with pre-symptomatic testing; suitability of the various tests available; acceptability to the population; agreed policy on whom to select for testing, with a not yet clearly defined target population; equity of access; and integration with education, testing, clinical services, and programme management. None of these are out of reach or absolute barriers, but the ethical facets are conjoined with the regulatory and legal infrastructure and must be addressed holistically before PGx unfolds on a population scale.

There are also ethical arguments for implementation of PGx in specific contexts where high quality evidence shows better outcomes for patients when PGx is used compared with standard care; this falls under prescribers’ obligation to minimise harm. The argument hinges on solid evidence of harm from gold standard prescribing in the absence of PGx. One such example is the evidence for in-stent thrombosis on clopidogrel in high-risk ACS patients, where an effective alternative therapeutic is available and an adverse consequence of not checking for poor metaboliser status is foreseeable.

These are just a few prominent threads of ethical discourse related to PGx, and they illustrate the need for unified guidance and proactive planning to address ethical barriers to population-level PGx implementation. Ideally, professional guidelines, as well as national legislation and regulatory bodies, should address all of the ethical domains listed below prior to expanded PGx use in cardiovascular medicine:

  1. Confidentiality and genetic data in the context of PGx, with specific provision for forensic use and breaches of confidentiality to prevent harm to relatives.

  2. Privacy and data protection, with anticipation of large data repositories and big data sharing. The role of possible collaboration with private industry should be addressed.

  3. Informed consent for PGx panel testing for screening purposes.

  4. Transparency in evolving reclassification of genetic variants with emerging evidence.

  5. Responsibility in the interpretation and actioning of genetic variants identified through PGx testing.

  6. Distributional justice in the health economics of PGx testing.

  7. Social justice in PGx testing, given limited understanding of variants in persons of diverse and admixed ethnic backgrounds.

Direct-to-Consumer Testing

Direct-to-consumer (DTC) genetic testing emerged in the early 2000s, and the industry has grown rapidly, particularly since late 2016 [71,72,73]. The majority of products target ancestry or paternity testing, although multiple health-related DTC products are available [72]. Concerns over variable standards of analytical and clinical validity and over advertising practices led the FDA to intervene in the USA, ruling that health-related DTC tests constitute medical devices; DTC tests for moderate-to-high risk medical purposes now require FDA clearance [71, 74, 75]. Subsequently, four 23andMe DTC tests have received FDA marketing authorisation, including a pharmacogenomics DTC panel [75]. Of note, though, the DTC pharmacogenomics panel can report general information about whether the consumer has variants that influence drug levels, but cannot relate specific variants to clinical response or treatment recommendations for any specific drug [75]. Furthermore, the FDA issued a safety communication in 2018 specifically regarding DTC pharmacogenomic tests, cautioning consumers and healthcare providers that many of the purported drug-gene associations being tested were neither scientifically nor clinically verified and that these tests had not been reviewed by the FDA [76]. There is, however, recognition that the growing availability of DTC testing and public involvement will push the need to understand the ethical and social implications of integrating genomics into healthcare practice, as well as putting the onus on the healthcare sector to improve workforce genomic literacy [77].

Genomics in Drug Discovery and Development

Genomics provides exciting tools to support drug discovery and development. The clinical goals of pharmacogenomics are to maximise drug efficacy, avoid adverse drug reactions, and target responsive patients. Although the FDA have included details of 298 drug-gene pair associations in drug labelling, only 15% of these associations are based on convincing randomised clinical trial (RCT) data [3, 78]. Genotype-based RCTs have only been initiated over the past 15 years. Using genotype as a biomarker of drug response or toxicity in RCTs enables smaller sample sizes, lower costs, a greater likelihood of success, and minimisation of adverse events.
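A toy power calculation illustrates the sample-size argument (all numbers hypothetical): if only variant carriers, at frequency p, respond to treatment, an all-comers trial sees the mean effect diluted to p times the carrier effect, and the required sample size per arm scales with one over the squared effect, inflating the trial by roughly 1/p².

```python
from scipy.stats import norm

# Toy two-arm sample-size comparison (hypothetical numbers) showing why
# genotype enrichment shrinks trials: diluting a carrier-only effect delta
# across all-comers (carrier frequency p) gives mean effect p * delta, and
# n per arm scales with 1 / effect**2.

def n_per_arm(effect, sd, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm for a two-sample mean test."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z * sd / effect) ** 2

delta, sd, p = 0.5, 1.0, 0.3                  # effect present in carriers only
print(round(n_per_arm(p * delta, sd)))        # all-comers design: ~698 per arm
print(round(n_per_arm(delta, sd)))            # genotype-enriched: ~63 per arm
# Trade-off: the enriched design must still genotype ~63 / 0.3 (~210)
# screened patients per arm to enrol 63 carriers.
```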

The results of such genotype-guided RCTs vary and highlight the challenges and barriers to delivering pharmacogenetics in a clinical setting [16, 20, 79,80,81,82,83]. Factors beyond the gene-drug relationship will affect widespread test adoption, such as stakeholder acceptance of pharmacogenetics, provider and patient education, optimal messaging of pharmacogenetic pharmacotherapy recommendations, and robust information technology infrastructure.

Studies estimate that drug mechanisms with genetic support are twice as likely to succeed from phase I to approval as those without [84, 85]. Therefore, increasing the proportion of discovery and development activities focused on targets with genetic support, and allowing genetic data to guide selection of the most appropriate indications, should lead to lower rates of failure due to lack of efficacy in clinical development. Genetic resources that integrate all known genetic associations to identify potential causal disease pathways could be an important tool for drug discovery and development. Such a resource could be used to prioritise projects and help reduce attrition rates in clinical trials.

Given the high attrition rates, substantial costs, and slow pace of new drug discovery and development, repurposing of ‘old’ drugs to treat both common and rare diseases is an increasingly attractive proposition. Repurposing uses de-risked compounds, with potentially lower overall development costs and shorter development timelines, and has recently been applied to a number of traditional drugs for the treatment of COVID-19 [86, 87].

Drug repurposing (also called drug repositioning, reprofiling, or re-tasking) is a strategy for identifying new uses for approved or investigational drugs outside the scope of the original medical indication [88]. The strategy has improved over the past 20 years with new discoveries including, more recently, genetic information [89,90,91,92]. Thus, where an existing drug targets a gene product or pathway involved in a disease different from the original indication, fewer clinical trials may be needed to alter the licensed indication, as safety has already been demonstrated. An example of repurposing is sildenafil, initially produced with the expectation of reducing angina, and later found to treat erectile dysfunction and pulmonary hypertension [93, 94]. Evidence exists for repurposing of drugs and candidates for drug development in the context of coronary artery disease, suggesting that in silico analysis using existing databases and genetic findings may accelerate translation into clinical practice [95, 96]. Clinical trials are now needed to explore the potential value of these agents. Population selection based on genotype could theoretically streamline repurposing.

Mendelian Randomisation

Mendelian randomisation (MR) is a technique that uses genetic proxies for exposures of interest to support causal association with an outcome of interest, under set assumptions [97]. As loci are randomly allocated during meiosis, this can be viewed as a genetic equivalent of a prospective randomised controlled trial, with randomisation at birth [98]. MR is therefore a form of experimentation that can add support for a causal relationship to an otherwise observational clinical cohort dataset prone to complex confounding and reverse causality [97].
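The core estimator is easy to state. Below is a minimal sketch of the Wald ratio, the simplest MR estimate, using hypothetical summary statistics; real analyses combine many variants (e.g. by inverse-variance weighting) and probe the instrument assumptions.

```python
# Minimal sketch of the Wald ratio MR estimator with hypothetical summary
# statistics: the causal effect of exposure X on outcome Y is the ratio of
# the variant-outcome and variant-exposure association estimates.

beta_gx, se_gx = 0.20, 0.02   # variant -> exposure (e.g. biomarker level)
beta_gy, se_gy = 0.05, 0.01   # variant -> outcome (e.g. log odds of CHD)

wald_ratio = beta_gy / beta_gx

# First-order delta-method standard error for the ratio:
se_wald = abs(wald_ratio) * ((se_gy / beta_gy) ** 2 +
                             (se_gx / beta_gx) ** 2) ** 0.5

print(f"causal estimate = {wald_ratio:.2f} (SE {se_wald:.3f})")
```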

This is highly relevant to cardiovascular pharmacology and serves as a useful mode of target validation for therapeutic design, as well as drug repurposing [99, 100]. MR can be applied to retrospectively collected cohort data to support therapeutic target validation for repurposing prior to clinical trials. One study, for example, used genetic tools to mimic the action of an IL6 inhibitor, such as those used in rheumatoid arthritis (i.e. tocilizumab), to demonstrate decreased odds of coronary artery disease [101]. MR can also provide useful confirmation of a target of interest for drug design or to support a clinical trial. It may likewise help predict negative trial results and adverse drug effects, thereby avoiding taking therapeutics likely to be ineffective or harmful into clinical trials. One group of investigators used a PLA2G7 loss-of-function variant, analogous to use of the Lp-PLA2 inhibitor darapladib, and reached conclusions concordant with the negative clinical trials, in that there was no effect on major vascular disease; the authors conclude that such techniques could limit investment in clinical trials that genetic tools predict would be negative [102]. In another example, MR was used to reach conclusions in keeping with clinical trial findings for ivabradine, using an HCN4 locus variant that mimics the effect of the therapeutic [103]. MR can also discourage further clinical investigation of hypothesised targeted therapy based on observational data that may be biased by confounding. An example is a study that found no evidence to support a causal association between vitamin D or fatty acid supplementation and altered risk of major depressive disorder (such an association had been suggested by observational data and therefore proposed as a potential therapeutic intervention) [104].

MR thus adds value to existing data and can be used to test the effect of risk variants on an outcome of interest. This could be particularly helpful in PGx, as higher quality evidence could be derived from observational databases linking PGx variants of interest to adverse outcomes within a drug exposure group, thereby predicting the effect of PGx-guided changes in therapy on avoiding adverse outcomes (via either decreased efficacy or increased toxicity). This may also be a pragmatic way to address demands for increased evidence prior to PGx implementation in the absence of prospective randomised controlled trials for every proposed drug-gene pair (clearly not viable at scale). A Scottish team exemplified this approach by using a genetic variant to stratify outcomes for stroke patients collecting clopidogrel prescriptions; they demonstrated increased risk in post-stroke patients taking clopidogrel who had a loss-of-function variant in the enzyme needed to convert the inactive prodrug into its active metabolite [105]. As with any methodology, there are limitations that must be acknowledged; importantly, there is a risk of undetected horizontal pleiotropy if the variant used as a proxy for exposure affects the outcome through mechanisms other than the modelled exposure [106].

Another potential use of MR in PGx is as a biomarker validation tool within complex systems [100]. The biomarker may lie within the biologic pathway of interest or may indicate a genetic metabolism phenotype; MR can support or refute a causal relationship with a cardiovascular outcome of interest, which in turn supports rational targeted therapeutic development and use [100]. An example is a negative MR study that used genetic markers of raised CRP to conclude that CRP does not have a causal link with coronary disease [107].

MR can also be used as a pharmacovigilance tool [108]. Several studies have shown an increased risk of diabetes with LDL cholesterol lowering via two different pharmaceutical agents with differing drug targets and mechanisms of action. In one such study, the investigators used genetic PCSK9 variants to demonstrate that lower LDL cholesterol was accompanied by higher fasting glucose, increased central adiposity, and increased risk of type 2 diabetes mellitus [109]. Likewise, a further study used MR with genetic proxies for statin therapy (HMGCR variants) to show that genetically simulated statin therapy, in addition to the desired lowering of LDL cholesterol, increased weight, central adiposity, fasting glucose and insulin levels, and risk of new onset type 2 diabetes [110]. Another study found that genetically simulated exposure to PCSK9 inhibitors increased the risk of Alzheimer’s dementia more than it decreased the risk of coronary artery disease [111]. An MR study of antihypertensives identified a new association of non-dihydropyridine calcium channel blockers with a small increase in the risk of diverticulosis [112]. MR has also been proposed as a pandemic tool of choice for investigating areas of therapeutic debate, such as the use of ACEi/ARBs in patients with COVID-19 [113].

MR holds much promise in PGx going forward, in target validation, repurposing, pharmacovigilance, and interfacing with polygenic risk scores. Furthermore, use of MR as a screening tool may increase the efficacy of funds invested in clinical trials. As with other forms of large genetic data resources, academia may collaborate with industry, particularly direct-to-consumer (DTC) testing, in an evolving way in the future (due to the benefits of pooled resources and the scale of DTC testing undertaken). It is important to consider the scientific benefits of increased collaboration as well as the significant potential ethical and regulatory concerns.

Future Directions—Projections

It might be said that the promise of a genomics revolution in healthcare has grown long in the tooth. However, over the last decade, multiple early adopters have implemented pharmacogenomics into clinical practice, demonstrating feasibility and patient benefit and sharing lessons learned [24, 25, 114,115,116]. While these early starters have mostly been specialised academic centres, at least 14 national genomic medicine initiatives are now underway [117]. For example, following on from the 100,000 Genomes Project, the UK has committed to sequencing five million genomes in 5 years, and pharmacogenomics is anticipated to be an integral facet of the NHS Genomic Medicine Service [118, 119].

These initiatives are driving transformative change while addressing implementation barriers and accruing real-world big data for analysis, generating the evidence for further adoption [117]. To illustrate, a recent large-scale pharmacogenomics analysis using UK Biobank (200 drugs; 9 genes; 200,000 participants) confirmed several drug-gene associations (e.g. warfarin-CYP2C9, simvastatin-SLCO1B1), provided strong evidence for relatively novel associations (e.g. warfarin-rs3814637 in CYP2C19), and discovered new associations (e.g. nicardipine dose and CYP2C19 metaboliser status) [120]. More generally, a transition from current healthcare models to learning healthcare systems/communities, which bring healthcare delivery, real-world data collection, evidence generation, implementation, and follow-up outcome monitoring closer together in an iterative positive cycle, will optimise returns on investment in genomic and other novel technologies, and so improve healthcare quality, encourage patient engagement, and increase the rate of adoption of validated new findings [121]. This must be coupled with guidance from professional bodies to support prescribers in implementing pharmacogenomic tools that are currently validated for clinical use.

Iterative implementation of cardiovascular pharmacogenomics within such a system will require increased multidisciplinary working, for instance between primary care physicians, clinical pharmacologists, pharmacists, cardiologists, geneticists, bioinformaticians, clinical academics, patient advocacy groups, funding stakeholders, and industry partners (e.g. genomic providers, including DTC services and app developers). Coupling large datasets to artificial intelligence and machine learning approaches will offer further insights, for example facilitating interpretation of previously uncharacterised combinations of variants. Notably, a neural network model has improved CYP2D6 genotype-to-phenotype translation from sequence data, which may have utility for flecainide and propafenone as well as metoprolol and other beta blockers metabolised by CYP2D6 [122, 123].

Furthermore, while RCTs represent gold standard evidence, there are inherent limitations to pharmacogenomic RCTs including:

  • the number of drug-gene/variant associations identified in observational data is outstripping the resources and time available to test them individually in RCTs,

  • differences in variants between ethnicities can limit RCT generalisability,

  • pharmacogenomic RCTs can require relatively large sample sizes due to only a proportion carrying the variant(s) of interest, and

  • there remains a lack of consensus on the evidential threshold required for prescription optimisation biomarkers such as pharmacovariants [23, 124].

Thus, real-world big data are expected to play an increasingly prominent role in generating the evidence to inform appropriate utilisation of pharmacogenomics.

Moving forward, polygenic risk scores for cardiovascular diseases combined with clinical risk factors may refine individual risk predictions, facilitating more informed patient-physician interactions regarding the benefits of starting cardiovascular (e.g. primary prevention) drugs for the individual patient. Forthcoming polygenic risk scores are also anticipated to improve adverse drug reaction risk predictions. Advances in the prediction of toxicity, such as drug-induced LQTS, may be facilitated by basic science studies using in vitro models, as demonstrated by prior work in the context of drug-induced liver injury [125]. Integration of genomic data with other -omics data (e.g. transcriptomics, proteomics, metabolomics) into multi-omics models is improving our understanding of cardiovascular disease and drug actions; the latter is exemplified by a systems pharmacology approach describing how antiretroviral therapy can alter the activity of an atherosclerotic regulatory gene network and so may promote coronary artery disease [126, 127]. Importantly, such systems biology approaches, as well as Mendelian randomisation and human gene knockout investigations, are expected to drive the development of novel therapeutics in the cardiovascular space, such as novel drugs to stabilise atherosclerotic plaques.

Lastly, pharmacogenomics will also offer a route to understanding adverse event signals that emerge from novel therapeutics. Fortunately, to date, the anti-PCSK9 siRNA therapeutic inclisiran has not shown haematological or immunological adverse events [128]. However, such events, in particular thrombocytopaenia, have been reported with a range of antisense oligonucleotide (ASO) therapeutics. Phosphorothioate-containing ASOs have been observed to bind platelet proteins, including platelet factor 4, which is reminiscent of heparin-induced thrombocytopaenia, itself associated with pharmacogenomic signals, suggesting that ASO safety pharmacogenomics studies could be informative [129, 130].