Introduction

Heart transplantation is considered a definitive therapy for end-stage heart failure and has been a life-saving option for more than 50 years. Refinement and expansion of recipient and donor selection criteria, along with decades of lessons learned regarding procurement and immunosuppression strategies, have yielded major gains in the life trajectory of patients living with a transplant, despite inevitable morbidity and accelerated mortality.

Contemporary 1-year survival rates in North America approximate 91%, with a global median long-term survival of 13.9 years conditional on 1-year survival [1, 2]. Mortality < 30 days after transplant is largely from graft failure, whether associated with procurement and other surgical complications, acute and severe rejection, or of unknown etiology (primary graft failure). In contrast, > 30% of deaths within 1 year are associated with infection, based on data from an international cohort from 2009 to 2016 [3]. Life-threatening infection in transplant recipients can be caused by opportunistic pathogens such as disseminated cytomegalovirus (CMV), but much more commonly by bacterial or fungal organisms, either nosocomial/hospital-acquired or community-acquired and enriched in specific endemic areas, e.g., nocardiosis, coccidioidomycosis, and tuberculosis [4].

Contemporary heart transplant candidates have increased rates of hypertension, diabetes, and prior malignancy. In the context of new allocation schemes that prioritize critically ill patients, an increasing number of patients are transplanted from intensive care settings on temporary mechanical circulatory support, with indwelling lines and often with recently treated or persistent pre-transplant infections, e.g., left ventricular assist device (LVAD) driveline infections or hardware endocarditis [1, 5]. Yet, despite the overall increased complexity and debility of heart transplant recipients, nearly 50% continue to receive induction therapy followed by triple oral immunosuppression with fixed-dose protocols, suggesting a lag in nuanced approaches to tailoring therapies to individual patient risk/benefit profiles [6, 7]. Mid- and late-term mortality is mostly associated with malignancy, accelerated by chronic immunosuppression, along with cardiac allograft vasculopathy (CAV) leading to restrictive physiology and other end-organ failure or sudden death [2, 8, 9]. Yet extreme longevity with heart transplant is indeed possible, with a small global cohort currently surviving more than 30 years, despite not having had decades of benefit from novel drugs, minimized immunosuppression regimens, or non-invasive screening platforms for rejection, all measures that have substantially decreased the burden of side effects and complications associated with long-term care.

Newer approaches to balancing rejection with infection require objective measurements of this balance, informed by individual patient characteristics such as age, the above-mentioned comorbidities, and life goals. An elderly patient with prior malignancy in remission might accept more rejection risk, and even tolerate modest rejection events (treated with more conservative immunosuppression protocols), to avoid infections resulting in prolonged hospital stays or the escalated risk of recurrent malignancy associated with augmented immunosuppression. In contrast, a younger patient might tolerate higher doses of immunosuppression, side effects, and recurrent but not life-threatening infections to achieve very long-term post-transplant survival. Thus, biomarkers, or biological marker approaches to patient management in this context, provide an opportunity to rationally deviate from standardized protocols for post-transplant care.

Familiarity with Biomarkers for Rejection in Solid Organ Transplant

A biological marker, or biomarker, as specified in the BEST (Biomarkers, EndpointS, and other Tools) resource of the FDA-NIH Biomarker Working Group, is a well-defined and validated measurement of a normal or pathogenic biologic process or of a response to an exposure or intervention [10]. Biomarkers are further subcategorized by their application. Specific to solid organ transplant, a diagnostic biomarker confirms the presence of a disease state; as an example, rejection manifesting as allograft injury can be identified using donor-derived cell-free DNA (dd-cfDNA) testing platforms [11]. Results above the assay threshold may prompt additional testing, e.g., endomyocardial biopsy or echocardiogram, or a treatment decision, e.g., oral pulse steroids. The same test may then be used as a monitoring biomarker, with serial testing for return to normative values indicating resolution of allograft injury. Using the same example of rejection, the presence or absence of de novo donor-specific human leukocyte antigen (HLA) antibodies may be used as a predictive/prognostic or risk biomarker, as development of these antibodies after transplant markedly increases the risk of antibody-mediated rejection, accelerated development of CAV, and early death, even when no rejection or evidence of allograft injury has yet occurred [12]. As with all assays, a biomarker must be compared against a gold standard; sensitivity, specificity, and predictive values are best established in enriched cohorts with high event rates (testing for cause rather than screening), unless negative predictive value is the major goal. Biomarkers for rejection are poised to be interpreted in the context of the increasing use of biomarkers to assess for infection in transplant recipients.

Biomarkers for Net State of Immunosuppression in Solid Organ Transplant

Infections after orthotopic heart transplant are common and are associated with increased mortality and morbidity that detract from the recipient’s quality of life. Net state of immunosuppression is a collective term describing an individual transplant recipient’s risk for infection, rejection, and morbid drug side effects from immunosuppression, and the cumulative toll of these on overall quality and quantity of life after transplant. An important advance would be to identify and validate biomarkers that prognosticate a survival trajectory and provide monitoring for adjusting therapies and strategies to find the right balance between infection and rejection. Unfortunately, a single test to understand the net state of immunosuppression for a given patient remains elusive. Monitoring immunosuppressant trough levels alone is not sufficient to mitigate the risk of overimmunosuppression. Likewise, although leukopenia and lymphopenia are well recognized as important risk factors for post-transplant infections, many individuals who develop infections have neither leukopenia nor lymphopenia [4]. Accordingly, more refined tests to better understand the net state of immune activity remain an unmet need, with several promising tests in development.

Candidate biomarkers may be pathogen-specific or non-pathogen-specific; a biomarker may indicate active infection with a specific pathogen, e.g., CMV, or it may identify an overimmunosuppressed patient for whom minimizing immunosuppression may be sufficient to decrease bystander infection, e.g., low-level Epstein-Barr virus (EBV) copies. Non-pathogen-specific biomarkers measure components of the innate, adaptive, and complement immune responses, individually or aggregated into a biomarker risk score, to predict infection events and to inform the risk of not minimizing immunosuppression, especially in the context of objective measurements of rejection quiescence (with invasive biopsy or dd-cfDNA biomarker assays) [13].

In this review, we will discuss several pathogen-specific and non-pathogen-specific biomarkers that may be utilized in heart transplant recipients for clinical immunologic monitoring. Where appropriate, we will mention at which critical phase of transplant (pre-transplant, early post-transplant, or long-term) the biomarker(s) might be used most judiciously. We will propose a working algorithm to monitor heart transplant recipients, acknowledging that data on individual biomarkers remain sparse, largely single-center, and of modest evidence level.

Functional T Cell Activity Assay Using Adenosine Triphosphate (ATP) Quantification

The ImmuKnow® assay (Viracor-IBT, previously Cylex) is an in vitro functional assay developed to measure T cell activity in immunosuppressed patients and was approved for this purpose in 2002 by the US Food and Drug Administration [14, 15]. In conjunction with other clinical assessments, this assay can help guide the individualized tailoring of immunosuppression therapy, with the goal of achieving balance between preventing infection and avoiding rejection. Central to solid organ transplant (SOT) immunosuppression regimens is the use of lifelong calcineurin inhibitor (CNI) drug therapy for most patients. CNIs are potent suppressors of T lymphocyte activation, and while monitoring drug levels can assess for drug toxicity and compliance, drug levels cannot capture the therapeutic effect of suppressing T lymphocyte activation [16].

The ImmuKnow® assay relies on the processes of cell activation, cell selection, and measurement of intracellular ATP. ATP concentration has previously been correlated with cell count and proliferation and is an established biomarker of cellular activity [17]. In this assay, T lymphocytes in peripheral whole blood undergo phytohemagglutinin stimulation, and CD4+ cells are then selected using anti-CD4 monoclonal antibody-coated magnetic beads. The selected lymphocytes are mixed with a lysing reagent to release intracellular ATP, which is then tagged with a luminescent marker to allow quantification of ATP concentration with a luminometer [18]. ATP levels ≥ 525 ng/mL suggest a strong immune response; 226 to 524 ng/mL, a moderate immune response; and ≤ 225 ng/mL, a low immune response. These “tiers” were established in a cross-sectional multicenter study using a cohort of 155 healthy adults and 127 adult SOT recipients [14]. In that cohort, 92% of transplant recipients had ATP values < 525 ng/mL, whereas 94% of healthy adults had ATP values > 225 ng/mL. Importantly, heart transplant patients were not included in this index study, but the data were extrapolated to apply to SOT recipients in general.
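
As a minimal illustration of how these published tiers can be applied when trending results (this is not the commercial assay's own reporting logic), the short Python sketch below maps an ATP result in ng/mL onto the low/moderate/strong categories; the function name and example values are hypothetical.

```python
def immuknow_tier(atp_ng_per_ml: float) -> str:
    """Map an ImmuKnow ATP result (ng/mL) onto the published response tiers.

    Tiers from the index multicenter study: <= 225 low, 226-524 moderate,
    >= 525 strong immune response. Illustrative only; not a clinical rule.
    """
    if atp_ng_per_ml <= 225:
        return "low immune response"
    elif atp_ng_per_ml < 525:
        return "moderate immune response"
    return "strong immune response"


# Trending hypothetical serial results for one recipient
for atp in (180, 310, 560):
    print(atp, "ng/mL ->", immuknow_tier(atp))
```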

Numerous other studies have attempted to correlate ATP levels with clinical outcomes in heart transplant recipients, specifically infection, rejection, and immunologic quiescence. These studies have shown discordant results, possibly due in part to small sample sizes and low event rates, heterogeneous definitions of infection and rejection, and variable timing between assay collection and outcome of interest [19]. For instance, Gupta et al. found no correlation between ATP levels and subsequent risk of infection or rejection, while Israeli et al. concluded the opposite [20, 21].

Other studies have suggested an association between low ATP values and risk of infection, but there were insufficient rejection events to identify any association with rejection [22, 23]. In an early study, 296 heart transplant recipients underwent 864 assessments with the ImmuKnow® assay. Individuals who developed an infection within a month of assay measurement had lower average lymphocyte activation (187 versus 280 ng/mL, p < 0.001) [22]. A second study evaluating 156 heart transplant patients confirmed this observation, with actively infected patients having lower ATP levels than non-infected individuals (166 ± 143 versus 264 ± 180 ng/mL, p < 0.001). Notably, there was no difference in ATP levels between patients experiencing heart transplant rejection and non-infected comparators (273 ± 265 versus 264 ± 180 ng/mL, p = 0.9) [24]. Similarly, two meta-analyses including multiple SOT populations have led to conflicting conclusions. One analysis concluded that an ATP value of 25 ng/mL was associated with a 12-fold increase in risk of infection compared with higher ATP values, while the other concluded that ATP values were neither sensitive nor specific enough to predict individuals at risk of either infection or rejection [25, 26]. Notably, the former meta-analysis was limited by few events in the heart transplant cohort (2 episodes of infection, 3 episodes of rejection), while the latter included only one heart transplant study.

The ImmuKnow® assay has the potential to be a powerful tool for immunosuppression management in heart transplant recipients. However, there are currently not enough data to comment definitively on the association between ATP values and subsequent risk of infection or rejection, highlighting the need for prospective studies with larger populations [27]. It should be emphasized that the assay was developed to provide objective data on global cellular immunity, not necessarily to predict future risk of events. It has the additional advantage of detecting changes in lymphocyte activity over time in an individual, allowing immunosuppression therapy to be adjusted in a more personalized manner. Until more data become available, it may be best to consider the clinical utility of the ImmuKnow® assay for heart transplant recipients in this light: using ATP quantification as a monitoring biomarker may provide a platform to wean immunosuppression while observing for infections in individual patients. All phases of transplant may be monitored using this approach.

ELISpot (Enzyme-Linked Immunosorbent Spot Assay)

The enzyme-linked immunospot (ELISpot) assay, derived from a modification of the enzyme-linked immunosorbent assay, was first described in 1983 as a method to detect and quantify antibody-secreting cells. The ELISpot is a functional in vitro assay that detects antigen-specific reactive lymphocytes via a solid-phase immunoassay of “spot” formation representing localized antibody production [28]. This technique has since been adapted for applications in organ transplantation to detect cytokine secretion, primarily of interferon-γ (IFN-γ), by activated T lymphocytes that are either donor-specific or, if donor antigen is unavailable, reactive to a panel-reactive T cell assay. The resolution of the ELISpot assay is such that cytokine secretion by single cells can be detected, thereby allowing for quantification of cell frequency [29].

First studied in renal transplant patients, this assay was proposed as a tool to provide pre-transplant risk stratification of allograft rejection and transplant outcomes across organ transplant recipients. Similar to ImmuKnow®, the ELISpot assay quantifies the frequency of antigen-reactive, cytokine-secreting lymphocytes in the peripheral blood in an attempt to quantify the net state of immunosuppression in an individual [30].

Available studies on the clinical utility of the ELISpot assay in heart transplant recipients are limited but show promising results. In one study, van Besouw et al. studied the efficacy of the assay in detecting reactive T lymphocytes via both the direct and indirect antigen-presenting pathways [31]. The authors assessed the numbers of IFN-γ-secreting T lymphocytes before, during, and after periods of acute rejection (AR) among 13 heart transplant patients. They found that the frequency of cytokine-secreting T lymphocytes increased significantly during episodes of AR and decreased significantly after treatment of AR. Furthermore, the assay was sensitive in detecting alloreactive lymphocytes via both pathways, albeit more so via the direct pathway. Another study demonstrated that increased immune reactivity detected by the ELISpot assay may be associated with an increased incidence of CAV, although the absolute difference in lymphocyte frequencies between the CAV+ and CAV− cohorts was not significant [32]. In a more recent, albeit small, prospective study of heart transplant recipients, the magnitude of the ELISpot response against CMV stratified risk: low responders (< 50) had a higher risk of infection at higher levels of immunosuppression, while high responders (> 100) had a higher risk of rejection with less immunosuppression [33].

Future studies are needed to evaluate the role of the ELISpot assay among heart transplant recipients, particularly during the pre-transplant phase, to gauge whether the number of “primed” donor-specific reactive lymphocytes correlates with post-transplant outcomes. Mechanistically, it is plausible for this assay to serve as a risk biomarker for post-heart transplant allograft rejection; however, it is unclear whether a similar relationship would exist for infection risk. At present, ELISpot continues to be used in transplant clinical trials as an indicator of rejection, but there is no widespread use of this assay in clinical heart transplant medicine.

Plasma-Based Gene Expression Profiling (GEP) Score

The AlloMap® GEP peripheral blood test is the only non-invasive biomarker assay that is FDA-approved for clinical use to evaluate for rejection in heart transplant recipients > 55 days after transplant (Fig. 1). The AlloMap® test quantitates expression of 11 genes from peripheral blood mononuclear cells associated with immune activation and inflammation (specifically, components of the innate and adaptive immune pathways) [34, 35]. Interestingly, in a sub-analysis of the multicenter Outcomes AlloMap Registry (OAR), which included 1504 patients followed for a median of 382 days after heart transplant, 220 patients developed a total of 284 infection episodes (IEp) requiring hospitalization. In this cohort, the median GEP score obtained within 30 days of an IEp was 28 (IQR 22–33.5), below the rejection threshold of 34, and the median GEP score was 21 in patients with fungal infections (Fig. 2) [36]. These data suggest that a low GEP score might indicate a higher risk of more severe infection, although in this observational registry study, a low GEP score prior to infection might have reflected an early phase of infection rather than a measurement of overimmunosuppression.
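
As a concrete, hypothetical illustration of the interpretation described above (not a validated clinical rule and not part of AlloMap® reporting), the Python sketch below compares a GEP score with the rejection threshold of 34 and flags the low range in which infection episodes clustered in the OAR sub-analysis.

```python
def interpret_gep(score: float, rejection_threshold: float = 34) -> str:
    """Illustrative read of an AlloMap-style GEP score.

    The threshold of 34 and the observation that infection episodes clustered
    at lower scores (overall median 28, fungal infections median 21) come from
    the OAR sub-analysis; the categories here are hypothetical, not validated.
    """
    if score >= rejection_threshold:
        return "at/above rejection threshold: consider further rejection work-up"
    if score <= 21:
        return "very low score: range where severe (e.g., fungal) infections clustered in OAR"
    return "below rejection threshold: range where most OAR infection episodes occurred (median 28)"


print(interpret_gep(28))  # hypothetical example value
```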

Fig. 1

AlloMap genomic biomarker story. A Phases of development for the AlloMap gene expression profiling test using samples from the Cardiac Allograft Rejection Gene Expression Observational (CARGO) study. B Accepted phases of development for biomarker tests in general [35]. EMB, endomyocardial biopsy; PCR, polymerase chain reaction. All images reproduced with permission

Fig. 2

AlloMap scores by type of infection. Median and range of AlloMap gene expression profiling (GEP) scores by type of infection, for samples drawn within 30 days of the infection episode. The median GEP score in this study cohort was 28 (IQR 22–33.5). The dotted red line denotes the rejection threshold of 34 [36]. All images reproduced with permission

Quantitative Immunoglobulins and Lack of Response to Vaccination

Hypogammaglobulinemia prior to transplant is a known risk factor for the development of infections after transplant. In one meta-analysis of 1756 SOT recipients (lung, liver, heart, kidney), IgG < 400 mg/dL during the first year after transplant was associated with opportunistic and other infections and with a high risk for early mortality [37]. In clinical practice, IgG is assessed serially after transplant (and sometimes pre-transplant in severely debilitated patients or those with protein-losing enteropathy or prior chemotherapy); an IgG level < 400–600 mg/dL is often repleted with a one-time dose of 0.3 g/kg and then re-assessed. For patients with CMV mismatch (recipient negative, donor positive), CytoGam (CMV immune globulin) provides antibody protection against infection when given serially in the first year after transplant, although it is not routine to test for the presence of these antibodies as evidence of successful infusion (https://www.fda.gov/media/77671). Related to this is the lack of response to vaccines after heart transplantation as a measure of immune suppression of T and B cells. Lack of a titer response to COVID-19 vaccines raised awareness of this issue and, while beyond the scope of this review, indicates a net state of immunosuppression conferring high risk from a specific pathogen [38, 39].

Complement

Complement proteins, both soluble and cell-bound, are important components of the innate immune system: non-cellular entities capable of recognizing pathogens and leading to opsonization, activated in the setting of inflammation, e.g., ischemia–reperfusion injury, and activated in antibody-mediated rejection and cytotoxicity. Low complement levels are associated with increased risk of infection, and anti-complement therapies, e.g., eculizumab, are powerful inhibitors of acute and fulminant antibody-mediated rejection [40]. The integrity of the complement system may be assessed by measuring serum levels of C3 and C4, and these levels may therefore serve as biomarkers of infection risk. There are data in the SOT literature that reduced levels of complement proteins C3 and C4 within 1 month of transplant are associated with increased risk of infection within the first year, with threshold levels used mostly in a combined biomarker risk score (see below) [41].

Lymphocyte Subsets

Flow cytometry may quantify lymphocyte subsets, providing absolute numbers of CD4+, CD8+, and natural killer (NK) cells. While the primary function of CD8+ T cells is direct cytotoxicity against pathogens, CD4+ T helper cells activate other components of the immune response, including the innate pathway, macrophages, and B cells. Prolonged depletion of CD4+ cells, specifically below 200 cells/μL, is associated with opportunistic viral, fungal, and parasitic infections, e.g., herpesviruses and Pneumocystis jirovecii. While performing serial flow cytometry as a monitoring biomarker is not practical, it may be helpful in determining when to discontinue prophylactic anti-infective therapies for opportunistic infections, especially if prophylaxis is resumed in the context of augmented immunosuppression associated with a recent episode of severe rejection.

NK cells, also quantified by flow cytometry, may be used as a biomarker to assess the degree of immunosuppression associated with calcineurin inhibition. NK cells are considered part of the innate immune system, as their activation does not specifically require antigen presentation, and their primary role is to control viral infections. There are data to support that CNI drugs decrease NK cell function and that NK cell counts below threshold values are associated with increased herpesvirus and fungal infections in SOT recipients [42, 43].

Non-Pathogen-Specific Immunologic Risk Score

Significant work has been done to use biomarkers to develop an immunologic risk score to predict the risk of infection in heart transplant recipients. Sarmiento et al. performed a prospective study of 100 heart transplant recipients, with biomarker assessment at 1 week after transplant. In this cohort, 96% received induction therapy with IL-2 receptor antagonist infusions, followed by triple immunosuppressive therapy. All patients were assessed for the following biomarkers: quantitative immunoglobulins (IgG, IgA, IgM), complement proteins (C3, C4), and lymphocyte subsets (CD3+, CD4+, CD8+ T cells, B cells, NK cells). Serious infections within 3 months after transplant were recorded, defined as those requiring hospitalization and intravenous antimicrobial therapy. They found that 33% developed a serious infection, with significantly associated risk biomarkers of IgG < 600 mg/dL (HR 2.41), C3 < 80 mg/dL (HR 4.65), C4 < 18 mg/dL (HR 2.3), NK count < 30 cells/μL (HR 4.07), and CD4+ cell count < 350 cells/μL (HR 3.04). An immunological risk score was generated based on weights derived from the individual HRs, and in multivariable regression analysis, a score > 13 identified heart transplant recipients at the highest risk for infection (HR 9.29, p < 0.0001) (Fig. 3) [44•]. This was an important study, as less formal algorithms based on these and other biomarkers are already applied clinically to determine whether to minimize immunosuppression, or how quickly to taper it, with some patients tapered rapidly to single-drug therapy, e.g., a CNI alone.
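
The exact point weights of the published score belong to the original model [44•]; purely as a sketch of how such a weighted, threshold-based score might be assembled, the hypothetical Python helper below lets each criterion that is met contribute its reported hazard ratio and applies the > 13 high-risk cut-point.

```python
# Hypothetical sketch of a Sarmiento-style immunologic risk score.
# The published score derives its weights from the hazard ratios; the exact
# weighting is in [44•]. Here, for illustration only, each criterion met
# contributes its reported HR, and a total > 13 is flagged as high risk.

CRITERIA = [
    # (label, threshold check, reported hazard ratio)
    ("IgG < 600 mg/dL",      lambda p: p["igg_mg_dl"] < 600,     2.41),
    ("C3 < 80 mg/dL",        lambda p: p["c3_mg_dl"] < 80,       4.65),
    ("C4 < 18 mg/dL",        lambda p: p["c4_mg_dl"] < 18,       2.30),
    ("NK < 30 cells/uL",     lambda p: p["nk_cells_ul"] < 30,    4.07),
    ("CD4 < 350 cells/uL",   lambda p: p["cd4_cells_ul"] < 350,  3.04),
]

def infection_risk_score(panel: dict) -> tuple:
    """Return (illustrative score, high-risk flag) for a 1-week post-transplant panel."""
    score = sum(hr for _, check, hr in CRITERIA if check(panel))
    return score, score > 13  # > 13 was the high-risk cut-point in the study


# Hypothetical patient panel drawn 1 week after transplant
panel = {"igg_mg_dl": 520, "c3_mg_dl": 70, "c4_mg_dl": 15,
         "nk_cells_ul": 25, "cd4_cells_ul": 400}
print(infection_risk_score(panel))  # -> score ~13.43, high-risk flag True
```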

Fig. 3

Multivariable immunological risk score for infection after transplant. Proposed application of a 5-variable immunological risk score to guide duration of antimicrobial prophylaxis after transplant. A score > 13 predicted heart transplant recipients to be at highest risk for infection (HR 9.29, p < 0.0001) [44•]. C3, complement factor 3; NK, natural killer cells; CD4, T cell subset lymphocytes; IgG, immunoglobulin G; C4, complement factor 4; CMV, cytomegalovirus. All images reproduced with permission

Cell-Free DNA (cf-DNA) as a Non-Pathogen-Specific or Pathogen-Specific Assay

In the post-transplant setting, identifying the causative organism of an infectious syndrome is often limited by the sensitivity of culture-based diagnostics. Furthermore, many opportunistic pathogens to which this population is vulnerable cannot be easily isolated through standard culture methods. These challenges may be compounded by clinical factors that preclude invasive diagnostics, such as biopsy or bronchoalveolar lavage, or that necessitate an urgent diagnosis. The ability to isolate and sequence circulating pathogen cf-DNA from plasma for unbiased, metagenomic next-generation sequencing has improved our capacity to identify the cause of infections in transplant patients. By testing for trace quantities of viral, bacterial, fungal, and protozoal DNA in plasma, this approach circumvents many of the aforementioned diagnostic challenges.

As of this writing, a single company, Karius®, offers cf-DNA testing for infections, performed in its laboratory facilities in Redwood City, California. Once plasma is collected and sent to the core commercial lab, the initial steps involve creation of a next-generation sequencing library from circulating microbial DNA through a proprietary preparation. DNA fragments are amplified, undergo sequencing, and are compared against a library of over 1000 pathogens. A report is then generated with all detected microbes that meet a pre-defined quantitative threshold. More than 40 published case reports and case series highlight the utility of the test in identifying opportunistic pathogens. In a retrospective study of patients with hematologic malignancy and allogeneic stem cell transplant recipients with neutropenic fever, results led to escalation or narrowing of antimicrobials in 59% of cases [45]. While no prospective studies specific to SOT recipients have been published to date, case series in SOT, published data in other immunocompromised cohorts, and our institutional experience to date support a role for this test in the SOT population [46].

Presently, challenges to wider adoption of this testing include the high cost per test. Despite the upfront expense, the test offers an opportunity to offset the cost of broad infectious workups and empiric antimicrobial therapy. The turnaround time can be as short as 48 h, which may reduce the length of hospital stay for patients awaiting diagnostic procedures. In a published cost-comparison model by Karius® for invasive fungal infections, savings were realized through a reduced requirement for bronchoscopy, a projected reduced duration of hospitalization, and an expected reduction in adverse events during the hospital course [47]. It remains to be determined at what stage of an evaluation for occult infection cf-DNA pathogen testing should be deployed to optimize any cost benefit. Beyond the potential benefit of cost reduction, more studies are required to determine whether this testing contributes to improved clinical outcomes, particularly in those with diagnoses unlikely to be reached through traditional diagnostics.

Another chief criticism arises from the indiscriminate sequencing performed by this test and the potential to identify non-pathogenic, commensal organisms. Results may capture anything from normal skin flora to low levels of CMV and EBV. Though in most instances the causative pathogen will be easily identified among a list of commonly isolated viruses and bacteria, in some cases clinicians will need to make treatment decisions when faced with a report whose clinical significance is uncertain. Similarly, when faced with a negative report, cessation of antimicrobials will still be guided by the clinical circumstances of the case. While a single-center study of 100 pediatric patients, including immunocompromised patients, has reported on the sensitivity of this test, further evidence is needed to understand its negative predictive value, particularly when infections are localized [48]. Despite these limitations, microbial cf-DNA testing offers a new tool to diagnose infections, circumventing many of the challenges presented by traditional, culture-based diagnostics.

Torque Teno Virus as a Non-Pathogen-Specific Assay

Torque teno virus (TTV) is a single-stranded DNA virus within the Anelloviridae family and accounts for a large proportion of the human virome. Anelloviral infections are acquired early in life regardless of gender and socio-economic factors. The most likely transmission routes include fecal-oral, transplacental, respiratory, and transfusion, although others may also be feasible [49]. One remarkable feature is that individuals co-infected with multiple TTVs can be acutely infected by yet another one [50]. The virus has an extraordinary ability to establish chronic infection in the majority, if not all, of exposed individuals [51]. Epidemiological surveys employing molecular techniques show that, at any given time, two-thirds of the general population carry TTV in their plasma [52]. The prevalence of TTV infection increases to 99% in SOT recipients [53]. Despite how ubiquitous this virus is, there is no clear association between infection and any clinical manifestations. Chronic infection is diagnosed by quantitative real-time polymerase chain reaction assays [49]. Other characteristics are worth pointing out: TTV is unaffected by conventional antivirals used in SOT recipients, and the amount of virus identified by molecular techniques is proportional to the net state of immunosuppression [49, 54]. Collectively, the epidemiology and virology of TTV infection in SOT recipients facilitate its application as an immune biomarker [55].

Post-transplantation, after immunosuppression is introduced, TTV viral load kinetics follow a similar pattern across the different SOT populations. The viral load rises rapidly in the first few weeks post-transplant, peaks between 3 and 6 months post-transplant, and then enters a steady state [56,57,58,59,60,61,62]. There are now retrospective and prospective studies, in both adult and pediatric populations and across transplant types, that have found an association between increasing or high TTV viral loads and the risk of infectious complications [57, 58, 61, 62]. Similarly, there are also studies describing an association between a low viral load at various time points and a risk of rejection [56,57,58,59,60, 62]. Taken together, these studies provide proof of concept that the TTV viral load could be used as an immune biomarker to guide the level of immunosuppression required to minimize the risk of both infection and rejection. Before implementing TTV viral load in routine clinical practice, well-performed clinical trials are required to compare the efficacy of this approach with the current standard of care. The overarching goal would be to minimize risks and improve post-transplant patient survival [63].

Conclusions

Biomarker approaches for precision in managing heart transplant patients over the transplant lifespan are a promising area of medicine. Numerous pathogen-specific and non-pathogen-specific tools, when used serially and in combination, may yield gains in minimizing immunosuppression to avoid serious infections without unduly increasing the risk of rejection. Several studies are ongoing to further investigate optimal algorithms, and toward this end, we propose one algorithm encompassing the biomarkers reviewed above (Central Illustration, Fig. 4).

Fig. 4

Proposed algorithm for combined biomarker testing at all phases of transplant. Suggested algorithm to incorporate candidate biomarkers at key time points throughout all phases of transplant. Rejection surveillance should also be performed routinely