Induction and Maintenance Agents
Immunosuppression has increased the success of human heart transplantation. Specific patient groups benefit from induction therapy, though induction has not shown universal benefit for all patients. Currently, heart transplant centers employ dual or triple immunosuppression regimens built on a calcineurin inhibitor, with an antiproliferative agent or a PSI/mTOR inhibitor as the additional agent, with or without corticosteroids. Tolerability can limit the use of an agent despite its therapeutic benefits, which may extend beyond immunosuppression to renal-sparing effects, viral-sparing effects, or benefits against allograft vasculopathy. The range of available agents makes it possible to individualize immunosuppression to a specific patient's needs.
Keywords: Maintenance immunosuppression · Heart transplant induction · PSI/mTOR antiviral properties · PSI/mTOR CAV · C2 monitoring · Calcineurin inhibitors · Antiproliferative agents
The first heart transplant in 1967 had very limited immunosuppression choices, and with no prior experience in human heart transplantation, the use of the available immunosuppressants was itself part of the experiment. 6-Mercaptopurine and steroids were the extent of immunosuppressive medications, with presumed rejection treated with increased doses of steroids.
Table 1 Kaplan-Meier 1-year patient survival rates for transplants performed between 1980 and 2015, shown by era of cyclosporine and year of transplant, with number of transplants and 95% confidence intervals
Immunosuppression is used in three ways: induction, maintenance, and treatment of rejection. There is no uniform approach; different induction and maintenance protocols are utilized at different transplant centers, and the use of induction therapy has varied with the changing availability of agents over time. To avoid rejection, maintenance immunosuppression must be taken lifelong after heart transplantation. The first two uses of immunosuppression are discussed in this chapter, while the treatment of rejection is discussed elsewhere.
Most heart transplant centers utilize some form of augmented immunosuppression at the time of transplant, either steroids alone or antibody products, generally for a short period limited to the time surrounding the transplant surgery. Maintenance immunosuppression may be started at the time of transplant, but induction therapy provides adequate immunosuppression while maintenance agents reach therapeutic levels, and it can also allow delayed initiation of some components of the maintenance regimen. While some transplant centers implement a universal induction protocol, others utilize induction selectively based on clinical considerations, e.g., high immunologic risk or renal dysfunction.
Induction therapy choices have been limited by market availability. Two IL-2 receptor antagonists were historically available; however, daclizumab was removed from the market, leaving basiliximab as the only IL-2 receptor antagonist. Basiliximab has been compared both to no induction and to rabbit antithymocyte globulin (rATG), with biopsy-proven acute rejection (BPAR) as the primary outcome. A confounding issue is that the grade of BPAR varied among studies (i.e., any BPAR, grade ≥ 1B, or grade 3 or 4), and follow-up ranged from months to a year posttransplant. A multicenter study of 56 heart transplant recipients randomized to basiliximab versus matched placebo, administered on postoperative days 0 and 4 with initiation of maintenance immunosuppression matched between study arms, showed no statistically significant difference in time to grade 3A or greater rejection between the groups (Mehra et al. 2005). Another study comparing basiliximab induction to immediate cyclosporine initiation without induction showed a numerical but not statistically significant difference in rejection rates (Rosenberg et al. 2005). Neither study demonstrated a statistically significant difference in BPAR between patients treated with basiliximab and those randomized to no basiliximab. Despite this, a large number of patients still receive basiliximab induction at heart transplant centers worldwide.
Antithymocyte globulin products have been used nearly since the beginning of transplantation; however, only more recently have these products been standardized and made available commercially. They were initially produced by universities that made a horse-derived ATG product for their own use and occasionally made it available to other centers as part of a clinical trial. There is one rabbit-derived ATG (rATG) product licensed in the USA and an additional rATG product available in Europe. Although rATG is not approved for induction therapy in the USA, it has gained favor as an induction agent. A few studies have examined rATG versus basiliximab for induction. In a non-inferiority study, a group in Canada examined 35 patients and found that the two agents were not equivalent: grade 3A or 4 rejection was seen less often in the rATG group than with basiliximab induction (Carrier et al. 2007). A retrospective evaluation of the safety and efficacy of rATG or basiliximab as standard induction in 48 heart transplant recipients showed more infections in the rATG group; however, the average biopsy score, equating to less severe rejection, was lower at all time points over the first year with rATG, and there were more episodes of grade 3A or higher rejection in the first 6 months with basiliximab (Flaman et al. 2006). Safety concerns have accompanied the more widespread use of rATG as an induction agent: it causes more fever, leukopenia, and thrombocytopenia, which are known and transient side effects whether it is used for rejection or induction, but there was no increase in the rate of serum sickness, rash, or anaphylaxis with rATG compared to basiliximab (Mattei et al. 2007).
Concern for posttransplant lymphoproliferative disease (PTLD) has been associated with monoclonal antibodies from the start of their use in transplantation. With the current induction regimens (basiliximab, a monoclonal antibody, and rATG, a polyclonal antibody), the lower cumulative doses of rATG now used, and the near-standard use of antiviral prophylaxis after transplant, the incidence of PTLD seen with rATG induction is less than 1% per year (Hertig and Zuckermann 2015; Marks et al. 2011).
Heart transplant centers that do not use induction therapy as a standard practice frequently employ it for specific patient populations. Induction therapy has been employed in patients who are at risk for renal dysfunction and for those patients who are at an increased risk for rejection.
Induction therapy has been employed to delay the initiation of calcineurin inhibitors and avoid further renal insult in patients with renal dysfunction pre-transplant or at potential risk of renal dysfunction during the perioperative and postoperative periods. Basiliximab has been compared to rATG (Delgado et al. 2005) with initiation of calcineurin inhibitors delayed by 3–7 days. Renal function improved after transplant in both groups, with no differences in renal function at 1 week, 1 month, or 6 months post-heart transplant; however, there were fewer episodes of rejection in the rATG group, compared to basiliximab, over the first 6 months. In a population that developed renal dysfunction after heart transplant, Cantarovich et al. (2004) treated patients with rATG after heart transplantation and delayed the start of cyclosporine until renal function improved. Antithymocyte globulin was administered every 2–5 days, with dose frequency determined by keeping the lymphocyte count below 200/mm3; cyclosporine was introduced when the creatinine fell below 150 μmol/L, which occurred around posttransplant day 12 in the renal dysfunction group versus day 2 in the nonrenal dysfunction group. Patient survival was similar through 1 year, ejection fraction was not different between groups, and there was no difference in rejection rates over the first year after transplant in patients who had delayed initiation of cyclosporine.
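The Cantarovich protocol amounts to two simple thresholds, one governing rATG redosing and one governing when to introduce cyclosporine. A minimal sketch in Python, for illustration only (the function names are hypothetical, not from any clinical software, and this is not a dosing tool):

```python
def atg_redose_due(lymphocyte_count_per_mm3: float) -> bool:
    """rATG was given every 2-5 days, with frequency chosen to keep the
    absolute lymphocyte count below 200/mm3; redose once it reaches 200."""
    return lymphocyte_count_per_mm3 >= 200


def start_cyclosporine(serum_creatinine_umol_per_l: float) -> bool:
    """Cyclosporine was introduced once serum creatinine fell below 150 umol/L."""
    return serum_creatinine_umol_per_l < 150
```

The thresholds encode the paper's reported practice: lymphocyte depletion stands in for calcineurin inhibition until the kidney recovers enough to tolerate cyclosporine.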
Patients identified to be at increased risk of rejection have been treated with induction therapy in an effort to decrease the rate of hyperacute rejection and early acute rejection. Antibody sensitization is a growing and continued problem in heart transplantation, with an increasing number of patients presenting for transplant with a ventricular assist device, which has been associated with the formation and persistence of antibodies that can lead to antibody-mediated rejection (AMR) after heart transplantation. For patients at higher risk for AMR (patients with circulating anti-HLA antibodies, multiparous women, patients who have received blood transfusions, or patients supported pre-transplant with a left ventricular assist device) or who have undergone a strategy to decrease the antibody load (see other chapters), rATG has been recommended as induction therapy (Kobashigawa et al. 2009; Aliabadi et al. 2013).
Alemtuzumab, a monoclonal antibody against CD-52, is currently available through a company-supported distribution program to transplant programs with established induction protocols in place. There have been limited data regarding the use of alemtuzumab as induction in heart transplant patients. Compared to a no-induction group, alemtuzumab allowed lower levels of maintenance immunosuppression and less significant rejection over the first year (Teuteberg et al. 2009). However, there have been no additional studies of alemtuzumab as an induction therapy, and given its limited availability, it is not a recommended agent for this use.
Around half of patients undergoing heart transplantation currently receive induction therapy; however, the current literature indicates neither a definitive benefit nor a detriment to using induction therapy across an entire population. There does appear to be a clear benefit of induction therapy in specific populations undergoing heart transplantation; it should be used in these patients and studied further in other patient populations.
Maintenance immunosuppression choices were limited in the first years of heart transplantation; programs depended on steroids and azathioprine. As the choices expanded, survival improved; the introduction of cyclosporine in 1983 changed the success trajectory of heart transplantation (Table 1). Maintenance regimens generally consist of a combination of a calcineurin inhibitor, an antiproliferative agent, and a steroid. Contemporary heart transplantation has variations on this regimen: a proliferation signal inhibitor can be substituted for one of the above agents, steroids are not part of many long-term maintenance regimens, and some centers use monotherapy early after transplant.
Calcineurin inhibitors (CNIs) were added to the heart transplant armamentarium in the early 1980s. Cyclosporine was the first such advance in heart transplantation, used as an investigational agent at a couple of centers as early as 1981, and became commercially available in 1983.
The CNIs inhibit T-lymphocyte activation by inhibiting the transcription of IL-2 and other cytokines. The available CNIs have different binding proteins, different pharmacokinetic and pharmacodynamic properties, and different side effect profiles.
The initial use of cyclosporine was promising but limited by renal dysfunction. Despite the renal dysfunction, cyclosporine was incorporated into heart transplant protocols in place of the then-standard of care: conventional maintenance immunosuppression before the early 1980s consisted of azathioprine and prednisone. Early experience revealed that cyclosporine, compared to standard azathioprine/prednisone treatment, was effective in preventing rejection and improving 1-year survival, though it masked the clinical signs of rejection (thereby making endomyocardial biopsy more telling of rejection episodes). With additional experience after approval of cyclosporine, it was found that lower doses of prednisone could be used and that lymphomas could be avoided with lower amounts of induction therapy and lower target levels of cyclosporine. Although cyclosporine (Sandimmune®) was first on the market, the majority of patients on a cyclosporine product now take the cyclosporine microemulsion (Neoral®), which increased absorption and decreased intrapatient variability in trough levels.
The CNIs are dosed based on blood levels. While tacrolimus trough levels correlate well with AUC, cyclosporine has more intrapatient pharmacokinetic variability, and 2-h blood concentration monitoring (C2 monitoring) has been found to be a better indicator of cyclosporine AUC than trough level monitoring. Initial C2 levels in the first 3 to 6 months after transplant should be 1000 ng/mL, and after 6 months, the C2 level can be decreased to 200–400 ng/mL (Cantarovich et al. 1998; Levy et al. 2002).
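The C2 targets cited above reduce to a simple lookup by time post-transplant. A schematic sketch in Python, assuming a 6-month cutoff for illustration; this merely restates the chapter's numbers and is not a validated dosing tool:

```python
def c2_target(months_post_transplant: float) -> str:
    """Cyclosporine 2-h post-dose (C2) targets as described in the text
    (Cantarovich et al. 1998; Levy et al. 2002)."""
    if months_post_transplant <= 6:
        return "~1000 ng/mL"   # early posttransplant target
    return "200-400 ng/mL"     # later maintenance target
```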
One analysis showed that in patients far out from transplant, there was no overt benefit of changing from trough level (C0) to C2 monitoring: rejection rates, blood pressure, and kidney function did not improve, but patients were able to take less cyclosporine after conversion to C2 monitoring (Hermann et al. 2011). C2 monitoring has been associated with lower rejection rates early after heart transplantation when the level exceeds the goal C2 level, even if the C0 level is below goal. Severe rejection has been shown to occur more often under C0 monitoring than C2 monitoring, and in the same study, kidney function was better in the C2 group than the C0 group (Barnard et al. 2006). The narrow window of time in which to draw the C2 level makes this method somewhat burdensome, though it is likely the better method for monitoring cyclosporine.
Tacrolimus was introduced in the late 1990s and compared to cyclosporine in the initial trials. Despite the clear superiority of tacrolimus in kidney and liver transplant patients, there was initially no rejection or survival benefit of tacrolimus over cyclosporine in heart transplantation. Two early studies comparing tacrolimus to cyclosporine (each in combination with azathioprine and prednisone) showed comparable rejection rates and similar outcomes with regard to severity of rejection (Grimm et al. 2006; Taylor et al. 1999). One analysis showed a significant difference in rejection rates favoring tacrolimus when the biopsy samples were graded centrally rather than locally (Grimm et al. 2006). One of these studies also showed similar numbers of episodes of recurrent rejection, treated rejections, and patients dropping out of the study due to intolerance of the study drug (Taylor et al. 1999).
Tacrolimus-treated patients did have significantly lower blood pressures and less often required antihypertensive medications at all study time points through the end of the 1-year study period than cyclosporine-treated patients. New-onset hypertension was seen more often in cyclosporine-treated patients. Cholesterol levels were significantly higher in cyclosporine-treated patients at all time points during the studies. New-onset diabetes after transplantation (NODAT) was seen at similar rates in cyclosporine- and tacrolimus-treated patients in one study, while the other study showed higher rates of NODAT with tacrolimus (Grimm et al. 2006; Taylor et al. 1999). With wider use of tacrolimus after its introduction, numerous studies comparing it to the microemulsion formulation of cyclosporine have shown fewer rejection episodes, longer survival, less recurrent rejection, and less severe rejection with tacrolimus, while still noting a higher incidence of NODAT with tacrolimus than with cyclosporine microemulsion.
Steroids have long been a part of every aspect of heart transplantation. As more maintenance agents came to market and were shown to be better at preventing rejection, the cosmetic and metabolic effects of steroids created a desire for steroid minimization or steroid withdrawal protocols. Under cyclosporine-based regimens, steroid avoidance was attempted in numerous studies that showed comparable survival and similar rejection rates. With the introduction of tacrolimus as a replacement for cyclosporine, the general practice now is steroid avoidance or withdrawal when one of the dual immunosuppression agents is tacrolimus. Both early and late steroid weaning strategies have shown advantages and disadvantages depending on the patient population and concurrent immunosuppression, so recommending one strategy over the other is not possible. In one analysis of patients in whom steroid withdrawal was attempted 6 months after transplant, 92% of patients were successfully weaned off steroids; those patients had higher 5-year survival and higher freedom from nonfatal major adverse cardiac events (Kittleson et al. 2013). Patients who should not be considered for automatic steroid withdrawal include those with a history of rejection (either ACR or AMR), the presence of donor-specific antibody, drug levels suggestive of nonadherence, re-transplantation, or a heart transplant performed for sarcoidosis or giant cell myocarditis.
Even dual immunosuppression has been considered too much by some centers. In the TICTAC study, patients were randomized to tacrolimus monotherapy versus dual therapy with tacrolimus/mycophenolate after a short course of prednisone. Rejection scores at 6 and 12 months were not different between the two groups, showing that monotherapy did not predispose to worse outcomes or more rejection; in fact, there was no rejection after 210 days posttransplant, and the only two episodes of antibody-mediated rejection occurred in the first 3 months after transplant (Baran et al. 2011). Not many centers have adopted tacrolimus monotherapy as a result of this study, but the immunosuppressive properties of tacrolimus are what made this study's success possible.
Currently, calcineurin inhibitors, specifically tacrolimus, remain the most common agents used for immunosuppression after heart transplant. While the majority of programs employ dual or triple immunosuppression, the agent used in addition to the calcineurin inhibitor is less standardized.
The first heart transplants had a simple immunosuppression approach; for lack of other immunosuppressants, 6-mercaptopurine, which was indicated for leukemia, was utilized, as it was another year before azathioprine was approved by the FDA. Azathioprine, a derivative of 6-mercaptopurine, can be incorporated into replicating DNA and blocks purine synthesis in the de novo pathway.
Despite years of experience with azathioprine, acceptance of a new drug was quickly widespread. Mycophenolate mofetil was approved in 1996 for use in combination with cyclosporine and steroids. Mycophenolate mofetil is a prodrug of mycophenolic acid (MPA), an inhibitor of inosine monophosphate dehydrogenase, the rate-limiting enzyme of the de novo purine synthesis pathway on which lymphocytes depend. Patients taking mycophenolate have lower circulating lymphocyte counts than patients taking azathioprine.
The adverse effects of azathioprine, e.g., skin cancer and hepatotoxicity, made acceptance of an agent without these effects relatively easy. Mycophenolate was found, in early studies, to have gastrointestinal effects and bone marrow toxicity, but these could be managed with dose decreases. The initial mycophenolate study in heart transplantation, in combination with cyclosporine, showed a reduction in the incidence of acute rejection at 6 months and a decrease in mortality at 12 months in mycophenolate- compared to azathioprine-treated patients (Kobashigawa et al. 1998). Mycophenolate 3 g daily was superior to azathioprine with regard to the number of patients treated for rejection, severity of rejection, rejection with hemodynamic compromise, and death. The dosing of mycophenolate varies somewhat across programs. Though mycophenolate was approved at 3 g daily in heart transplantation, it was approved at 2 g daily in kidney transplantation because the pivotal kidney trial showed no difference in rejection rates between doses, and 2 g daily was better tolerated than 3 g daily (Sollinger 1995). That comparison has not been done in heart transplantation. Pharmacokinetic studies of mycophenolate in combination with cyclosporine or tacrolimus show higher MPA exposure in tacrolimus-treated patients at a constant mycophenolate dose: lower doses of mycophenolate mofetil are required to achieve a similar MPA concentration when used with tacrolimus versus cyclosporine (van Gelder et al. 2001). Cyclosporine inhibits enterohepatic recirculation of MPA glucuronide and therefore causes a lower MPA area under the curve than in tacrolimus-co-treated patients or any patient on mycophenolate not in combination with cyclosporine.
Trough level monitoring of mycophenolate does not correlate well with area under the curve measurements; if therapeutic drug monitoring is to be employed for mycophenolate, it should be AUC measurement rather than trough level monitoring (Jeong and Kaplan 2007). Since full 12-h AUC measurements are not possible on a routine basis, a mini-AUC measurement over 4 h has been used (Van Gelder et al. 2006). Despite the literature on AUC measurement of mycophenolate, there is no literature showing that universal implementation of mycophenolate AUC monitoring has benefited heart transplant recipients.
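A mini-AUC from a limited sampling window is typically computed by applying the trapezoidal rule to the measured concentration-time points. A generic sketch of that computation, assuming samples at, e.g., 0, 1, 2, and 4 h (this illustrates the arithmetic only, not the specific limited-sampling formula of Van Gelder et al. 2006):

```python
def trapezoidal_auc(times_h, concentrations_mg_per_l):
    """Approximate the area under the concentration-time curve by summing
    trapezoids between successive sampling points."""
    auc = 0.0
    for i in range(1, len(times_h)):
        dt = times_h[i] - times_h[i - 1]
        auc += dt * (concentrations_mg_per_l[i] + concentrations_mg_per_l[i - 1]) / 2.0
    return auc  # units: mg*h/L
```

For example, samples of 2, 10, 6, and 3 mg/L at 0, 1, 2, and 4 h give a 0-4 h mini-AUC of 23 mg*h/L.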
Mycophenolate mofetil was quickly accepted in heart transplantation in place of azathioprine. Its initial use as a renal sparing agent or cardiac allograft vasculopathy (Kaczmarek et al. 2006) treatment has mostly been supplanted by proliferation signal inhibitors, but it is still an important part of triple or dual immunosuppression agents, especially early posttransplant.
Proliferation Signal Inhibitors/mTOR
Triple immunosuppression, or dual immunosuppression with prednisone removed, has proven effective for decreasing the incidence of cellular rejection after heart transplantation. There has been a desire for an agent that could afford longer survival, have fewer toxicities, and address the adverse effects of other maintenance immunosuppressants. In 1999, sirolimus was approved by the FDA for prevention of rejection in kidney transplantation.
The two currently available PSI/mTOR inhibitors, sirolimus and everolimus, block the mammalian target of rapamycin, which inhibits T-cell proliferation. These agents can also inhibit B-cell differentiation and proliferation. Both agents are approved only for kidney transplantation in the USA, though everolimus is approved for use in heart transplantation in most of the European Union.
When sirolimus was approved, the noted lack of nephrotoxicity in the labeling and in kidney transplant clinical trials made it an attractive agent for patients with calcineurin inhibitor-associated nephrotoxicity. However, sirolimus was found not to be effective at reversing existing nephrotoxicity. Its place in maintenance therapy was not immediately clear, with some centers using it in place of calcineurin inhibitors, while others used PSI/mTOR inhibitors in conjunction with calcineurin inhibitors in place of the antiproliferative agent. Despite the desire for an agent to use in nephrotoxicity, sirolimus was found to cause proteinuria and to delay renal recovery in acute kidney injury.
The PSI/mTOR inhibitors delay wound healing, so they have not been used immediately after transplant. Another effect that limits their tolerability is the development of interstitial pneumonitis. This has been seen with both sirolimus and everolimus; it is usually reversible when the agent is removed and does not always reappear on changing to the alternate PSI/mTOR inhibitor (Bouvier et al. 2009; Molas-Ferrer et al. 2013).
Sirolimus and, more recently, everolimus have been found to be advantageous in preventing/treating cardiac allograft vasculopathy (CAV), to have benefit in certain cancers, and to have an association with lower incidence of CMV when compared to other immunosuppression regimens.
The PSI/mTOR inhibitors have been shown to be effective at decreasing CMV after heart transplantation. Viral entry into macrophages does not involve mTORC1, so PSI/mTOR inhibitors have no beneficial effect in early CMV infection. Late CMV replication, however, depends on mTORC1 for production of the late-phase proteins pp65 and UL-44 (Brennan et al. 2013), so PSI/mTOR inhibitors can prevent CMV replication, which is beneficial in difficult-to-treat CMV infections and equates to lower rates of CMV infection in patients started early on PSI/mTOR inhibitors.
PSI/mTOR inhibitors have been found to be advantageous in slowing the progression of CAV. Their antiproliferative effects may play a part in slowing the progression of, or preventing, CAV by preventing neointimal thickening. PSI/mTOR inhibitors have also been shown to decrease homocysteine levels, reducing the possibility of hyperhomocysteinemia driving CAV development. Studies comparing mycophenolate with everolimus showed a higher incidence of early CAV at 1 year in patients treated with mycophenolate, with a significant difference in mean intimal thickening in the CAV patients; there was no difference in CAV progression between mycophenolate and everolimus when everolimus was added late after heart transplantation (Masetti et al. 2013). A smaller increase in mean intimal thickening or plaque volume was found in patients on a PSI/mTOR inhibitor versus a calcineurin inhibitor; this finding appears limited to early inclusion (whether de novo or by conversion) of the PSI/mTOR agent in the maintenance immunosuppression regimen (Masetti et al. 2013).
PSI/mTOR inhibitors were initially investigated as anticancer drugs before being approved for prophylaxis of organ rejection. They have anticancer effects via the PI3K/Akt pathway, decreasing tumor cell proliferation while also blocking angiogenesis through inhibition of VEGF production (Guba et al. 2004). The most robust posttransplant data with PSI/mTOR inhibitors are in kidney transplant patients with skin cancer, and everolimus has been approved for cancer indications under a separate brand name and with different dosing, demonstrating that everolimus clearly has anticancer properties.
In two studies of kidney transplant patients with no previous skin cancer randomized to sirolimus or continuation of prior immunosuppression, there was a statistically lower rate of non-melanoma skin cancer (NMSC) in the sirolimus groups versus continued immunosuppression and a statistically significant decrease in melanoma in the sirolimus group. However, one-quarter to one-third of sirolimus-treated patients needed to discontinue sirolimus during the study period (Salgo et al. 2010; Alberu et al. 2011). In two studies of kidney transplant patients with prior NMSC, there was a significant decrease in new NMSC in patients converted to sirolimus, with a statistically longer time to development of new NMSC in the sirolimus patients; however, one-quarter of patients could not tolerate sirolimus (Campbell et al. 2012; Euvrard et al. 2012).
More relevant here, there is one analysis of heart transplant patients: patients who had previous NMSC and were converted from azathioprine to everolimus. At a little over 2 years, the mean number of NMSC was significantly lower compared with a similar time period before conversion, with some patients developing no new squamous cell carcinomas; however, half of the patients had to discontinue everolimus (Euvrard et al. 2010).
The literature is convincing that PSI/mTOR inhibitors have an effect on reducing malignancies; the best time to introduce them and the best dose with which to treat cancers remain subjects for future research.
The introduction of new immunosuppression agents was key to allowing heart transplantation to succeed. There is still no uniform agreement about the optimal immunosuppression protocol, but the available options allow for individualized immunosuppression, such as including a PSI/mTOR inhibitor for its antiviral properties in a patient with recurrent CMV infections or for its anticancer properties in a patient with a history of skin cancers. The need for a better immunosuppression regimen is evident when looking at 1-year and 5-year survival rates: is there a regimen that will get more people to 5 years after heart transplantation? Additional agents under investigation (see separate chapter) will fit into contemporary heart transplantation immunosuppression protocols, but where, and replacing what, remain to be determined.
- Cantarovich M, Besner HG, Barkun JS et al (1998) Two-hour cyclosporine level determination is the appropriate tool to monitor Neoral therapy. Clin Transpl 12(3):243–249
- Salgo R, Grossman J, Schofer H et al (2010) Switch to a sirolimus-based immunosuppression in long-term renal transplant recipients: reduced rate of (pre-)malignancies and nonmelanoma skin cancer in a prospective, randomized, assessor-blinded, controlled clinical trial. Am J Transplant 10:1385–1393
- van Gelder T, Klupp J, Barten MJ et al (2001) Comparison of the effects of tacrolimus and cyclosporine on the pharmacokinetics of mycophenolic acid. Ther Drug Monit 23(2):119–128