Background

Iron deficiency anemia: prevalence and etiology

Iron deficiency (ID), with or without anemia (iron deficiency anemia or IDA), represents a major global health problem affecting more than 2 billion people worldwide [1, 2], mainly because of poverty and malnutrition in developing countries. Individuals with increased requirements for the micronutrient, such as preschool children, adolescents during the growth spurt, and women of childbearing age, are at the highest risk [3]. Nonetheless, IDA is also frequent in western countries, with a prevalence ranging from 4.5 to 18% of the population, where the elderly with multimorbidity and polypharmacy represent an additional high-risk subcategory [4].

IDA can be due to a wide range of different causes (summarized in Table 1), which can be roughly grouped into three major categories: imbalance between iron intake and iron needs, blood losses (either occult or overt), and malabsorption. The coexistence of multiple causes or predisposing factors is not uncommon in certain patients, particularly those with severe and/or recurrent IDA [1, 5], and in the elderly [4]. A complex overlap of different mechanisms can occur in the individual patient. As an example, gastrointestinal angiodysplasia represents a relatively frequent cause of occult bleeding in the elderly [6], which can be difficult to diagnose when localized in the small bowel unless wireless capsule endoscopy is performed. Angiodysplasia is often associated (in 20–25% of cases) with calcific aortic stenosis, giving rise to the so-called Heyde’s syndrome [7]. This syndrome includes an acquired coagulopathy that further favors bleeding from angiodysplasia, because of consumption of high molecular weight von Willebrand factor multimers during flow through the stenotic valve [7].

Table 1 Main causes of IDA

Treatment of IDA is based on two cornerstones. Recognition and management of the underlying cause(s) is mandatory, whenever possible. In the meantime, iron has to be replenished by selecting the most appropriate compound and route of administration for each individual patient.

Pathophysiological advances in iron metabolism

Iron, a micronutrient essential for life, is particularly important for adequate production of red blood cells (RBCs), which deliver oxygen to all body tissues. RBCs are by far the most numerous cells of the human body. An adult human is composed of nearly 30 trillion (3 × 10^13) cells, excluding the microbiome [8]. RBCs account for nearly 84% (24.9 trillion) of total cells, and are produced at a rate of nearly 200 billion per day, i.e. nearly 2.4 million per second. Such an impressive activity requires a daily supply of 20–25 mg of iron to erythroid precursors in the bone marrow [9]. The body iron content (~ 4 g in the adult male, ~ 3 g in the female) must be kept constant, to avoid either deficiency or overload, the latter also being detrimental by facilitating the production of toxic reactive oxygen species [10]. In recent years, enormous progress has been made in understanding the mechanisms regulating iron homeostasis at both the cellular and systemic levels, so that our era has been defined “the golden age of iron” [11]. The turning points have been the discoveries of hepcidin [12,13,14], ferroportin [15,16,17], and their interaction [18], at the beginning of this century (history reviewed in detail elsewhere [19]). Hepcidin is a small cysteine-rich cationic peptide of only 25 amino acids [20]. It is synthesized primarily by hepatocytes, which accounts for the first part of its name (“hep-”). The rest (“-cidin”) derives from the fact that it was originally discovered, partly by chance [19], during research focusing on defensins, i.e. naturally occurring peptides with antimicrobial activity [21]. Indeed, hepcidin retains some degree of antimicrobial activity, but this is exerted only indirectly, i.e. by withholding iron from invading pathogens (see below). Hepcidin critically regulates systemic iron homeostasis by binding to its receptor ferroportin, the only known channel for exporting iron out of cells. Ferroportin, a multidomain transmembrane protein, is highly expressed in cells critical for iron handling, namely: (1) duodenal enterocytes, involved in absorption of dietary iron; (2) splenic red pulp macrophages, involved in iron recycling from senescent erythrocytes; and (3) hepatocytes, involved in iron storage. Hepcidin binding induces ferroportin internalization and degradation [18], thereby decreasing iron fluxes into the plasma through inhibition of both iron absorption and recycling. Of note, systemic iron homeostasis is highly conservative and “ecologic” (Fig. 1). Under physiological conditions, RBCs contain the largest proportion of body iron (nearly 2 g), and the 20–25 mg of iron needed for the daily production of new RBCs derives almost entirely from continuous recycling of the element through the phagocytosis of senescent erythrocytes. Only a minimal amount of iron (1–2 mg, less than 0.05% of total body iron) is lost every day through skin and mucosal exfoliation, plus menses in fertile women. Such losses are obligatory, and there is no physiological way to regulate iron excretion. Hence, homeostasis of total body iron is maintained by regulating intestinal absorption to precisely match the losses, i.e. by absorbing just 1–2 mg/day of iron out of the 10–15 mg contained in an average western diet.
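
To make the scale of these figures concrete, the following back-of-the-envelope sketch (in Python, used here purely for illustration) reproduces the numbers cited above; the RBC lifespan, the hemoglobin content per cell, and the iron content of hemoglobin are standard textbook values assumed for the calculation and are not taken from this article.

```python
# Back-of-the-envelope check of the erythropoiesis figures cited above.
TOTAL_CELLS = 3.0e13          # ~30 trillion cells in an adult human [8]
RBC_FRACTION = 0.84           # RBCs account for ~84% of all body cells
RBC_LIFESPAN_DAYS = 120       # canonical RBC lifespan (assumed, not from the text)
HB_PER_RBC_G = 30e-12         # ~30 pg hemoglobin per RBC (assumed, not from the text)
FE_PER_G_HB_MG = 3.4          # ~3.4 mg iron per gram of hemoglobin (assumed)

rbc_count = TOTAL_CELLS * RBC_FRACTION                  # ~2.5e13 RBCs
rbc_per_day = rbc_count / RBC_LIFESPAN_DAYS             # ~2.1e11, i.e. ~200 billion/day
rbc_per_second = rbc_per_day / 86_400                   # ~2.4 million/second
iron_need_mg_day = rbc_per_day * HB_PER_RBC_G * FE_PER_G_HB_MG  # ~21 mg/day

print(f"{rbc_per_day:.2e} RBC/day, {rbc_per_second:.2e} RBC/s, {iron_need_mg_day:.0f} mg Fe/day")
```

Under these assumptions the daily iron requirement of erythropoiesis comes out at roughly 21 mg, consistent with the 20–25 mg/day cited above.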

Fig. 1
figure 1

Essentials of systemic iron metabolism. Systemic iron metabolism is highly conservative of total body iron content (3–4 g), through the continuous recycling of iron from senescent erythrocytes by splenic macrophages, which supplies the 20–25 mg/day of iron needed for bone marrow hematopoiesis (thick red arrows). Both iron deficiency and iron overload are detrimental and have to be avoided. Total body iron homeostasis is maintained by accurately matching the unavoidable daily losses with intestinal absorption of dietary iron (1–2 mg/day) (thin blue arrows). The master regulator is hepcidin, which neutralizes ferroportin (black dotted arrows), i.e. the only known cell membrane iron exporter, mainly expressed by macrophages and on the basolateral membrane of absorptive intestinal cells. Hepcidin production is stimulated by high iron concentration in tissues (via BMP6) and in the circulation (via saturated transferrin), as well as by pro-inflammatory cytokines. On the other hand, it is suppressed by iron deficiency, hypoxia, and increased erythropoiesis (see also the text)

The hepcidin/ferroportin axis is finely tuned to ensure the balance between erythropoietic needs and iron absorption. The regulation of hepcidin and ferroportin expression at the molecular level is quite complex, and its description is beyond the scope of this article (for comprehensive reviews see [22,23,24]). From a clinical standpoint, hepcidin production is modulated by a number of physiological and pathological conditions that can exert opposite influences [25]. The three major determinants are body iron stores, erythropoietic activity, and inflammation [24] (Fig. 1). Hepcidin synthesis by hepatocytes is stimulated when body iron stores are replete, mainly through paracrine release of Bone Morphogenetic Protein 6 (BMP6) [26, 27]. Indeed, BMP6 is produced by liver sinusoidal cells [28] in response to increased transferrin saturation [22], and stimulates the BMP/SMAD (Small Mothers Against Decapentaplegic) signaling pathway critically involved in the transcriptional regulation of hepcidin [26]. On the other hand, hepcidin is markedly suppressed in iron deficiency [29, 30] to ensure maximal absorption of iron from the gut. Hepcidin is also negatively regulated by erythropoietic activity in the bone marrow. For example, after an acute blood loss hepcidin is suppressed in order to match the increased iron need of erythroid precursors for rapid production of new RBCs. In murine models, a hormone named erythroferrone (ERFE) has been identified as the hepcidin-suppressing agent produced by erythroblasts [9, 31]. In humans, the ERFE orthologue encoded by the gene FAM132B also seems involved in hepcidin suppression under conditions of increased erythropoiesis [32, 33], although in combination with other factors that are still poorly characterized [23]. Finally, inflammation strongly stimulates hepcidin synthesis through several interleukins (IL), mainly IL-6 [34] and IL-1β [35]. In acute inflammatory conditions, hepcidin release from hepatocytes increases rapidly (within a few hours) and markedly (by more than 10- to 40-fold) [36, 37]. The ensuing rapid hypoferremia represents a protective factor in several acute infections, by withholding iron from invading microbial agents that avidly require the element for their growth [38,39,40]. On the other hand, hepcidin-induced iron sequestration in macrophages leads to iron-restricted erythropoiesis, a major driver of the “anemia of inflammation” and, in the long term, of the “anemia of chronic diseases” [41, 42]. It is now increasingly clear that the hepcidin/ferroportin axis also critically influences the response to iron treatments.

Historical considerations

Iron therapy dates back to the seventeenth century, when Thomas Sydenham (1624–1689) first proposed the use of oral iron salts for the treatment of “chlorosis”, although the disorder was initially regarded as a hysterical problem rather than a consequence of IDA [43]. The first iron compound used by the intravenous (IV) route (iron saccharide) entered the clinical scenario around the middle of the past century [44]. Unfortunately, both oral and IV traditional iron formulations are known to be far from ideal, mainly because of tolerability and safety issues, respectively. From a pharmacological point of view, iron replacement therapy progressed relatively slowly until recently. At the beginning of this century, concomitantly with the pathophysiological advances mentioned above, improvements in pharmaceutical technologies have allowed the production of newer iron formulations, particularly for IV administration, aimed at minimizing the problems inherent in traditional compounds. Of note, the pharmacokinetics of oral iron are completely different from those of IV iron (Fig. 2). Oral iron is incorporated into plasma transferrin after release from the basolateral membrane of intestinal cells, provided that no condition leading to malabsorption (e.g. celiac disease, autoimmune or Helicobacter pylori-related chronic gastritis) is present [5]. By contrast, IV iron compounds are first taken up by macrophages and then released into the bloodstream. As the two treatments cannot be considered merely interchangeable and have different indications, we will examine them separately.

Fig. 2
figure 2

Different pharmacokinetics of oral and IV iron, revisited in the hepcidin era. The pharmacokinetics of oral iron require the integrity of the mucosa of the stomach (acidity is needed to solubilize iron) and duodenum/proximal jejunum (where most iron is absorbed). This integrity can be compromised by several conditions leading to malabsorption (see Table 1). The maximum absorption capacity during oral iron treatment is estimated at nearly 25–30 mg/day, i.e. roughly ten- to twenty-fold the typical daily absorption of dietary iron under steady-state conditions (1–2 mg). Unabsorbed iron is mainly responsible for gastrointestinal adverse effects (AEs). IV iron has completely different pharmacokinetics that circumvent these problems. The iron–carbohydrate complexes (see Fig. 3 for details) are rapidly taken up by macrophages, and the iron atoms of the core are then slowly released into the circulation through ferroportin. Both oral and IV iron require ferroportin to be released into the plasma. Hepcidin production is typically suppressed in uncomplicated IDA, allowing maximal iron absorption. However, slightly elevated (or even inappropriately normal) hepcidin levels appear sufficient to inhibit intestinal ferroportin. This can be due to a genetic disorder (IRIDA), to concomitant low-grade inflammation (e.g. in chronic heart failure), or even to transient stimulation after a first dose of oral iron. This constitutes the basis for the current recommendation of using oral iron on an alternate-day schedule instead of the classical daily schedule (see the text for details). On the other hand, macrophage ferroportin, whose expression is much higher than at the intestinal level, requires much more elevated hepcidin levels (e.g. during acute inflammation) to be substantially suppressed

Oral iron therapy

Easy, cheap and often effective (not always)

Oral iron represents the mainstay of IDA treatment, being easy, cheap, and effective in the majority of mild to moderate cases, i.e. when hemoglobin is ≥ 11 g/dl (mild) or between 8.0 and 10.9 g/dl (moderate), according to the WHO (http://www.who.int/vmnis/indicators/haemoglobin/en). Indeed, in severe IDA (hemoglobin < 8.0 g/dl), whatever the cause, there is increasing agreement on the use of the new IV iron compounds (see below) as first-line therapy, because of their superior efficacy and rapidity [45]. Severely symptomatic patients, e.g. those with hemodynamic instability or signs of myocardial ischemia, need RBC transfusion, although such a clinical presentation is uncommon in IDA.
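
As a quick illustration of these cut-offs, the minimal Python sketch below maps a hemoglobin value to the severity bands cited above. It assumes the patient is already known to be anemic (values above the age- and sex-specific lower limit of normal are outside its scope), and the function name and comments are illustrative, not a validated clinical tool.

```python
def who_ida_severity(hb_g_dl: float) -> str:
    """Severity bands for an anemic patient, per the WHO cut-offs cited above."""
    if hb_g_dl < 8.0:
        return "severe"    # new IV iron compounds increasingly preferred first line
    if hb_g_dl <= 10.9:
        return "moderate"  # oral iron usually appropriate
    return "mild"          # oral iron usually appropriate

print(who_ida_severity(7.2))   # -> "severe"
print(who_ida_severity(10.1))  # -> "moderate"
```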

A complex market scenario

Iron-containing oral preparations currently available on the market are innumerable, with a variety of pharmaceutical forms including pills, effervescent tablets, elixirs, and so on. Their chemistry is also heterogeneous, including either trivalent (Fe3+, or ferric) or divalent (Fe2+, or ferrous) iron, in the form of iron salts or iron polysaccharide complexes [46]. Many preparations are over the counter and often aggressively advertised as the “ideal” or “most natural” way to correct ID. Pharmacological iron is absorbed through the same pathway as non-heme dietary iron found in plant foods, which is far less efficient than the absorption of heme iron found in meat [47, 48]. Non-heme dietary iron is largely ferric and, as such, highly insoluble. To be absorbed, it needs to be reduced by a brush-border ferrireductase (duodenal cytochrome b, or DCYTB), allowing the resulting divalent iron to enter the enterocyte through a specialized transporter on the luminal surface (Divalent Metal Transporter 1, or DMT1). By contrast, the absorption of heme iron is less well understood [48], and attempts to produce heme–iron polypeptides have resulted in greater costs and insufficient clinical evaluation [49]. Thus, for the moment, divalent iron salts appear to be the most appropriate form of oral iron replacement therapy, the most widely used being ferrous gluconate, ferrous fumarate, and ferrous sulfate (FS) (Table 2). In particular, FS represents the universally available compound considered by all guidelines [50, 51].

Table 2 Traditional oral iron preparations (ferrous iron salts)

The traditional prescription. Be aware of elemental iron. Replace stores, not only hemoglobin

What really matters in the different divalent oral iron preparations is the elemental iron content. As a general rule, oral iron preparations do not contain more than 30% elemental iron, but a source of confusion is that this proportion can vary by manufacturer, as well as between countries. For example, typical FS tablets with a nominal salt content of 325 mg contain 65 mg of elemental iron in the US, and 105 mg in Europe. Other iron salt tablets, e.g. ferrous gluconate, usually contain less elemental iron by weight (for example 28 mg/256 mg, 38–48 mg/325 mg) (Table 2). As the classically recommended daily dose for IDA treatment is 100–200 mg of elemental iron, physicians should always check this content before prescribing any preparation, including liquid ones like syrups, elixirs, and drops. The most popular prescription for IDA is 2–3 tablets per day of FS, which should preferably be taken on an empty stomach to maximize absorption [52]. Such relatively high doses are mainly based on traditional practice, and recent recalculations suggest they are likely excessive (see below). Of note, only a minor fraction (no more than 10–20%) of a high dose of oral iron is effectively absorbed [52,53,54]. As absorption of ferrous salts is favored by a mildly acidic medium, ascorbic acid 250–500 mg/day is often concomitantly prescribed, although formal demonstration of a measurable advantage is lacking [55]. On the other hand, antacids, including proton pump inhibitors (PPI), are likely a cofactor of insufficient response to oral iron, particularly in certain populations like elderly patients on polypharmacy [4]. Optimal response to oral iron is generally defined as a hemoglobin increase of 2 g/dl after 3 weeks. However, the final goal of treatment has to be not only the normalization of hemoglobin, but also the repletion of iron stores, with an ideal target of ferritin > 100 μg/l [49, 51]. This often requires prolonged treatment for at least 3 months [50], if not more. Stopping treatment too early is a common error in clinical practice. In our experience at a referral center for iron disorders, this is particularly frequent in young premenopausal women, resulting in significant morbidity and risk of recurrence. By contrast, a hemoglobin increase of less than 1 g/dl after 3 weeks notwithstanding adequate compliance defines “refractoriness” to oral iron. This should prompt appropriate investigations to exclude malabsorption, e.g. due to celiac disease, Helicobacter pylori infection, or autoimmune gastritis [5].
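
The short Python sketch below illustrates the two practical points made above: translating elemental iron content into tablets per day, and applying the 3-week response criteria. The per-tablet values are only the examples cited in the text/Table 2 and vary by product and country; the handling of intermediate hemoglobin responses (1–2 g/dl) is an assumption added for illustration, not a definition given in this article.

```python
# Illustrative values only: elemental iron per tablet varies by salt, manufacturer,
# and country (figures below are the examples cited in the text/Table 2).
ELEMENTAL_FE_MG = {
    "ferrous sulfate 325 mg (US)": 65,
    "ferrous sulfate 325 mg (Europe)": 105,
    "ferrous gluconate 256 mg": 28,
}

def tablets_per_day(preparation: str, target_elemental_mg: float = 150) -> float:
    """Tablets needed to reach the classical 100-200 mg/day elemental iron dose."""
    return target_elemental_mg / ELEMENTAL_FE_MG[preparation]

def response_at_3_weeks(hb_increase_g_dl: float) -> str:
    """3-week criteria cited above (adequate compliance assumed).
    Handling of intermediate values (1-2 g/dl) is an assumption, not a formal definition."""
    if hb_increase_g_dl >= 2.0:
        return "optimal response: continue treatment to replete stores (ferritin > 100 ug/l)"
    if hb_increase_g_dl < 1.0:
        return "refractory: investigate malabsorption (celiac disease, H. pylori, autoimmune gastritis)"
    return "partial response: reassess adherence, dose, and possible ongoing losses"

print(round(tablets_per_day("ferrous sulfate 325 mg (US)"), 1))  # ~2.3 tablets/day
print(response_at_3_weeks(0.6))
```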

A far from ideal treatment

Unfortunately, oral iron is frequently associated with adverse effects (AEs), mainly gastrointestinal disturbances including metallic taste, nausea, vomiting, heartburn, epigastric pain, constipation, and diarrhea. They are likely due to direct toxicity of unabsorbed iron on the intestinal mucosa. Two recent meta-analyses including several thousand patients receiving ferrous iron salts have reported gastrointestinal AEs in proportions varying from 30 to 70% of cases [56, 57]. The ensuing reduction of adherence, in combination with the need for prolonged treatment (see above), results in undertreatment of a significant proportion of IDA patients in daily clinical practice. In recent years, there has been increasing awareness of a previously overlooked, potentially negative effect of oral iron, i.e. changes in the gut microbiome [58,59,60]. This is due to unabsorbed iron reaching the colon, and appears particularly detrimental in low-income populations. Studies in Kenyan infants consuming iron-fortified meals have documented a decrease of beneficial commensal bacteria (i.e. bifidobacteria and lactobacilli, which require little or no iron), and an increase of enterobacteria, including iron-requiring enteropathogenic Escherichia coli strains [61]. Of note, previous studies with iron fortification formulas in African children have raised serious concerns about the safety of indiscriminate iron supplementation in areas where either micronutrient deficiencies or infections are highly prevalent [62, 63].

Oral iron therapy in the hepcidin era

Optimal efficacy of oral iron requires not only the integrity of the gastrointestinal mucosa, but also appropriate suppression of hepcidin to allow full activity of ferroportin expressed on the basolateral surface of enterocytes (Fig. 2). As mentioned before, in typical IDA serum hepcidin levels are extremely low, or even undetectable [29, 30]. Nevertheless, serum hepcidin levels reflect the balance of multiple opposing influences [25], and exceptions to this rule can be due to genetic or acquired conditions. Subjects homozygous for mutations in TMPRSS6, encoding the hepcidin inhibitor matriptase-2, are affected by a rare genetic form of anemia named Iron Refractory Iron Deficiency Anemia (IRIDA, OMIM #206200) [64]. This condition should be suspected in patients presenting with IDA early in life and a poor or absent response to oral iron without an apparent cause [65, 66]. Indeed, the biochemical hallmark of IRIDA is the presence of high (or inappropriately normal) serum hepcidin levels [25], which are pathogenetically relevant. Relatively high or inappropriately normal hepcidin levels can also be found in ID patients with a concomitant inflammatory disorder [67, 68]. The best studied condition in this sense is chronic heart failure (CHF), where low-grade chronic inflammation is known to play an important role [69]. ID is quite common in CHF, involving at least 30% of patients [70, 71]. Multiple factors concur to cause ID in CHF, including decreased iron intake because of anorexia, malabsorption due to edema of the intestinal mucosa, and, possibly, occult bleeding favored by the concomitant use of antithrombotic drugs. ID results in decreased utilization of O2 by iron-dependent mitochondrial enzymes in cardiomyocytes [72], and an increased risk of hospitalization or even of death [70, 73]. Of note, IV iron therapy has been shown to be beneficial in CHF patients with ID (see also below) [71, 74, 75], while a recent trial with oral iron yielded negative results [76]. This refractoriness to oral iron was linked to relatively high (instead of suppressed) hepcidin levels in CHF patients with ID [76]. Thus, CHF represents a paradigmatic condition where recent pathophysiological insights on hepcidin appear to influence the choice of the most appropriate route of iron administration. The same concept was previously suggested by two retrospective studies in cancer patients [77] and in unselected IDA patients [78], where baseline hepcidin levels predicted subsequent responsiveness to oral iron. After initial technical difficulties, hepcidin assays are continuously improving, and a number of them showing good accuracy and reproducibility have been internationally validated [25]. However, lack of harmonization (i.e. comparability between absolute values obtained in different laboratories) has until now prevented the definition of universal reference ranges for widespread clinical use [25, 79]. This problem is now nearly solved through the use of a commutable reference material, which will soon allow worldwide standardization and results traceable to SI units (Swinkels D. W., personal communication). Thus, in the near future baseline hepcidin measurement in IDA could actually help in tailoring iron therapy, by selecting the optimal route of administration in a given individual [25]. This would be particularly useful in certain “difficult” populations, such as the elderly [4] and children in developing countries [80].

An already established major advance in oral iron therapy deriving from the hepcidin discovery concerns the administration schedule. An elegant pilot study by Moretti and colleagues in non-anemic iron-deficient premenopausal women suggested that giving oral iron on an alternate-day schedule might be as effective as the classical daily schedule based on divided doses [81]. The classical schedule was associated with a rapid rise in hepcidin production that limited the absorption of a subsequent dose given too early. By contrast, the alternate-day regimen allowed sufficient time for hepcidin to return to baseline, hence maximizing fractional iron absorption. Moreover, it minimized gastrointestinal AEs. Such results have recently been confirmed by two prospective randomized controlled trials with similar design, again showing better absorption in non-anemic young women taking iron on alternate days [82]. Whether these results also apply to anemic patients with ID remains formally unproven, but it appears reasonable to assume that this will be the case. Indeed, some authorities now consider such results sufficient to recommend the alternate-day regimen as the preferable way of giving oral iron replacement therapy in IDA, although with prudence (grade 2C) [83].

New preparations

Another area of active work is the search for new oral preparations as effective as standard FS, but better tolerated. One of the most innovative preparations is “sucrosomial” iron (SI), a source of ferric pyrophosphate covered by a matrix of phospholipids plus sucrose esters of fatty acids [84]. In vitro experiments on human intestinal Caco-2 cells suggest that SI could be taken up through a DMT1-independent mechanism [84], possibly through endocytosis, similarly to what happens with nanoparticles [85, 86] (see below). Whether this also occurs in vivo, and whether SI utilizes unique mechanisms to enter the bloodstream as well, remains to be proven. Nevertheless, some preliminary clinical studies with SI look promising. For example, a small randomized, open-label trial in non-dialysis chronic kidney disease (CKD) patients with IDA showed that low-dose (30 mg/day) oral SI for 3 months was non-inferior to IV iron gluconate with regard to hemoglobin recovery [87]. On the other hand, IV iron remained superior with respect to replenishment of iron stores, and IDA recurred rapidly (within 1 month of discontinuation) in patients treated with oral SI. This study had several limitations, including the small sample size and the IV compound used as comparator (iron gluconate), which is now clearly surpassed by the newer IV iron formulations (see below). Nevertheless, the relatively good response obtained after 3 weeks with oral SI is intriguing, particularly considering the low dose (30 mg/day), as well as the peculiar IDA population studied (stage 3–5 CKD). Indeed, late-stage CKD patients often have elevated hepcidin levels [88], so that some authors have recently described hepcidin as a sort of new “uremic toxin” [89]. To explain such a result, one should assume that absorption of oral SI is: (1) poorly affected by hepcidin inhibition; and (2) nearly maximal, in contrast with traditional iron compounds, for which it is well known, for example, that no more than 30 mg of iron is absorbed when 100 mg of elemental iron as FS is administered (see also above). Of note, tolerability of oral SI was excellent. Such findings deserve further mechanistic studies on this novel compound, as well as confirmation by larger clinical trials in other IDA populations. Nevertheless, they pose a key general question regarding the optimal dose of oral iron, whatever the compound. When hepcidin is suppressed, as in common uncomplicated IDA, iron absorption is supposed to be maximal, but with a limit of 25–30 mg/day. Fractional absorption can vary substantially between different compounds, but what is clear is that most AEs are related to the unabsorbed fraction. A small study in elderly patients (> 80 years) with IDA, conducted more than 10 years ago, showed that two low-dose schedules (15 or 50 mg/day of elemental iron in the form of liquid ferrous gluconate) were as effective as a high “traditional” dose (150 mg/day of elemental iron in the form of ferrous calcium citrate tablets) [90]. Low doses also resulted in significantly fewer AEs [90]. Overall, the new studies cited above, including those on the alternate-day regimen, suggest that it is time to reconsider our traditional way of giving oral iron. Large-scale studies on patients with mild to moderate uncomplicated IDA should definitively explore the efficacy and tolerability of alternate-day, low-dose oral iron.

A further active field of research, particularly to address IDA in developing countries, is the possibility of enriching food with iron nanoformulations [91, 92]. Early attempts using classical iron salts were disappointing, partly because of changes in the color and taste of food. Nanoparticles (NPs), including ferritin or ferritin-mimicking molecules [93], appear promising and devoid of such unwanted effects. Of note, a recent elegant study has demonstrated that iron-containing NPs cross the cell membrane by DMT1-independent mechanisms, such as endocytosis, or even by a non-endocytotic pathway allowing direct access to the cytoplasm [94].

IV iron

Historical considerations

The first attempts to administer iron through the parenteral route date back to the first half of the past century [95], but injections were unacceptably painful when given intramuscularly, and caused serious hemodynamic toxicity attributed to the rapid release of labile free iron. This led to the development of carbohydrate shells surrounding an iron core, in order to limit the unwanted rapid release of the element [96]. The first preparation to be used was iron saccharide in 1947, followed by high-molecular-weight dextran (HMWD) iron (Fig. 3a). Despite documented success in correcting IDA [97, 98], rare cases of severe hypersensitivity reactions were reported, some of them fatal. This led to extreme caution in prescribing IV iron, which was deemed to be reserved only for conditions where oral iron could not be used [98]. In fact, the medical community developed a long-lasting generalized prejudice against IV iron, whatever the preparation used. Only relatively recently was it realized that severe and potentially lethal reactions were almost exclusively due to HMWD iron [99], whose production had been discontinued in 1992 [100] and which had been replaced by other preparations (Fig. 3a). Indeed, a retrospective analysis of > 30 million doses of IV iron reported absolute rates of life-threatening AEs of 0.6, 0.9, and 11.8 per million with iron sucrose, ferric gluconate, and HMWD iron, respectively [99].

Fig. 3
figure 3

IV iron preparations: historical and chemical perspective. a Widespread use of IV iron has historically been hampered by unacceptable risks with early preparations, particularly anaphylactic reactions with high molecular weight dextran (HMWD) iron. b All IV iron preparations consist of a polynuclear iron core surrounded by a carbohydrate shell that acts as a stabilizer, preventing uncontrolled release of toxic free iron. What differs among the various compounds is the identity of the carbohydrate moiety, which is unique for each compound and influences both immunogenicity and the strength of stabilization (see also the text)

The chemistry of different IV iron preparations and its relation with adverse effects

All IV iron preparations share a common structure, being colloidal solutions of compounds made of a polynuclear core containing Fe3+ hydroxide particles surrounded by a carbohydrate shell (Fig. 3b). The most important differences between one preparation and another lie in the chemistry of the carbohydrate moiety forming the shell, as well as in the type and strength of its bonds with the iron core (Fig. 3b). These features are major determinants of the stability of the iron/carbohydrate complex, which in turn constitutes the factor limiting the maximum dose of iron administrable with a single infusion. As depicted in Fig. 2, the pharmacokinetics of IV iron are substantially different from those of oral iron. Once injected into the bloodstream (hence circumventing problems in intestinal absorption), IV iron is mainly taken up by macrophages, which subsequently release the element through ferroportin [101]. However, less stable complexes can release variable amounts of ferric iron directly into the circulation before macrophage uptake. This leads to the presence of toxic labile free iron once the binding capacity of transferrin is saturated. As a general rule, the more stable the complex, the lower the frequency of infusion reactions, which are usually mild, consisting of rash, palpitations, dizziness, myalgias, and chest discomfort in variable combination, but without hypotension or respiratory symptoms [49]. Such minor infusion reactions occur in roughly 1 in 200 administrations [102], and resolve quickly by simply stopping the infusion, without the need for any other treatment. Of note, there is increasing consensus on avoiding antihistamine drugs in such situations, with particular reference to diphenhydramine, as it can cause hypotension and other unwanted side effects, leading to paradoxical aggravation rather than amelioration of the clinical picture [49, 103]. More serious “anaphylactoid” reactions, including hemodynamic and respiratory changes, can also occur with virtually all IV iron preparations, including the newer ones, but appear exceedingly rare (see below). Unlike serious AEs with old compounds like HMWD iron, for which an IgE-mediated mechanism could be demonstrated in some cases [104, 105], such reactions are now increasingly attributed to complement activation-related pseudo-allergy (CARPA) [106]. This mechanism is thought to be activated by iron nanoparticles [106]; thus, again, the stability of the carbohydrate shell and the different physico-chemical properties of the various compounds are likely critical. In any case, a recent systematic meta-analysis of 103 trials including more than 10,000 patients treated with various IV iron preparations and more than 7000 patients treated with different comparators (oral iron, placebo) found serious AEs with IV iron to be extremely rare (< 1:200,000 doses), and no more frequent than with comparators [107]. No fatal reaction or true anaphylaxis was reported. Only minor infusion reactions were consistently reported, with a RR of 2.47 for any IV iron preparation [107]. These results are particularly relevant if put in perspective with the frequency of AEs leading to major morbidity with RBC transfusions, the only alternative to IV iron in certain circumstances. Indeed, such frequency appears higher than that related to IV iron, being estimated at a rate of nearly 1:21,000 [108].

The newer IV iron preparations

In the last decade, three new preparations have entered the clinical scenario, which can be collectively grouped as “third generation” IV iron compounds (Fig. 3). These preparations share common features conferring superiority over the older products, so that they are substantially changing the way we manage IDA in clinical practice. Such favorable features, as compared to traditional preparations, are illustrated in Table 3. The most important one is the higher stability of the carbohydrate shell, which in turn allows administration of the total replacement dose (usually 1–1.5 g, depending on the degree of anemia and body weight, according to the classical Ganzoni formula [109]) in just one or two infusions. Such an easy schedule is clearly more comfortable for patients than classical multiple-infusion schemes, e.g. up to 7–10 infusions on consecutive days with ferric gluconate. The higher cost per vial appears counterbalanced by reduced costs in terms of personnel and in-hospital organization [110]. Moreover, the higher stability also results in a very good safety profile because of the reduced probability of free iron release. An extensive review on the use of single-dose IV iron can be found elsewhere [111].
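
As a worked illustration of the Ganzoni formula mentioned above, the following minimal Python sketch computes the total iron deficit; the 500 mg default for iron stores and the example patient are common textbook assumptions used for illustration, not values taken from this article.

```python
def ganzoni_iron_deficit_mg(weight_kg: float,
                            target_hb_g_dl: float,
                            actual_hb_g_dl: float,
                            iron_stores_mg: float = 500.0) -> float:
    """Classical Ganzoni formula:
    deficit [mg] = weight [kg] * (target Hb - actual Hb) [g/dl] * 2.4 + iron stores [mg].
    The factor 2.4 derives from ~70 ml blood/kg and ~3.4 mg iron per g of hemoglobin;
    500 mg is a commonly used default for iron stores (an assumption, not from the text)."""
    return weight_kg * (target_hb_g_dl - actual_hb_g_dl) * 2.4 + iron_stores_mg

# Hypothetical example: 70 kg patient, Hb 9 g/dl, target 15 g/dl
# -> 70 * 6 * 2.4 + 500 = ~1508 mg, i.e. within the 1-1.5 g range cited above.
print(ganzoni_iron_deficit_mg(70, 15, 9))  # 1508.0
```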

Table 3 Currently used IV iron preparations

Iron isomaltoside is, for now, available only in a few European countries, while ferumoxytol and ferric carboxymaltose (FCM) are becoming increasingly popular worldwide.

A peculiarity of ferumoxytol is its core, made of superparamagnetic iron oxide NPs surrounded by a low molecular weight semisynthetic carbohydrate. This compound was originally developed as an enhancing agent for magnetic resonance imaging (MRI), and is sometimes still used for this purpose [112]. Thus, although its administration for IDA does not definitively compromise image interpretation, the radiologist should be notified if a patient undergoes MRI within 3 months of administration [83].

FCM is characterized by tight binding of elemental iron to the carbohydrate polymer shell. This is consistent with studies on labile free iron release from different preparations, showing the lowest levels with FCM [113]. Because of its high stability, FCM can be quickly administered at high dose, up to 1000 mg of elemental iron in 15 min [114]. Hypophosphatemia is not infrequent in the days following FCM infusion [115], with a mechanism that appears peculiar to its unique carbohydrate moiety [116]. Indeed, FCM is able to induce the synthesis of fibroblast growth factor 23 (FGF-23), an osteocyte-derived hormone that regulates phosphate and vitamin D homeostasis [117], ultimately leading to increased renal excretion of phosphate [115]. However, FCM-induced hypophosphatemia does not appear clinically meaningful, as it is mild, transient, and without symptoms or sequelae. Serum phosphate levels do not need to be checked or monitored, with the only possible exception of patients with IDA in the context of severe malnutrition, in whom baseline phosphate levels may already be reduced. On the other hand, FCM has been successfully and safely used in a number of clinical settings, including CHF [74, 75], CKD [118], inflammatory bowel disease [119, 120], and heavy uterine bleeding [121], as well as during pregnancy in the second and third trimesters [122, 123].

Expanding spectrum of clinical use of IV iron

The availability of novel IV iron preparations that are safe and easy to use is gradually dispelling prejudices and misconceptions regarding this therapeutic approach [45]. From historical restrictions considering IV iron as “an extreme solution for severe IDA when other options were impracticable” [98], there are now a number of conditions where its use is well established or increasingly considered a first-line approach [111] (Table 4). In contrast to oral iron, compliance is assured and correction of IDA is more rapid, without the need for prolonged administration. From a pathophysiological point of view, the use of IV iron in CHF is of particular interest. As noted above, CHF patients are predisposed to develop ID or even IDA, but a precise laboratory definition of ID in this setting is difficult because of the interference of subclinical inflammation. This tends to increase serum ferritin above the threshold levels (< 15–30 μg/L) typical of uncomplicated IDA. Anker and colleagues first proposed wide and pragmatic criteria for defining ID in CHF, i.e. serum ferritin levels < 100 μg/L, or < 300 μg/L if transferrin saturation is concomitantly < 20% [74]. Notwithstanding skepticism in the hematological community about this “extensive” definition, IV iron supplementation with FCM in CHF patients was effective in improving both iron stores and cardiac function [74]. Such results have been consistently replicated [75, 124], so that the most recent and authoritative guidelines suggest systematically checking for ID in CHF and treating with IV iron if ID is present [125,126,127]. Noteworthy, a sub-analysis of the seminal FAIR-HF trial [74] focusing on anemic CHF patients showed a consistent Hb increase after IV iron, implicitly validating the above-mentioned “extensive” criteria for ID [128]. As mentioned above, the beneficial effect of iron in CHF has been seen only with IV preparations (for now, FCM is the only compound tested in this condition), while a recent trial with oral iron was unsuccessful because of increased hepcidin levels [76]. As depicted in Fig. 2, IV iron also needs a permissive effect of ferroportin to be delivered to circulating transferrin and ultimately to erythroid precursors in the bone marrow. However, it enters the circulation mainly through macrophage ferroportin, in contrast to oral iron, which uses intestinal ferroportin (Fig. 2). According to the most recent basic studies [129, 130], it is increasingly recognized that there are substantial quantitative differences between the overall expression of ferroportin at the intestinal level (normally dealing with a low amount of iron, ~ 1–2 mg/day) and at the macrophage level (normally managing much more iron, ~ 20–25 mg/day). In other words, the amount of hepcidin needed to block iron absorption is likely much lower than that required to inhibit ferroportin expressed by the innumerable iron-recycling macrophages. Hence, the clinically confirmed success of IV iron in CHF represents an interesting paradigm in modern iron replacement therapy. Indeed, CHF recapitulates the features of a condition where ID is common and difficult to define by classical biochemical iron parameters, yet effectively treatable with iron, although only by the IV route, since the hepcidin increase driven by low-grade inflammation is sufficient to inhibit intestinal (but not macrophage) ferroportin.
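
To make the pragmatic criteria of Anker and colleagues explicit, here is a minimal Python sketch of the definition cited above (ferritin in μg/L, transferrin saturation in percent); the function name and the example patient are illustrative assumptions, not a clinical decision tool.

```python
def iron_deficiency_in_chf(ferritin_ug_l: float, tsat_percent: float) -> bool:
    """Pragmatic criteria for iron deficiency in chronic heart failure (Anker et al. [74]),
    as cited in the text: ferritin < 100 ug/L, or ferritin 100-299 ug/L with TSAT < 20%."""
    if ferritin_ug_l < 100:
        return True
    return ferritin_ug_l < 300 and tsat_percent < 20

# Hypothetical example: ferritin 180 ug/L with TSAT 15% still qualifies,
# because low-grade inflammation inflates ferritin in CHF.
print(iron_deficiency_in_chf(180, 15))  # True
```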

Table 4 Indications for IV iron

Whether this applies to other chronic conditions where inflammation and ID often coexist remains to be determined, and active clinical research is needed [131]. A further novel example in this sense could be chronic obstructive pulmonary disease (COPD), where a previously unrecognized detrimental role of ID merits adequate investigation in the future [132].

Conclusions

The recent advances in the pathophysiology of IDA, along with the availability of new iron preparations and the growing awareness of the implications of ID in a variety of clinical fields, are making iron replacement therapy more feasible and fascinating than ever. We are hopefully approaching a new era in which we can finally counter more effectively the most important nutritional deficiency worldwide.