Computational modelling has been used for decades in the study of cardiovascular diseases and their treatment. Especially with advances in computing power and 3D medical imaging, cardiovascular models are being recognized as an essential step in the regulatory process, such as the FDA’s Critical Path Initiative29 and the EU’s Avicenna Roadmap,3 and are already making inroads into clinical practice, e.g., Heartflow®, the first FDA-approved cardiovascular modelling tool.10

This explosion of cardiovascular modelling is undoubtedly a good thing, but questions are still being raised, especially by non-expert target end users, about the lack of standardized modelling protocols, the many simplifying assumptions, and the incremental value in light of physiological or clinical variabilities. These may be seen to fall, respectively, under the terms Verification, Validation, and Uncertainty Quantification (UQ):

Verification is performed to determine if the computational model fits the mathematical description. Validation is implemented to determine if the model accurately represents the real world application. Uncertainty quantification is conducted to determine how variations in the numerical and physical parameters affect simulation outcomes.2

The terms ‘verification’ and ‘validation’ are often used loosely and even interchangeably, so we like to point to Roache’s pithy definition of the former as “solving the equations right” and the latter as “solving the right equations”.24 But on closer look, what does it mean to validate, say, a “patient-specific” model for (eventual) clinical use? Comparison of CFD results to in vitro measurements of the flow of a blood-mimicking fluid is considered validation, but one could also argue that it is just proving you are solving the Navier–Stokes and continuity equations correctly; it does not necessarily mean they are the right or only equations if there are rheological, compliance, or autoregulatory effects not captured in the model.28 But perhaps this is just a matter of semantics, since in vivo measurements, the nominal gold standard, are themselves a filtered version of reality (e.g., Cibis et al.9 and Ford et al.17).

There is also growing recognition in the cardiovascular modelling community of the uncertainties of the anatomical or physiological parameters that we require and use as input, and which belie the usually precise depictions of our model outputs. These uncertainties may take the form of imprecisely known input parameters or, especially in patient-specific models, parameters that are inherently variable, like heart rate. If such input “noise” in a cardiovascular model means that its outputs are not statistically significantly better than a clinical standard measurement, it is unlikely to be used, especially if it offers no cost savings.

While there are strict guidelines for proper verification and validation of computational models1 and mathematically robust procedures for UQ,15 these can appear daunting to many researchers, who then either ignore or avoid them. Consider this analogy to the clinic: a drug with a complex regimen may prove to be efficacious under the carefully controlled conditions of a clinical trial, but might not be effective under real-world conditions where patients might forget or mistake their doses.

Our argument, and indeed the theme of this special issue, is that for VVUQ to be effective in the real world, we must be willing to embrace attempts that may be ad hoc or underpowered. This echoes the editorial policy of the ASME Journal of Fluids Engineering regarding verification and validation: “any appropriate analysis is far better than none as long as the procedure is explained”.21 As we detail below, this special issue of CVET highlights a wide range of approaches to VVUQ, over a wide range of cardiovascular modelling applications in both fluid and solid mechanics. Several of the papers also provide links to online datasets to encourage others to perform their own VVUQ, towards the ultimate goal of promoting standardized datasets for the cardiovascular modelling community.

The first paper in our special issue, by Valen-Sendstad et al.,30 highlights results from the 2015 International Aneurysm CFD Challenge. Unlike previous such challenges where participants were provided with the already-segmented “patient-specific” geometry and/or the flow boundary conditions, here the 26 participating teams were provided only with source 3D images, with the goal of quantifying the total or “real-world” variability in contemporary image-based aneurysm CFD. Variabilities of the segmented lumens and assumed inlet flow rates were particularly high, resulting in a lack of consensus for several hemodynamic indices nominally associated with aneurysm rupture, even after accounting for team experience. The authors have generously provided all of the team CFD models and results online, to allow others to explore this rich dataset.13

The findings of the above study are echoed in the first results from the follow-on Multiple Aneurysms AnaTomy CHallenge (MATCH) 2018 study, where teams were also provided only with 3D images. In this paper, by Berg et al.,4 the authors focused on the variability of lumen segmentation, demonstrating, like the 2015 Challenge, marked inter-team variabilities in aneurysm size and morphology. In an interesting twist, the authors were able to validate that only one team correctly segmented the aneurysm necks, albeit requiring many hours longer than any other team. We understand that the authors now plan to perform their own CFD on the team-contributed segmentations for comparison with the team-contributed CFD solutions. In so doing, they should be able to isolate the impact of segmentation variability from other sources of variability, and in some sense be able to shed light on the “efficacy” vs. “effectiveness” of image-based aneurysm CFD.

The important issue of segmentation variability is similarly addressed in a straightforward UQ study of CFD pressure drop predictions from 10 patient-specific aortic coarctation cases, by Brüning et al.6 Here the authors varied stenosis severity by ± 1 image voxel, as well as patient-specific flow rates by ± 10%, conservatively mimicking key operator uncertainties in the image-based CFD pipeline. The authors report low median but sometimes individually high variabilities in pressure drop, the latter contributing to an uncertainty around the clinical threshold of 20 mmHg in four cases. Nevertheless, and perhaps most importantly, the authors conclude that these uncertainties may not be any worse than those from the clinical gold-standard, namely invasive, catheter-derived pressure drop measurements, thus holding out promise for their non-invasive alternative. All geometries are provided online, to encourage others to use them for their own VVUQ analyses.14
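To illustrate the flavour of such a perturbation-based UQ, the sketch below brackets a pressure drop prediction by perturbing two inputs over their assumed uncertainty ranges. The pressure-drop function, voxel size, and all numerical values are hypothetical stand-ins for illustration only; they are not the authors’ CFD pipeline or data.

```python
import itertools

def pressure_drop_mmHg(diameter_mm, flow_lpm):
    """Hypothetical stand-in for the CFD pressure-drop prediction.

    A Bernoulli-type relation (dP ~ Q^2 / d^4) is used purely for
    illustration; in practice each evaluation would be a full simulation.
    """
    k = 4.6e3  # illustrative constant, chosen so the nominal case sits near 20 mmHg
    return k * flow_lpm**2 / diameter_mm**4

# Nominal patient-specific inputs (illustrative values)
d0, q0 = 8.0, 4.0           # stenosis diameter [mm], flow rate [L/min]
voxel = 0.7                 # image voxel size [mm]

# Perturb the two uncertain inputs: +/- 1 voxel on diameter, +/- 10% on flow
diameters = [d0 - voxel, d0, d0 + voxel]
flows = [0.9 * q0, q0, 1.1 * q0]

dps = [pressure_drop_mmHg(d, q) for d, q in itertools.product(diameters, flows)]
print(f"pressure drop range: {min(dps):.1f}-{max(dps):.1f} mmHg")
print("straddles 20 mmHg threshold:", min(dps) < 20.0 < max(dps))
```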

A more sophisticated exploration of uncertainty around a catheter-derived clinical measurement is reported by Fossan et al.18 Here the authors focus on coronary fractional flow reserve (FFR), which measures the functional impairment caused by a stenosis, and can be used to optimally decide treatment. This is the market that Heartflow® is trying to disrupt; however, unlike Heartflow’s 3D models, here the authors reduce their coronary trees to faster 1D–0D models. By first validating (or verifying?) that their reduced-order models reasonably predicted FFR compared to the full 3D problem, the authors were able to perform more extensive and statistically robust UQ and sensitivity analyses than would be practical in 3D. Based on data from 13 patients, and by varying up to 8 input parameters, uncertainty in the predicted FFR was shown to be driven by the parameter controlling the hyperemic response of peripheral resistance and hence blood flow rates, whereas the more obvious culprit, stenosis severity, had only modest influence for cases around the clinical threshold of FFR = 0.8. This led the authors to recommend that improved measurement of coronary blood flow might be the best way to help reduce uncertainty in computational FFR.
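To give a concrete, if greatly simplified, picture of such a sensitivity analysis, the following Monte Carlo sketch uses a toy algebraic stand-in for a reduced-order FFR model. It is not the authors’ 1D–0D model; all parameter names, distributions, and the correlation-based sensitivity proxy are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20_000

# Uncertain inputs (all distributions and values are illustrative only)
Pa = rng.normal(93.0, 8.0, N)       # aortic pressure [mmHg]
R  = rng.normal(0.37, 0.07, N)      # hyperemic peripheral resistance [mmHg*min/mL]
a  = rng.normal(0.04, 0.008, N)     # viscous loss coefficient of the stenosis
b  = rng.normal(1.4e-4, 3e-5, N)    # expansion loss coefficient of the stenosis

# Toy stand-in for a reduced-order coronary model: hyperemic flow is set by
# the peripheral resistance, with a linear-plus-quadratic trans-stenotic loss
Q   = Pa / R                         # hyperemic flow [mL/min]
dP  = a * Q + b * Q**2               # pressure drop across the stenosis [mmHg]
FFR = np.clip((Pa - dP) / Pa, 0.0, 1.0)

# Squared correlation as a cheap proxy for a first-order sensitivity index
for name, x in {"Pa": Pa, "R_peripheral": R, "a_viscous": a, "b_expansion": b}.items():
    print(f"{name:13s} sensitivity ~ {np.corrcoef(x, FFR)[0, 1]**2:.2f}")
print(f"P(FFR < 0.80) = {np.mean(FFR < 0.80):.2f}")
```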

Returning to the theme of inter-laboratory comparisons, our special issue includes three papers arising from the FDA’s Critical Path Initiative for CFD. The first paper, by Hariharan et al.,20 focuses on a centrifugal blood pump model developed by the FDA for the ultimate purpose of validating CFD predictions of hemolysis in medical devices. Here the authors report the results of a round-robin comparison of particle image velocimetry (PIV) measurements, since PIV will, ultimately, serve as the reference standard against which CFD velocity predictions must be validated. Three independent laboratories followed a standardized protocol to measure velocity fields at key pump locations, each for six pump flow conditions. Overall, variabilities of about 10% were reported for mean velocities, although these reached up to 30% at certain locations and/or for peak velocities. These and related FDA validation datasets are provided online,16 and serve as a reminder that, no matter how precisely a CFD model is verified, it can only be validated to a certain precision owing to unavoidable uncertainties in the benchmark experimental measurements themselves.

Also part of the FDA Critical Path Initiative is a two-part study of a patient-averaged inferior vena cava (IVC) model developed for preclinical testing of IVC filters. Part I, by Gallagher et al.,19 describes the model, which accounts for both the curvature and elliptical cross-section of the IVC lumen, and benchmark PIV measurements made under both resting and exercise conditions. Velocity profiles measured on planes through the iliac vein inlets confirm the intended parabolic flow, while the IVC itself demonstrates non-trivial skewing in the coronal plane and blunting in the sagittal plane, both enhanced at the higher flow rate. Part II, by Craven et al.,12 verifies a corresponding CFD model of the IVC geometry, and validates it against the PIV measurements. The authors also take advantage of the 3D nature of the CFD velocity fields to highlight swirl and mixing, which are key determinants of the transport of the emboli that IVC filters are designed to catch. Generally excellent agreement with the PIV measurements is demonstrated, albeit with some discrepancies that are shown to be caused by uncertainties in the measured flow rates and in the matching of the PIV and CFD models. The IVC model geometry and benchmark PIV data are provided online.26

A different approach to in vitro validation is presented by Ruesink et al.,25 focusing on the validation of pulse wave velocity (PWV) measurements from 4D Flow MRI, a non-invasive way of characterizing vessel wall stiffness locally. Thin-walled tubes of different stiffnesses, perfused under pulsatile flow, were imaged with a high-speed camera in order to calculate the reference elastic moduli from the wall displacements. Flow rate waveforms, measured at both ends via ultrasonic flow meters, were used to infer the reference PWV. 4D Flow MRI measurements were then used to extract flow rate waveforms along the tube length, from whose progressive transit the PWV was calculated using several popular algorithms. These complex and detailed benchtop experiments lend support to the idea that non-invasive 4D Flow MRI could ultimately become a “one-stop shop” for in vivo measurements of both cardiovascular fluid and structural mechanics, either complementing (or perhaps one day replacing) computational models.
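As a concrete, much-simplified example of the transit-time idea behind such algorithms, the sketch below estimates PWV from two synthetic flow waveforms at planes a known distance apart, using cross-correlation as one of several possible delay estimators; the waveforms, distance, and temporal resolution are invented for illustration and are not the authors’ data.

```python
import numpy as np

def transit_time(q_prox, q_dist, dt):
    """Delay between two flow waveforms via cross-correlation.

    Cross-correlation is only one of several popular estimators
    (foot-to-foot and time-to-peak being common alternatives).
    """
    qp = q_prox - q_prox.mean()
    qd = q_dist - q_dist.mean()
    lags = np.arange(-len(qp) + 1, len(qp))
    xcorr = np.correlate(qd, qp, mode="full")
    return lags[np.argmax(xcorr)] * dt

# Synthetic example: a Gaussian flow pulse arriving 20 ms later at a plane
# 0.10 m downstream, so the true PWV is 5 m/s
dt = 1e-3                                   # temporal resolution [s]
t = np.arange(0.0, 1.0, dt)
pulse = lambda t0: np.exp(-((t - t0) / 0.05) ** 2)
q_prox, q_dist = pulse(0.20), pulse(0.22)

dx = 0.10                                   # distance between analysis planes [m]
print(f"estimated PWV: {dx / transit_time(q_prox, q_dist, dt):.1f} m/s")
```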

4D Flow MRI also features in the study by Boccadifuoco et al.,5 on the hemodynamic behaviour of a healthy thoracic aorta. MRI flow rates were used to inform the inlet velocity and outlet resistance boundary conditions of their patient-specific model, while the predictions of fluid–structure interaction (FSI) vs. fluid-only (CFD) simulations were validated against the MRI velocities within the aorta. As expected, wall compliance had a non-negligible effect on the hemodynamic outputs, and so the authors then applied a stochastic analysis based on the Polynomial Chaos approach to model the uncertainty in aortic stiffness, with values ranging from experimentally measured to pathological. This approach allowed response surfaces of the quantities of interest to be obtained across the parameter space from just a few deterministic simulations. The idea of saving computational time while maintaining good predictive accuracy is surely an avenue that needs to be pursued.
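For readers unfamiliar with the approach, the sketch below illustrates the basic idea of a non-intrusive polynomial chaos response surface in one dimension: a handful of deterministic runs, here a hypothetical algebraic stand-in for an FSI simulation, are fitted with Legendre polynomials, after which output statistics follow almost for free. The model, stiffness range, and output quantity are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Hypothetical stand-in for one deterministic FSI simulation per stiffness value
def model(stiffness_mpa):
    return 1.2 + 0.8 / stiffness_mpa          # e.g., some wall-shear-related output

lo, hi = 0.5, 3.0                              # stiffness range [MPa]: healthy to pathological
xi_train = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])          # a few deterministic runs
stiff_train = 0.5 * (hi - lo) * (xi_train + 1.0) + lo      # map [-1, 1] -> [lo, hi]
y_train = model(stiff_train)

# Least-squares fit of Legendre (polynomial chaos) coefficients up to degree 3
degree = 3
V = np.polynomial.legendre.legvander(xi_train, degree)
coeffs, *_ = np.linalg.lstsq(V, y_train, rcond=None)

# For a uniformly distributed input, the mean is the zeroth coefficient and the
# variance follows from orthogonality of the Legendre basis
mean = coeffs[0]
var = np.sum(coeffs[1:] ** 2 / (2 * np.arange(1, degree + 1) + 1))
print(f"surrogate mean = {mean:.3f}, std = {np.sqrt(var):.3f}")
```

In several dimensions the same idea extends to tensorized or sparse polynomial bases, which is what makes the approach attractive when each deterministic run is an expensive FSI simulation.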

Also related to the thoracic territory is the work by Campobasso et al.,7 which focuses on FSI modelling of ascending thoracic aortic aneurysms (aTAAs). Verification of their models was performed for mesh size in both fluid and solid domains, as well as for several other numerical settings. Flow eccentricity was validated against 4D Flow MRI with errors of ~ 25%, capturing well the in vivo deviation of velocity away from the aortic centerline and the jet flow impingement against the aortic wall. The authors went on to investigate the effects of combinations of possible patient-specific factors, showing that patients with increased aTAA stiffening may have higher peak wall stress when this is associated with high peripheral resistance, and hence may be subject to higher risk of aTAA rupture. In vivo validation will be needed to confirm these findings, as improved prediction of aneurysm rupture risk remains one of the holy grails of cardiovascular modelling.

The paper by Luraghi et al.22 keeps the reader focused on verification, here related to the simulation of an idealized trileaflet heart valve model. The authors analysed the effects of the choice of element type, formulation, and damping factor on both the solid and fluid dynamic quantities, and showed that quadratic shell elements with a particular hourglass control provide the best compromise between model complexity and computational efficiency for discretizing the heart valve, while also demonstrating that careful attention must be paid to the damping factor to avoid spurious fluctuations without artificially suppressing real ones. This study also clearly shows that FSI, as opposed to structural analysis alone, is required to properly model the valve deformations, owing to nonuniform dynamic pressures on the valve leaflets. A study like this reminds the reader that proper verification is always necessary before selecting the elements and parameters to be used in a simulation.

The paper by Tango et al.27 continues the focus on heart valve dynamics. The first part of the study focuses on validation, wherein an FSI model of the aortic valve and root was created to mimic an in vitro setup with a stent-supported porcine valve that had previously been studied with PIV under physiological operating conditions. The numerical and experimental velocity fields in the sagittal cross section of the aortic root were compared at different instants of the cardiac cycle, showing good agreement, especially once valve opening had been completed. Subsequently, the authors performed FSI simulations with the stent removed, aortic root compliance included, and the fluid properties adjusted, in order to model a “healthy” state of the valve. The main message of the authors is that, once validated against available benchmark experimental data, a computational model can be used with more confidence to investigate scenarios free of the limitations and artefacts that may have been unavoidable in the experimental study.

The last two contributions to our special issue are linked by their focus on stent behaviour in coronary arteries. Conway11 reviews how the fracture of coronary stents is reported in clinical studies, and how this occurrence is correlated with adverse outcomes such as in-stent restenosis and/or thrombosis. The paper also considers how stent fracture can be investigated by means of either physical experiments or computational simulations. The author discusses how the fracture of coronary stents appears to be generally under-reported in the literature, as most patients are asymptomatic, and highlights the need for refined testing and properly designed validation studies in order to predict stent fracture. Stent fracture is also suggested as a potential cause of in-stent restenosis, since the altered stresses and the fractured device can change the local hemodynamic state.

Nikishova et al.23 tackle a key aspect of this adverse event via detailed UQ of a 2D multiscale model of in-stent restenosis. Their model takes into account different temporal sub-processes, including stent deployment and the post-deployment cellular and drug-eluting responses, and their analysis showed that deployment depth and endothelial regeneration time are the most influential model parameters. This multiscale and multivariable analysis provides a prime example of how UQ allows deeper insight into the dominant mechanisms (biological, mechanical, transport, and fluid dynamic) in such a complex multiphysics problem. Recent progress in coronary artery imaging, for example with optical coherence tomography, now provides a clear and detailed description of a stented segment in terms of stent strut penetration into the arterial wall and strut malapposition.8 Extension to 3D is a natural next step for this study, together with a deeper understanding of the biological processes involved in in-stent restenosis.

We hope that the above-described studies, comprising our special issue, will inspire other cardiovascular modellers to perform their own VVUQ studies, for the benefit of our community, our clinical colleagues, and, of course, the patients whose lives we all ultimately want to make better. We also hope that this special issue will help make CVET the “go-to” journal for this important topic.

[Figure: Editors’ meeting, 2018 World Congress of Biomechanics, Dublin.]