Pharmacodynamic Evaluation: Cardiovascular Methodologies

  • Ivana I. Vranic
Living reference work entry

Abstract

Cardiovascular methods for assessing pharmacodynamics are evolving rapidly, driven by progress in high technology and the IT sector. Notably, new mathematical algorithms for analyzing the heart signal are developing quickly, and many areas of multiple organ damage assessment will rely on highly complex software and hardware innovations. This growth rests on the understanding of previously unknown mechanisms controlling physiological function, such as heart stiffness and compliance, and on research into Shannon's entropy and calculations derived from it. On the other hand, some older methods, such as arterial pulse methods, have been surpassed in pharmacodynamic research. It is also important to take into account rare diseases and various channelopathies that may interfere with pharmacodynamic evaluation in large-scale clinical trials; in phases III and IV of clinical research, these factors may influence the final statistical results. Both new tests and proven measures of hemodynamic stability are required to evaluate new therapeutics during clinical studies, so that more people can be treated on a pharmacogenetic basis with a pharmacogenomic approach. Safety of treatment with new drugs comes first, so many requirements apply to the monitoring of data gathered by contract research organizations (CROs) in order to obtain approval from the U.S. Food and Drug Administration (FDA) and the European Medicines Agency (EMA), the European Union's equivalent of the FDA. These approvals rely mainly on pharmacodynamic data pooled from clinical drug research. To make adverse effects more rapidly accessible, they are collected via wireless technologies and monitored on a wide basis across multicenter studies. Guidelines on consistent methodology for new therapeutic approaches are therefore adopted and updated constantly.

Introduction

Understanding the pharmacodynamics of a novel investigational drug is essential for its clinical utility, especially when it comes to safety of use in the human population. Therefore, the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH) regularly provides updates and guidance on consistent methodology for new therapeutics by adopting quality, safety, efficacy, and multidisciplinary guidelines (www.ich.org). ICH is unique in bringing together regulatory authorities and the pharmaceutical industry to discuss the scientific and technical aspects of drug registration; its guidance is discussed by panels of key opinion leaders and adopted worldwide.

Studying the pharmacodynamics of a novel therapeutic is important from the aspect of exposure/response relationships, weighed as the relationship of pharmacokinetics (PK) to pharmacodynamics (PD). This PK/PD relationship is central to using experimental data for sound decision-making when drug development is translated into clinical practice.

Also, pharmacodynamics is not the same in pediatric and/or elderly populations; defining adequate study population groups is therefore essential for clinical drug investigation.

A novel approach is pharmacogenetics, which addresses heterogeneity in drug response, whereas pharmacogenomics tends to evolve into personalized medicine, meaning "the right dose, of the right drug, for the right person." There is therefore a need to target the individual responses of those patients who will benefit the most and suffer the least. Nevertheless, genetic variation in subgroups of patients will create different pharmacodynamic responses.

Various newly recognized channelopathies require a new pharmacodynamic perspective in the evaluation of innovative therapeutics. These include the conditions previously described involving the nervous system (e.g., generalized epilepsy with febrile seizures, familial hemiplegic migraine, episodic ataxia, hyperkalemic and hypokalemic periodic paralysis), the cardiovascular system (e.g., long QT syndrome, short QT syndrome, Brugada syndrome, and catecholaminergic polymorphic ventricular tachycardia), the respiratory system (e.g., cystic fibrosis), the endocrine system (e.g., neonatal diabetes mellitus, familial hyperinsulinemic hypoglycemia, thyrotoxic hypokalemic periodic paralysis, familial hyperaldosteronism), the urinary system (e.g., Bartter syndrome, nephrogenic diabetes insipidus, autosomal-dominant polycystic kidney disease, hypomagnesemia with secondary hypocalcemia), and the immune system (e.g., myasthenia gravis, neuromyelitis optica, Isaac syndrome, and anti-NMDA [N-methyl-D-aspartate] receptor encephalitis).

Validation of Cardiovascular Test Criteria

Pharmacological drug testing in the clinical setting has five phases. Phase 0 is a newly introduced step in which a microdose of the drug is tested over a short period (1, 3, or 7 days) to assess its suitability to enter Phase I.

Phase I refers to the first introduction of a drug into humans, usually 20 to 80 healthy volunteers.

Phase II investigation consists of controlled clinical trials designed to demonstrate effectiveness and relative safety (drug vs. placebo).

Phase III trials are performed after the drug has been shown to be effective and are intended to gather additional evidence of effectiveness for specific disease indications and more precise definition of drug-related adverse effects and safety. This phase includes both controlled and uncontrolled studies.

Phase IV trials are conducted after the national drug registration authority has approved a drug for distribution or marketing. These trials may include research designed to explore a specific pharmacological effect, to establish the incidence of adverse reactions, or to determine the effects of long-term administration of a drug.

All cardiovascular tests planned by the protocol should be monitored for validity, objectivity, and repeatability. Usually, an outsourced contract research organization (CRO) is responsible for quality control of the equipment.

The principal investigator is responsible for minimizing individual risk and optimizing the ethical benefit of the study to the group. Public sanctioning of the study is done by the Ethics Committee. In considering the usability of the chosen test, it should provide a tangible and measurable result of drug action. As for the quality criteria of a new drug, it should unequivocally prove its safety in the human population.

Empirical Quality Criteria

The usability of a method can be quantified by a formal assessment of empirical quality criteria based on test-theoretical principles:
  • Objectivity : Objectivity is the extent of investigator independence in conducting the test, analyzing its results, and interpreting its data. Testing should be done in double-blind fashion with strict standardization and with between-observer agreement and consistency.

  • Reliability and sensitivity : A standard error is accepted because it reflects physiological variability between subjects and between the methods used for quantification. A test or method is reliable if it is hardly subject to such variability and yields highly consistent results when repeated, although this does not imply that the results are objective, which is a matter of the sensitivity of the test.

  • Pharmacosensitivity and pharmacospecificity : Pharmacosensitivity is the capacity to detect drug-induced systematic effects; it reflects the reliability of the method selected for quantification and the intrasubject repeatability of drug-related changes. Some of the observed variation also belongs to circadian rhythm and postprandial effects; the ability to separate these from drug effects is a proof of specificity.

  • Economy : A method is preferable if it can be repeated several times in a cost- and time-efficient way, ideally with electronic wireless data transfer.

  • Validity : Methods are valid if they measure what they claim or intend to measure.

Assessment of agreement among observers is important in evaluating test objectivity; agreement when testing the method on different manufacturers' equipment is equally essential in this regard.
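
As a minimal illustration (a sketch, not part of any guideline), between-observer agreement on categorical test readings can be quantified with Cohen's kappa; the two observers' readings below are hypothetical:

```python
# Cohen's kappa: chance-corrected agreement between two raters (hypothetical data).
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Observed agreement corrected for agreement expected by chance."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Two observers classifying the same ten ECG tracings as normal (N) or abnormal (A)
a = ["N", "N", "A", "N", "A", "A", "N", "N", "A", "N"]
b = ["N", "A", "A", "N", "A", "A", "N", "N", "N", "N"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # 0.58: moderate agreement
```

Continuous measurements such as blood pressure readings would instead call for an intraclass correlation coefficient or Bland-Altman analysis.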

Issues with Cardiovascular Test Methodology and Measurements Validity

It is important that measurement equipment be calibrated and standardized so as to provide intrasubject repeatability and test objectivity in the drug-host reaction. This also reduces interobserver variability, and correlation and regression analyses of drug effectiveness and safety become more accurate as well.

Clinical Trial Legal Regulations and Good Clinical Practice (GCP)

Clinical trial by definition is any investigation in human subjects intended to discover or verify the clinical, pharmacological, and/or other pharmacodynamic effects of an investigational product(s), and/or to identify any adverse reactions to an investigational product(s), and/or to study absorption, distribution, metabolism, and excretion of an investigational product(s) with the object of ascertaining its safety and/or efficacy. The terms clinical trial and clinical study are synonymous (ICH GCP 1.12).

Equally important is the regulation that describes the standard for the design, conduct, performance, monitoring, auditing, recording, analysis, and reporting of clinical trials, providing assurance that the data and reported results are credible and accurate and that the rights, integrity, and confidentiality of trial subjects are protected (ICH E6: GCP).

The need for a more efficient approach in clinical trials extends to design, conduct and oversight, and recording and reporting, with new tools to trace immediate results. Wireless technology now enables instantaneous monitoring over long distances.

A document that describes the objective(s), design, methodology, statistical considerations, and organization of a trial is called the clinical trial protocol. The protocol usually also gives the purpose and rationale for the trial, but these may be provided in other protocol-referenced documents (ICH GCP 1.44). Throughout the ICH GCP Guidance, the term protocol refers to the protocol and its amendments.

Cardiovascular Tests in Pharmacodynamics

Circadian Blood Pressure

Purpose and Rationale

Arterial blood pressure (ABP), also called systemic blood pressure, is the pressure exerted by circulating blood upon the wall of the arterial vessel. It derives from blood ejected out of the left ventricle during systole and from the pulsatile wave and its sequential modulation as the pulse wave progresses through the rest of the circulation.

Due to the transformation of the pulse wave, the maximum ("systolic" BP [SBP]) and minimum ("diastolic" BP [DBP]) blood pressures are reflections of the central hemodynamics. Since blood pressure declines almost exponentially over diastole, bradycardia has a direct DBP-reducing effect unrelated to central pump function and peripheral vascular resistance; this is often associated with a relatively higher SBP due to the higher preload of the longer filling phase.

For BP estimation there are a few international guidelines to consult: the eighth report of the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure (JNC 8, the American hypertension guideline) and the National Institute for Health and Care Excellence guidance (2011, 2015, 2016) Hypertension: clinical management of primary hypertension in adults (NICE, the British hypertension guideline) are widely known and accepted, although some countries have local guidelines (Chiang 2017).

Procedure

Blood pressure is still mostly measured noninvasively according to the principle introduced by Riva-Rocci in 1896, i.e., by inflating a cuff around the upper arm up to arterial compressive occlusion and then slowly deflating the cuff while the pressure in the cuff is measured. Originally, the cuff pressure was measured by means of a mercury sphygmomanometer; most present devices use an aneroid manometer or electronic pressure transducer; nevertheless, blood pressure is still generally reported in millimeters of mercury (mmHg).

Measuring the pulse signals originating from systolic and diastolic blood pressure can be achieved by palpation, auscultation, or oscillometry at a suitable vessel site distal to the cuff. Palpatory (systolic) blood pressure is now used only for emergency evaluations; manually operated auscultatory blood pressure with an appropriately adjusted cuff has long been considered the method of choice for clinical practice (using aneroid manometers) and clinical trials (using a random-zero mercury sphygmomanometer). Blood pressure is now mostly measured by means of oscillometric devices with automated inflation and deflation; the increasingly frequent use of automated devices has the implication that many clinicians and nurses are no longer sufficiently well acquainted with the manual auscultatory methods, which rely on careful, highly observer-dependent auscultation of the Korotkoff-I sound (first appearance of a clear tapping sound that gradually increases in intensity) for SBP and the Korotkoff-IV (sound muffling [DBPKIV]) or Korotkoff-V criterion (sound disappearance [DBPKV]) for DBP. On the other hand, modern technology has introduced robust devices that can be self-operated by patients and trial subjects, including devices that measure BP at the wrist.

Home BP monitoring (HBPM) has been heralded as a useful and reliable measure of BP (Chia et al. 2017). Rather than replacing clinic BP measurements and ambulatory BP monitoring (ABPM) , HBPM is a complementary tool that helps to eliminate the white-coat effect and identify masked hypertension. HBPM can also improve patient awareness and treatment adherence by giving all patients with hypertension and their family members a more active role in the management of the disease (Shrout et al. 2017).

Evaluation

The shift from manual (mercury-based) auscultatory to automated oscillometric methods has been the subject of controversy, which has only partly been resolved by imposing standardized (cross-) validation procedures.

Automated systems are highly robust and economical since they do not rely on an experienced analyst. However, this may result in less well-standardized conditions of measurement through lack of experience and discipline (choice of cuff, position of the cuff, position of the microphone or oscillometric sensor, inflation speed, deflation speed, adjusted deflation speed when the pulse rate is low or irregular, posture of the patient, resting time, etc.) (Wood et al. 2017).

On the other hand, this makes measurements made primarily for safety surveillance more reliable and useful also for further (efficacy-based or pharmacodynamic) assessments (Butlin and Quasem 2017).

Critical Assessment of the Method

Only invasive blood pressure monitoring (in the intensive care unit (ICU)) (Zhou et al. 2017) is superior to the noninvasive Riva-Rocci principle discussed above. Discrepancies between the two methods can be kept to 5% or less, provided that all instructions in the technical manual are followed properly.

Ambulatory Blood Pressure Monitoring (ABPM)

Purpose and Rationale

A single-time pressure measurement is only a snapshot of the blood pressure pattern, which normally fluctuates with activity and circadian rhythm during the day and night. Single measurements fail to detect blood pressure fluctuations due to autonomic modulation or physical and emotional stress, and the modification thereof by therapeutic intervention (McKinstry et al. 2015).

ABPM typically provides the following three types of information: an estimate of mean BP level, the diurnal rhythm of BP , and BP variability (Thomas et al. 2006).

This seemingly perfect approach to measuring night blood pressure (NBP), however, has important limitations. First, all currently available ambulatory BP monitors produce audible stimuli, which have been found to disturb sleep significantly in a substantial proportion of patients; furthermore, the correlation between NBP derived from ABPM and target organ damage tends to be weaker, with the lowest sleep quality mainly resulting from the repeated cuff inflations during overnight monitoring. Second, ABPM is not commonly employed in routine clinical practice for evaluating NBP, mainly because of its high cost and the inconvenience of performing multiple NBP measurements.

Similarly, the appropriateness and efficacy of antihypertensive interventions should take ambulatory blood pressure measurement/monitoring (ABPM) data into account also with regard to their chronobiological fluctuations (Lee et al. 2015). Indeed, numerous larger-scale outcome studies have shown that ambulatory blood pressure measurement/monitoring (ABPM) yields better predictors of cardiovascular events when compared to timed manual BP readings in the physician’s office or at home, even when the latter are taken carefully and in strict adherence with pertinent guidelines.

Procedure

The first device for noninvasive ambulatory BP (ABP) monitoring (ABPM) was developed in 1962 and subsequently modified by Sokolow et al. in 1966 (Sokolow et al. 1966). It used a microphone taped over the brachial artery, a cuff inflated by the patient, and a magnetic tape recorder for storing cuff pressure. Presently, most devices are automated and rely on the oscillometric analysis of the vascular sound.

A large variety of devices for ambulatory measurement are available. These devices generally provide for both event-triggered and automated oscillometric measurements according to a preset protocol of regular intervals (which may be set differently for day and night). It is important to use a device that has been validated independently, for instance, according to the protocol of the British Hypertension Society (O'Brien et al. 1993), that of the US Association for the Advancement of Medical Instrumentation (Association for the Advancement of Medical Instrumentation 1993), or both.

Evaluation

The interpretation of ABPM data should be based on standardized criteria (O’Brien et al. 2000; Verdecchia et al. 2004).

An average daytime ABP <135 mmHg systolic and 85 mmHg diastolic is generally considered normal for adults; levels <130/80 mmHg may be considered optimal. Subjects with daytime systolic average ABP values <130 mmHg can be considered to be at only minimal cardiovascular risk even if the reading in the physician’s office was higher (exclusion of white-coat hypertension).

In hypertension management, it is important to analyze both day- and nighttime readings, although the latter may have to be set at broader intervals in order not to disturb sleep. The day-night fluctuation is generally used to calculate the BP dip, defined as (1 − SBPsleeping/SBPdaytime) × 100, with categories such as nondipper (0–10%), dipper (10–20%), extreme dipper (>20%), and reverse dipper (<0%) (O'Brien et al. 1988). In healthy individuals, NBP decreases by 10–20% and increases promptly on waking. However, certain abnormal diurnal variation patterns have been described in which the nocturnal fall of BP may be more than 20% (extreme dippers), less than 10% (nondippers), or even reversed (reverse dippers). Nevertheless, in recent years, evidence has shown that nocturnal BP levels, rather than the circadian BP pattern, are more accurate in predicting BP-related mortality and morbidity, independent of mean BP and daytime BP levels.
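
The dip calculation and its categories translate directly into code; the following minimal sketch simply applies the formula above to mean daytime and sleeping systolic pressures:

```python
# Nocturnal dipping category from mean daytime and sleeping SBP,
# applying BP-dip = (1 - SBPsleeping/SBPdaytime) x 100 as defined above.
def dip_category(sbp_day: float, sbp_sleep: float) -> str:
    dip = (1 - sbp_sleep / sbp_day) * 100  # percent nocturnal fall
    if dip < 0:
        return "reverse dipper"
    if dip < 10:
        return "nondipper"
    if dip <= 20:
        return "dipper"
    return "extreme dipper"

print(dip_category(135.0, 120.0))  # dip of ~11.1% -> "dipper"
```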

Further important criteria are the overall BP variability and early morning surges, and the pulse pressure (SBP – DBP). Since most systems also report pulse rate, ABPM data can also be used to assess pulse rate variability.

Critical Assessment of the Method

Serial single-time BP measurements are needed to provide a complete and accurate diagnosis of hypertension and of the need for its treatment. Certainly, specific hours should be defined and sought in the evaluation of the efficacy of antihypertensive medication: for example, measurement at exactly the same time, with the same apparatus and the same personnel, before drug intake, 1 h after waking and urination, before breakfast, and after a fasting period of at least 6 h. Two measurements with a pause of 1 min should be taken and the mean value taken into account. When addressing the cardiovascular safety of a noncardiovascular therapeutic, we ought to take diurnal fluctuations into account as well. Finally, ABPM is a very important method in the clinical pharmacological evaluation of cardiovascular effects, especially during the nighttime period, because some people are dippers (hypotensive postural orthostatic tachycardia syndrome (Shibao et al. 2013) and/or Bradbury-Eggleston syndrome) and some are nondippers (nocturnal hypertension).

Electrocardiography

The standard 12-lead ECG is the good old standard test, well known for its unsurpassed diagnostic capacity in terms of accuracy, availability, and cost-benefit, as well as the feasibility of multiple repeats.

Purpose and Rationale

The electrocardiogram (ECG, also EKG) is the main noninvasive method among others less frequently used in everyday clinical practice. Back in 1901, Einthoven was the first to record the heart's electrical activity using a string galvanometer and the first to construct the forerunner of the contemporary ECG machine; he had already named the P wave, QRS complex, and T wave in 1895. The Nobel Prize for this achievement was awarded to him in 1924. Needless to say how important his discovery was for the whole of medicine! Those tiny surface signals from the transthoracic spread of the electrical activity of the heart are gathered during systole (contraction phase) and diastole (relaxation phase). What came later was of even greater importance: ECG analysis provides information on heart rate and the origin of the rhythm (sinus rhythm or other), the presence of premature atrial or ventricular beats (ectopy), the nature of intra-atrial conduction (existence of block), the timeline of atrial depolarization and repolarization, atrioventricular conduction properties (in search of a concealed pathway and/or block), intraventricular and transventricular conduction (search for an accessory pathway or block), and the ventricular depolarization and repolarization phases (hypoxic or hypothermal injury).

Also, information about cardiac memory, a new and topical concept, could be very helpful in analyzing drug effects in clinical trials. Notably, many rare diseases, which affect not more than 5 per 10,000 persons in the European Union and encompass between 6,000 and 8,000 different entities (affecting more than 30 million people in the EU alone), have their own ECG varieties and presentations still waiting to be addressed and fully recognized.

Procedure

Still relying on the pioneer work of Willem Einthoven, modern ECG diagnostics now involve digital recording, analysis, and archiving of the ECG tracings and related data (for instance, P-wave duration, PQ-interval , QRS duration, QT-interval , P-wave, QRS-wave, and T-wave vector amplitude and angle). The highest precision is achieved by recording the signals from the bipolar Einthoven leads (I, II, III) , amplified unipolar Goldberger leads (aVR, aVL, aVF) , and unipolar precordial Wilson leads (V1–V6) simultaneously for a sufficiently long time (10 s at least) and at a sufficiently high writing speed (25–50 mm/s).

Standard ECG recording is done with the patient at rest (12 leads recorded simultaneously, 25 mm/s paper speed, 10 mm/mV gain, and filter band settings from 0.05 Hz to 150 Hz).

Evaluation

Modern electrocardiography is no longer confined to the "reading" of ECG tracings recorded on paper and the measurement of relevant time sections (intervals, segments, durations) and amplitudes by means of an ECG ruler. It now usually consists of a sequence of finely tuned electronic data processing steps:
  • capturing the ECG lead signals;
  • obtaining a digital representation of each recorded ECG channel by analog–digital conversion and special data acquisition software or a digital signal processing chip;
  • processing the resulting digital signal by a series of specialized algorithms, which first condition the signal by removing noise, base-level variation, etc.;
  • mathematical analysis of the clean signals to identify and measure selected time segments and amplitudes (features) for interpretation and diagnosis;
  • secondary processing such as Fourier analysis and wavelet transform decomposition with vector feature extraction to provide input to pattern recognition-based programs;
  • logical processing and pattern recognition, using rule-based expert systems, probabilistic Bayesian analysis or fuzzy logic algorithms, cluster analysis, artificial neural networks, genetic or evolutionary optimization algorithms, and other techniques to derive conclusions, interpretation, and diagnosis;
  • reporting of the tracings, the data, and the conclusions drawn from the analysis, with proper sourcing of the information and the analysis steps.
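
As an illustration of the conditioning and feature-measurement steps, the sketch below band-pass filters a recording and marks R-peaks with a simple amplitude-plus-refractory-period rule; the 0.5–40 Hz band and the thresholds are assumptions for a clean resting recording, not a validated clinical algorithm:

```python
# Toy ECG conditioning and R-peak detection sketch (not a clinical algorithm).
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_r_peaks(ecg: np.ndarray, fs: float) -> np.ndarray:
    # Condition the signal: suppress baseline wander and high-frequency noise
    b, a = butter(3, [0.5 / (fs / 2), 40.0 / (fs / 2)], btype="band")
    clean = filtfilt(b, a, ecg)
    # Feature detection: amplitude threshold plus a 250 ms refractory distance
    peaks, _ = find_peaks(clean, height=0.6 * np.max(clean),
                          distance=int(0.25 * fs))
    return peaks

# Synthetic example: 1 mV R-wave spikes once per second on a 360 Hz recording
fs = 360.0
ecg = np.zeros(int(10 * fs))
ecg[int(fs / 2)::int(fs)] = 1.0
print(detect_r_peaks(ecg, fs))  # indices near 180, 540, 900, ...
```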

The ECG is central to diagnosing specific conditions such as acute myocardial infarction (STEMI or non-STEMI), myocardial ischemia, arrhythmia, and pericarditis; Brugada syndrome, the J point, and the Osborn wave belong to ST-segment evaluation. Apart from these, the ECG can indicate electrolyte disturbances such as hyperkalemia, hypokalemia, hypercalcemia, and hypocalcemia, as well as arrhythmogenic right ventricular cardiomyopathy (a condition known for sudden cardiac death (SCD) in young individuals); WPW syndrome can be diagnosed noninvasively by ECG alone. The available ECG algorithms differ in their complexity and accuracy, and recent analyses have shown lower accuracy rates than those initially reported by the original authors. In clinical practice, a few electrocardiographic features can help predict the accessory pathway (AP) site; they are summarized below:

Right-sided AP:
  • Precordial R/S transition (R/S > 1) at or beyond V2 (a)
  • Shorter P-delta interval ("P on delta" sign)
  • Increased degree of preexcitation (b)

Left-sided AP:
  • R/S in V1 > 0.5 (early transition) (a)
  • Longer P-delta interval
  • Decreased degree of preexcitation (b)

Free wall versus septal location:
  • Free wall: late R/S transition (> V3); negative delta wave in I, aVL
  • Septal: early R/S transition (≤ V3); positive delta wave in I, aVL

Superior versus inferior location:
  • Superior: dominant positive delta wave in II, III, aVF
  • Inferior: dominant negative delta wave in II, III, aVF

A QS pattern in V1 for a supero-septal location is suggestive of a para-Hisian AP; a QS pattern in II for an infero-septal location is suggestive of a sub-epicardial AP (c).

(a) May be inaccurate with very subtle or minimal preexcitation
(b) May vary depending on properties of the intrinsic normal AV conduction system
(c) Coronary sinus branch or diverticulum

These ECG algorithms and criteria may be limited or inaccurate in some subgroups of patients and specific AP types, including: (1) pediatric and congenital heart disease; (2) abnormal patient stature or extreme heart rotation; (3) minimal preexcitation degree; (4) multiple APs; (5) slowly conducting APs (Mahaim family).

Furthermore, it is important to emphasize that ECG prediction of the AP site considers only the ventricular insertion site of the AP fiber. Consequently, this site might not always coincide with the successful ablation site, since many APs have some degree of oblique course. Finally, although the 12-lead ECG is a valuable noninvasive tool for predicting the AP site in WPW patients, intracardiac mapping during the electrophysiologic study and successful catheter ablation remain the gold standard for AP localization.

Critical Assessment of the Method

Although the basic principles of electrocardiography are well known, there is an obvious need for standardization. However, such information usually relates to the conventional recording and interpretation of the ECG signals. In contrast, there is little guidance with regard to the complex electronic data processing that is now inherent to state-of-the-art electrocardiography.

Most ECG devices also print a single- or multichannel signal record on paper. Only such a (signed) hardcopy record may be accepted as reliable source documentation. However, caution is indicated, since many devices use thermal paper, which generally fades rapidly. The date/time stamp of such devices is usually not reliable, since it can easily be changed by the operator and/or is not automatically synchronized with a reliable time server.

Also, most modern ECG devices provide an automated analysis of the relevant ECG intervals (RR, PQ, QRS, QT), usually based on the averaged signals of a 10-s recording. Such analyses are often judged to be less reliable. This prejudice is unjustified in healthy subjects with mostly normal ECGs: there is generally good agreement between automated and manual analyses, and gross differences between the two in healthy subjects mostly relate to either artifactual or electrophysiological signal distortions (such as U-waves) that can easily be identified if the tracings are appropriately reviewed by an experienced analyst.

Most analyses, whether automated or manual, are subject to the constraint that it may prove difficult to identify the start of the Q-wave; for this reason, the atrioventricular conduction interval is often reported as the PR- instead of the PQ-interval. The PR-interval does not extend from the start of the P-wave to the R-peak, but to the intersection of the isoelectric ("zero") line with the upstroke of the R-wave. The "PR"-interval thus represents a simplification of the "PQ"-interval whenever the start of the Q-wave is not expressed or cannot be measured reliably. This simplification is highly convenient since it is far more easily standardized and/or automated.

It is noteworthy that such a simplification is not also generally adopted for the QT-interval. The QT-interval represents the sum of ventricular depolarization and repolarization, of which the former is relatively constant, less subject to drug effects, and less likely to be of arrhythmogenic relevance. The measurement of the QT-interval relies on two fiduciary points, the start of the Q-wave and the "end" of the T-wave, neither of which is sharply expressed; the precision of the estimated repolarization duration could be improved by measuring the "RT"-interval, i.e., from the peak of the R-wave to the end of the T-wave, since the former fiduciary point is more easily detected, standardized, and/or automated.

Automated ECG analysis usually also reports a clinical "diagnosis" of the condition reflected by the ECG, based on the rhythmicity and contour of the ECG cycles and using either medical or stochastic algorithms. The ECG contour is stereotypic, and deviations from a "normal" morphology may indeed reflect a more or less specific anomaly of cardiac rhythmicity and ectopy; sinus node pacemaker autonomy and function; intra-atrial, atrioventricular, intra- and transventricular signal spread; myocardial mass; myocardial depolarization and repolarization; myocardial energy balance; etc. Nevertheless, no automated diagnosis should be accepted unless reviewed, confirmed, and/or amended by an experienced electrocardiographer.

Relevant electrocardiographic time intervals and signal durations are affected by heart rate (HR) variations: the AV-nodal conduction time and the PQ/PR-interval shorten with increasing heart rate, and this fluctuation may be used as an index of autonomic function. The HR-dependency of the QT-interval is well known and has resulted in several approaches to "correct" the QT-interval for HR below or above 60 bpm, according to the Bazett, Fridericia, or Framingham regressions; however, these corrections apply a population-mean correction factor to all subjects, while there is convincing evidence of significant interindividual variability in the HR–QT relationship, implying that the best HR correction for QT should be estimated for each individual. This is hardly feasible, since it requires a number of "normal" QT measurements at varying HR for each subject; nomograms have been proposed to solve this problem. However, rather than "correcting" for HR variations, there might be interest in investigating the disparity of the RR–QT relationship as a more sensitive index of arrhythmogenic risk. Data from the International Long QT Syndrome Registry indicate that the probabilistic risk of developing malignant arrhythmias in patients with QT prolongation is exponentially related to the length of the QTc interval, but it remains unclear whether a QT prolongation predominantly related to HR (i.e., with normal QTc) would be without risk.
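
The three population-level corrections named above are easy to state explicitly; the sketch below applies them to a hypothetical QT of 360 ms at an RR of 0.8 s (75 bpm):

```python
# Common heart-rate corrections of the QT interval; qt and rr in seconds.
# These are the standard population formulas; as noted above, no single
# correction fits every individual.
def qtc_bazett(qt: float, rr: float) -> float:
    return qt / rr ** 0.5            # QTc = QT / RR^(1/2)

def qtc_fridericia(qt: float, rr: float) -> float:
    return qt / rr ** (1 / 3)        # QTc = QT / RR^(1/3)

def qtc_framingham(qt: float, rr: float) -> float:
    return qt + 0.154 * (1 - rr)     # linear regression correction

qt, rr = 0.36, 0.80                  # QT 360 ms at 75 bpm
for f in (qtc_bazett, qtc_fridericia, qtc_framingham):
    print(f"{f.__name__}: {f(qt, rr) * 1000:.0f} ms")  # 402, 388, 391 ms
```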

The time course of experimental ECG criteria reflects time effects both related and unrelated to the investigational medication. Assuming an additive response model, this results in two important steps in the management of such data: considering the data both untransformed (U) and as arithmetic changes (D) from the predose baseline, and matching the courses of these U- and D-data to the time course of the respective criteria during a medication-free control day ("time-matching"). Such time-matching using an extra control day within each treatment (placebo, therapeutic dose, supratherapeutic dose, active control) is costly, and the need for it is controversial.

In the setting of the ICH E14-Guideline, an investigational medication is accepted to be without QT/QTc-effect if the upper bound of the one-sided 95% confidence interval for the largest time-matched mean effect (i.e., of the changes from predose baseline relative to placebo) of the drug on the QTc interval excludes (i.e., is smaller than) 10 ms; the study is normally conducted in healthy volunteers investigating both a therapeutic and a (widely) supratherapeutic dose relative to a positive (active) and a negative (placebo) control in an experimental setting stringently powered to exclude an effect on the QTc interval exceeding 5–10 ms. This has been subject to extensive critique also because of well-founded biostatistical concerns and since a possible effect compartmentalization is not accounted for. When the largest time-matched difference exceeds this threshold, the study is termed “positive.” A positive study does not imply that the drug is pro-arrhythmic but influences the evaluations that need to be carried out during the further stages of drug development.
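
As a sketch of the threshold logic only (the actual E14 analysis is typically a mixed model over all time points and treatments), the upper bound of the one-sided 95% confidence interval can be checked as follows; the baseline- and placebo-adjusted QTc changes (ddQTc) are hypothetical:

```python
# Hedged sketch of the E14 threshold check: the upper bound of the one-sided
# 95% CI for the mean ddQTc at the worst time point must be below 10 ms.
import numpy as np
from scipy import stats

ddqtc = np.array([3.1, 5.4, 2.2, 6.0, 4.8, 1.9, 3.7, 5.1])  # ms, one time point
mean = ddqtc.mean()
se = ddqtc.std(ddof=1) / np.sqrt(len(ddqtc))
upper = mean + stats.t.ppf(0.95, df=len(ddqtc) - 1) * se    # one-sided 95% bound
print(f"mean {mean:.1f} ms, upper bound {upper:.1f} ms -> "
      f"{'negative' if upper < 10 else 'positive'} study at this time point")
```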

Most ECG systems operate as closed “black-boxes” with device-specific file formats and often nonpublic analysis algorithms. There have been several efforts to develop unified, platform- and device-independent solutions.

Vectorcardiography

Purpose and Rationale

According to the single equivalent dipole theory, developed in terms of body surface potentials, the electric field produced by the heart muscle can be represented at any instant by a single equivalent dipole, and this in turn by a mean instantaneous spatial vector; the voltage registered in any given lead is directly proportional to the projection of the instantaneous vector on the axis of the lead and inversely proportional to the cube of the distance between the dipole and the lead electrode. The vector concept and its implications were known to Einthoven and his contemporaries as well.
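
A toy illustration of the projection rule, assuming idealized frontal-plane Einthoven lead axes (lead I at 0°, II at +60°, III at +120°) and a hypothetical mean QRS vector:

```python
# Lead voltage as the projection (dot product) of the heart vector onto
# the lead axis; the heart vector here is hypothetical.
import numpy as np

deg = np.deg2rad
LEAD_AXES = {"I": deg(0.0), "II": deg(60.0), "III": deg(120.0)}

heart_vector = np.array([0.9, 0.5])  # frontal-plane mean QRS vector, mV
for lead, angle in LEAD_AXES.items():
    axis = np.array([np.cos(angle), np.sin(angle)])
    print(f"lead {lead}: {heart_vector @ axis:+.2f} mV")
# Einthoven's law holds by construction: II = I + III
```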

Procedure

The procedure is exactly as for the standard ECG with the 12-lead electrode system. The reference frame used to describe the orientation of the vector loops in the body is that recommended by the American Heart Association Committee: frontal, transverse (horizontal), and sagittal plane vectorcardiograms.

Evaluation

The spatial vectorcardiogram (VCG) (Fig. 1) may be considered to originate at the center of an imaginary cube and as projected to the sides of the cube to represent the frontal, transverse, and sagittal planes.
Fig. 1

VCG plot showing vector loops for P, QRS, and T wave (Reprinted from Yang H, et al. Spatiotemporal representation of cardiac vectorcardiogram (VCG) signals. Biomed Eng Online. 2012; 11:16 under license to BioMed Central Ltd. and CC BY License)

Critical Assessment of the Method

Analysis of the VCG should be done in a systematic manner if one is to obtain consistent interpretations. Just as with routine interpretation of the ECG tracing, vectorcardiographic analysis requires observation of the P, QRS, and T loop voltages and direction of inscription. The conventional planar VCG records the projected image of the cardiac spatial vector on the frontal, transverse, and sagittal planes. Vectorcardiography is superior to the ECG in some situations where acute myocardial infarction (AMI) could be overlooked due to the existence of left bundle branch block; in ventricular hypertrophy of left or right origin, atrial hypertrophy, some congenital heart diseases and valvular heart diseases; and in some pulmonary states: cor pulmonale, emphysema, and chronic obstructive pulmonary disease (COPD).

Signal Averaged ECG: Late Potentials

Purpose and Rationale

The high-resolution electrocardiogram (ECG) is defined as a body surface electrocardiographic recording that registers cardiac events not seen in the standard ECG. This is usually done by increasing both the time and the voltage scales of the recording instrumentation. However, as the ECG signal is amplified, there are sources of noise that can obscure very small cardiac signals. There are several possible sources of interfering noise, but the most significant of these noise signals are the electromyographic (EMG) signals from the skeletal muscles.

Computer-based methods can be used to decrease the effects of interfering noise signals. The most common method is known as signal averaging. Hence, the term signal-averaged ECG or SAECG is often used interchangeably with the term high-resolution ECG.
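
The noise reduction follows from averaging N aligned beats: uncorrelated noise shrinks by roughly the square root of N while the repetitive cardiac signal is preserved. A minimal sketch with synthetic beats:

```python
# Signal averaging sketch: averaging 250 aligned beats reduces uncorrelated
# noise by ~sqrt(250), i.e., from 0.20 to about 0.013 in these units.
import numpy as np

rng = np.random.default_rng(0)
template = np.exp(-np.linspace(-3, 3, 100) ** 2)          # idealized beat shape
beats = template + rng.normal(0, 0.2, size=(250, 100))    # 250 noisy aligned beats

averaged = beats.mean(axis=0)
residual = (averaged - template).std()                    # remaining noise level
print(f"noise per beat ~0.200, after averaging ~{residual:.3f}")
```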

Procedure

There have been several reviews of the SAECG in the literature. In addition, a combined American Heart Association, American College of Cardiology, and European Society of Cardiology Task Force SAECG report was published in each group's respective journal. A more recent report by an expert committee of the American College of Cardiology also provides guidelines for clinical use. A technical information report from the Association for the Advancement of Medical Instrumentation, published in 1998, specifies the technical characteristics of SAECG systems.

Evaluation

SAECG acquisition and analysis are based on the recording of three leads in an anatomically orthogonal configuration, referred to as the XYZ leads, similar to the coordinate axes used in geometry. The most notable of these systems is the Frank lead system, which uses a resistor weighting network and an extra lead position to form its XYZ lead set. Usually 250 accepted sinus beats (QRS complexes) are analyzed, but this number can be set lower or higher. Noise should be between 0 μV and 0.3 μV for a 200–300 beat acquisition analysis.

Critical Assessment of the Method

There are three parameters derived from the vector magnitude once the filter-processed noise is accepted for evaluation. QRS duration, the first one, is frequently referred to as fQRS or QRSd. It is the distance between QRS onset and QRS offset as measured on the timescale in milliseconds (ms); the cutoff for an abnormal QRS duration lies between 110 ms and 120 ms. The other two parameters, Root Mean Square voltage (RMS) and Low-Amplitude Signal duration (LAS), rely primarily on the QRS offset. Both are obtained from the filtered vector magnitude. The focus of these two parameters is the waveform of late potentials as a low-level "tail" adjoining the final oscillation of the QRS (QRS offset). The threshold of RMS for abnormal values is less than or equal to 20 μV, whereas LAS values greater than or equal to 20 ms are considered abnormal. If any of the mentioned parameters is abnormal, the SAECG is considered positive (abnormal).

The QRS duration is a measure of total ventricular activation time (VAT). VAT is the time from the earliest to the latest ventricular activation. Magnified many times in high-resolution mode, it delineates the termination of the low-level late potentials.

The Root Mean Square voltage (RMS) is usually displayed as a shaded region on the RMS time function depicting the last 40 ms before the QRS offset; the reference RMS voltage for this window is 20 μV. Thus, RMS represents a voltage measured over a defined time duration at the final oscillation of the QRS.

The Low-Amplitude Signal duration (LAS) is a duration based on a voltage measurement at the end of the QRS complex. A 40 μV voltage is the most commonly used reference point.
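
A hedged sketch of how the three parameters could be computed from a filtered vector magnitude (in μV) with known QRS onset/offset sample indices; the function and the toy signal are illustrative, with the 40 μV reference and 40 ms window taken from the description above:

```python
# Illustrative SAECG parameter computation from a filtered vector magnitude.
import numpy as np

def saecg_params(vm_uv: np.ndarray, fs: float, qrs_on: int, qrs_off: int):
    """Return fQRS (ms), RMS of the terminal 40 ms (uV), and LAS (ms)."""
    fqrs = (qrs_off - qrs_on) / fs * 1000.0            # filtered QRS duration
    n40 = int(round(0.040 * fs))
    rms40 = float(np.sqrt(np.mean(vm_uv[qrs_off - n40:qrs_off] ** 2)))
    qrs = vm_uv[qrs_on:qrs_off]
    above = np.nonzero(qrs >= 40.0)[0]                 # samples at/above 40 uV
    last_above = above[-1] if above.size else 0
    las = (len(qrs) - 1 - last_above) / fs * 1000.0    # terminal low-amplitude tail
    return fqrs, rms40, las

# Toy QRS: a plateau followed by a decaying low-amplitude tail, sampled at 1 kHz
vm = np.concatenate([np.full(100, 300.0), np.linspace(300.0, 5.0, 40)])
print(saecg_params(vm, fs=1000.0, qrs_on=0, qrs_off=len(vm)))
```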

Role in Pharmacodynamics

An innovative therapeutic may exert various effects in humans, among which late potentials can be successfully used to identify its capacity to induce proarrhythmic effects in a pathophysiological milieu. This is of potentially vital interest for establishing dose-dependent relationships or possible drug interaction contraindications.

Heart Rate Variability (HRV): 5-Minute Test and 24- and/or 48-Hour Recording

Quantifying autonomic nervous system (ANS) activity in the human body provides insight into disease severity across a vast range of diseases. Heart rate variability (HRV), calculated from either short-term or 24-h electrocardiograms, is an ideal way to assess ANS activity while giving an open view into the pharmacodynamics of a tested drug.

Purpose and Rationale

HRV analysis is based on the RR interval time series, the sequence of intervals between successive fiducial points of the R peaks of QRS complexes in the electrocardiogram. Notably, RR intervals are not equally sampled continuous signals, but rather an event series on a timeline. There are numerous methods and approaches for time-series analysis, some of them linear and others nonlinear.

Procedure

The duration of the recording depends on the method used for HRV analysis, but usually 24-h or 48-h time series are adequate. The aim of the study also determines the time period needed, as do stationarity issues. Frequency-domain methods (Fig. 2) use short recordings of 5–20 min. Typically, nonlinear methods are preferred for short-term measurements (Fig. 3). Electrodes are placed as for standard ECG recording, but special software is used for signal analysis. The patient is supine and then abruptly sits up; alternatively, the patient is supine, sitting, standing, and squatting for 5 min each while HRV is analyzed. For time-domain analysis, a 48-h recording is usually the obligatory sample.
Fig. 2

RR intervals and Poincaré plots during autonomic perturbations. RR interval time series for a single subject from all four phases of the study with corresponding Poincaré plots (Reprinted from Karmakar CK, et al. Sensitivity of temporal heart rate variability in Poincaré plot to changes in parasympathetic nervous system activity. Biomed Eng Online. 2011; 10:17 under license to BioMed Central Ltd. and CC BY License)

Fig. 3

Non-stationary RR interval (RRi) series during a maximal effort exercise (a) and the resulting scaled colors map (b) and surface plot (c). PSD power spectral density (Reprinted from Bartels R, et al. SinusCor: an advanced tool for heart rate variability analysis. Biomed Eng Online. 2017; 16(1):110 under license to BioMed Central Ltd. and CC BY License)

Evaluation

In practice, the high frequency (HF) domain component can be used as a measure of parasympathetic tone and vagal activity, but only if respiration is not forced or withheld. The low frequency (LF) domain component is considered to represent a sympathetic measure. Power spectral density (PSD) analysis (Fig. 4a, b) decomposes the signal into LF and HF components by two commonly used methods: fast Fourier transformation (FFT) and autoregressive modeling (AR). Many other mathematical approximations and calculations can be derived from this, but they are beyond the scope of this chapter.
Fig. 4

(a) Heart rate in a representative subject with normal rhythm: (a) phase space plot, (b) scalogram, (c) Wigner-Ville distribution. (b) Heart rate in a representative subject with AF: (a) phase space plot, (b) scalogram, (c) Wigner-Ville distribution. (Both panels reprinted from Faust O, et al. Analysis of cardiac signals using spatial filling index and time-frequency domain. Biomed Eng Online. 2004; 3(1):30 under license to BioMed Central Ltd. and CC BY License)
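
A sketch of the frequency-domain computation under common assumptions (cubic interpolation of the RR series to a 4 Hz grid, Welch PSD, LF = 0.04–0.15 Hz, HF = 0.15–0.40 Hz); the band limits are the conventional short-term HRV bands:

```python
# LF/HF decomposition of an RR series via Welch's PSD (common assumptions).
import numpy as np
from scipy.signal import welch
from scipy.interpolate import interp1d

def lf_hf(rr_s: np.ndarray, fs: float = 4.0):
    t = np.cumsum(rr_s)                           # beat times from RR intervals (s)
    grid = np.arange(t[0], t[-1], 1.0 / fs)       # evenly sampled time grid
    rri = interp1d(t, rr_s, kind="cubic")(grid)   # resampled RR signal
    f, psd = welch(rri - rri.mean(), fs=fs, nperseg=min(256, len(rri)))
    lf_band = (f >= 0.04) & (f < 0.15)
    hf_band = (f >= 0.15) & (f < 0.40)
    lf = np.trapz(psd[lf_band], f[lf_band])       # LF power (s^2)
    hf = np.trapz(psd[hf_band], f[hf_band])       # HF power (s^2)
    return lf, hf, lf / hf

# Synthetic series with a ~0.3 Hz (HF, respiratory-like) modulation
rr = 0.8 + 0.03 * np.sin(2 * np.pi * 0.3 * 0.8 * np.arange(400))
lf, hf, ratio = lf_hf(rr)
print(f"LF/HF = {ratio:.2f}")  # well below 1: HF dominates, as constructed
```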

Critical Assessment of the Method

The area under the power spectral density (PSD) curve is divided into three frequency bands: HF, LF, and VLF (very low frequency). These parameters depend greatly on equipment stability and surrounding noise. Since manual filtering of the derived signal can modify the results, it may influence the true-positive and true-negative test outputs.

Holter Monitoring 24 Hour ECG

Purpose and Rationale

Since the invention of the continuous ECG monitor in 1961 by Norman J. Holter, the methodologies and applications of continuous recording of the ECG have evolved tremendously. The pioneering work of Bruce Del Mar led to the first commercially available continuous ECG in 1962, and the methodologies have become refined to the degree that the devices now are very light and use solid-state memory to record up to a week’s worth of continuous ECGs. The original Holter monitors were primarily used to detect disturbances in the cardiac rhythm, but early studies investigated the presence and significance of ST-segment depression. There are three categories of ambulatory ECG monitors:
  1. Continuous monitors store the heart's electrical signals for the entire time the patient wears the device. Continuous monitors have two types:
     (a) Short term, known as 24-h or 48-h Holter monitors.
     (b) Long term, which can record for more than 48 h. In recent years, new technology has allowed ambulatory ECG monitors to have more memory while still being small and lightweight; these are known as efficient-memory Holter monitors and patch monitors (designed without the wires connecting electrodes to the recorder).
  2. Intermittent long-term monitors store the heart's electrical signals only when the monitor is triggered by the patient or by an abnormal heart rhythm. These monitors also have two types:
     (a) Event monitors, also known as post-event recorders, which typically store 5–7 min worth of data from the moment triggered.
     (b) Cardiac loop recorders, which continuously record new signals, erase old signals, and lock in data when triggered. They typically store 1–4 min worth of data. Loop recorders can be either external, worn around the waist or wrist, or insertable (also known as implantable), implanted under the skin in the left parasternal region (near the heart).
  3. Real-time cardiac telemetry systems, also known as mobile cardiac outpatient telemetry, are similar to long-term continuous monitors but can send the data directly to a central monitoring station instead of recording it to be downloaded later.

Procedure

In recent years, innovative engineering and advances in manufacturing have hastened the development of miniaturized medical devices and yielded a variety of cardiac monitors for ambulatory use (Fung et al. 2015). These recently developed wearable, “on-body” ambulatory devices have integrated microelectronics for short- to medium-term (days to weeks) monitoring and are challenging conventional, widely used devices from the last decades that were limited to wearable multi-lead 24−/48-h Holter monitors and event recorders. Further on the pioneering front, very short-term (seconds to minutes) handheld smartphone-enabled systems are beginning to reshape the field of mobile cardiac monitors as well as the clinician-patient interface. These systems require attachment of an electrode-embedded module to a smartphone that detects electrical impulses from the user’s fingertips and transmits signals to the mobile device to generate continuous single-channel ECG for the duration of the contact between the fingers and the sensor. Adhesive Ambulatory ECG (AECG) patch devices typically comprise a sensor system, a microelectronic circuit with recorder and memory storage, and an internal battery embedded in a relatively flexible synthetic matrix, resin, or other material. They are usually intended for medium-term use ranging from days to several weeks, depending on the device. The self-contained adherent unit typically has a low profile and can be affixed to the body surface, usually over the left upper chest area, by means of prefabricated adhesive material.

The main advantages of this kind of AECG system are that the devices are easy to use, leadless, minimally intrusive to daily activities, water-resistant, hygienic (i.e., single use only), and incur no upfront cost to the clinic for an initial device investment, as compared to wearable, reusable devices (Gulizia et al. 2016). Because the adhesive AECG patch is easy to apply to the skin and unobtrusive and maintenance-free to wear, these devices have a high study completion rate, implying a high acceptance rate (long wear time) that should translate into improved compliance compared to other short- to medium-term devices such as the Holter monitor.

Evaluation

Guidance has been provided for the continuous ECG monitoring in several clinical settings.

Originally, Holter-ECG analysis was mainly focused on rhythmicity (sinoatrial dysfunction, ectopy, atrial fibrillation, atrial flutter, paroxysmal tachycardia, accelerated rhythms with normal or aberrant configuration), atrioventricular conduction delays and blockade, intermittent changes in QRS morphology (parasystoles, ectopic rhythms), etc. Improved algorithms now also provide for the analysis of changes of the QT interval and the ST-T wave.

Several electrocardiographic-based methods of risk stratification of sudden cardiac arrest have been studied, including QT prolongation , QRS duration, fragmented QRS complexes , early repolarization, Holter monitoring, heart rate variability, heart rate turbulence, signal-averaged ECG, T wave alternans , and T-peak to T-end (Garcia et al. 2011). These ECG findings have shown variable effectiveness as screening tools (Verrier and Ikeda 2013).

Critical Assessment of the Method

One of the important advantages of continuous ECG recording is that it collects a vast amount of data under real-life conditions; this permits beat-to-beat analysis for a far more precise and valid interpretation of QT-related arrhythmogenic risk (Luebbert et al. 2016). Indeed, dynamic beat-to-beat QT interval analysis compares the QT interval to individual cardiac cycles from all normal autonomic states at similar RR intervals, thus eliminating the need for correction functions; in this way, beats with QT intervals exceeding a critical (subject-specific) limit can be flagged as outlier beats for further arrhythmia vulnerability assessment (Coris et al. 2016). Furthermore, such beat-to-beat techniques can also be used to assess the QT–TQ interval relationship known as ECG restitution (Olsen et al. 2015).

Further procedures allow evaluation of highly sensitive prognostic criteria, such as QT-dispersion , heart rate variability , and heart rate turbulence; other methods specifically conceived to quantify arrhythmogenic risk are under development (Abdelghani et al. 2016).

Additionally, continuous ambulatory electrocardiography provides for a better characterization of the diurnal variability and the implications thereof for the timing of drug administration.

Turbulence Onset on 24-Hour Holter ECG Monitoring

Purpose and Rationale

Heart rate turbulence (HRT) is a phenomenon discovered by Georg Schmidt's group in Munich in the mid-1990s. HRT is defined by minute changes in ventricular cycle length following premature ventricular contractions (PVCs) (Watanabe 2003). After a premature ventricular contraction, the normal response is a brief initial increase in heart rate, followed by a return to baseline. These changes are the result of PVC-induced hemodynamic disturbances, and the speed at which they happen ultimately provides information regarding cardiovascular autonomic function. Clinical investigations showed that patients in whom this fluctuation of sinus rhythm was absent had a higher mortality rate. Heart rate turbulence is quantified by two parameters: turbulence onset and turbulence slope (Fig. 5). Turbulence onset is the relative change in the RR interval caused by a premature ventricular contraction, and turbulence slope is the rate of change of the RR interval back to baseline. Heart rate turbulence can also be induced through intracardiac pacing performed in the electrophysiology laboratory or through an implanted pacemaker or ICD. One contemporary protocol for measuring induced heart rate turbulence involves computing turbulence slope and turbulence onset following 10 ventricular extrastimuli with a coupling interval of 60–70% of the sinus cycle length.
Fig. 5

Example of a RR interval (RRi) series during rest period, detrended with a 5 degree polynomial (upper panel) and the corresponding power spectral density (PSD) function estimated with Welch’s method (lower panel) (Reprinted from Bartels R, et al. SinusCor: an advanced tool for heart rate variability analysis. Biomed Eng Online. 2017; 16(1):110 under license to BioMed Central Ltd. and CC BY License)

Procedure

In 2008, the International Society for Holter and Noninvasive Electrocardiography (ISHNE) published a consensus statement on the standards of measurement, mechanisms, and clinical applications of HRT.

Evaluation

Heart rate is influenced by a myriad of intrinsic oscillations due to changes in posture, activity, mental state, or stress (Huikuri et al. 2009). Therefore, a plot of RR intervals has a jagged, stochastic appearance (tachogram). Computation of HRT uses the PVC as an anchor point. The PVC tachogram sequence should include the two sinus-rhythm RR intervals before the PVC, the coupling interval and compensatory pause, and the 15 subsequent sinus RR intervals. All intervals lacking a compensatory pause or contaminated by another PVC are excluded from the equation:
$$ \mathrm{TO}=\frac{\left({\mathrm{RR}}_1+{\mathrm{RR}}_2\right)-\left({\mathrm{RR}}_{-1}+{\mathrm{RR}}_{-2}\right)}{\left({\mathrm{RR}}_{-1}+{\mathrm{RR}}_{-2}\right)}\times 100 $$
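
The formula transcribes directly; in the sketch below, RR−2 and RR−1 are the sinus intervals preceding the PVC and RR1, RR2 those following the compensatory pause (the millisecond values are hypothetical):

```python
# Turbulence onset per the TO formula above; all intervals in ms.
def turbulence_onset(rr_m2: float, rr_m1: float, rr_1: float, rr_2: float) -> float:
    return ((rr_1 + rr_2) - (rr_m1 + rr_m2)) / (rr_m1 + rr_m2) * 100.0

# Early sinus acceleration after the pause gives a negative TO (normal response)
print(f"TO = {turbulence_onset(800, 810, 760, 780):+.1f} %")  # -4.3 %
```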

Critical Assessment of the Method

Broadly speaking, the capability of HRT to predict total mortality, cardiac mortality, and arrhythmic mortality ranks with, and on some occasions even exceeds, that of conventional linear HRV (Huikuri and Stein 2013). A limitation concerns patients with atrial fibrillation, in whom this analysis is not feasible.

Symbolic Dynamic Analysis: Theory of Chaos

Purpose and Rationale

The analysis of the symbolic dynamics of the heart rate describes the nonlinear features of HRV. In this technique, the RR intervals are labeled with different symbols based on their length. For shorter electrocardiographic recordings, for example, four different symbols can be used, and for longer 24-h recordings the number of symbols can be increased, e.g., up to six. After the definition of the symbols (the alphabet), words are formed that are three or four successive symbols in length and start from each successive beat. The complexity of the data time series is determined from the distribution of the words using appropriate mathematical methods (Voss et al. 1996).

Procedure

The conversion of a time series into a symbol string may be done using several methods (Fig. 6). The first divides the signal into two or more value ranges, depending on how many symbols we wish to utilize. Value ranges can be absolute bands or based on signal averages or the standard deviation (SD); labeling them, for example, A, B, C, and D yields a sequence such as ABCCDABAACBDACDCCBDABBADDCACC. The shape of the word distribution may itself act as a basis for further analysis, but it is also possible to measure the order of the distribution in terms of entropy. The simplest such measure is Shannon's entropy.
Fig. 6

Concept flow chart of the AIIA method (Reprinted from Huang YC, et al. Using n-gram analysis to cluster heartbeat signals. BMC Med Inform Decis Mak. 2012; 12:64 under license to BioMed Central Ltd. and CC BY License)
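
A compact sketch of symbolization followed by Shannon entropy of the word distribution; here the value ranges are quantile-based bands (one of the options mentioned above), with four symbols and three-symbol words:

```python
# Symbolic dynamics sketch: quantile-band symbolization of an RR series and
# Shannon entropy of the resulting word distribution.
import numpy as np
from collections import Counter

def shannon_entropy_of_words(rr: np.ndarray, n_bands: int = 4, word_len: int = 3):
    edges = np.quantile(rr, np.linspace(0, 1, n_bands + 1)[1:-1])
    symbols = np.digitize(rr, edges)           # one symbol (0..3) per interval
    words = [tuple(symbols[i:i + word_len])
             for i in range(len(symbols) - word_len + 1)]
    p = np.array(list(Counter(words).values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log2(p))             # entropy in bits per word

rng = np.random.default_rng(1)
rr = 800 + 50 * rng.standard_normal(1000)      # synthetic RR series, ms
print(f"H = {shannon_entropy_of_words(rr):.2f} bits")
```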

Evaluation

This method of symbolic dynamics is a useful approach for classifying the dynamics of HRV. By means of this method, the inner dynamics of the time series can be investigated (Gimeno-Blanes et al. 2016), whereas parameters of the time and frequency domains often leave these dynamics out of consideration. In comparison with all other methods of nonlinear dynamics (NLD) for HRV analysis, symbolic dynamics has the closest connection to physiological phenomena and is relatively easy to interpret.

Critical Assessment of the Method

HR fluctuations can be analyzed using many different methods and approaches; no single method is clearly superior to the others. The physiological interpretation of the results is often difficult, especially in the case of nonlinear methods, because the unpredictable portion of the HR fluctuation can be due to chaotic behavior and/or a stochastic component. The basic idea behind stochastic modeling is that the unpredictable component is not a perturbation but an essential part of the dynamical behavior of the system. Symbolic dynamics, however, provides a solid basis for Shannon entropy and related measures, which constitute a potent tool for analyzing autonomic modulation in pharmacodynamics.

Nonlinear Indexes of Cardiovascular Variability

Nonlinear theory has been gaining ground among physiologists and physicians as a means to explain the workings of biological phenomena, which are highly complex, dynamic, and interdependent, and in which the behavior of the system differs from the behavior of its parts or elements (Haugaa et al. 2010).

The power-law exponent, approximate entropy (ApEn) analysis, and detrended fluctuation analysis (DFA) are nonlinear methods recently introduced into the study of HRV.

Entropy is a measure of randomness or disorder, as embodied in the second law of thermodynamics, namely that the entropy of a system tends toward a maximum: different states of a system tend to evolve from ordered configurations to less organized ones. Applied to time series analysis, the ApEn provides a measure of the degree of irregularity or randomness within a series of data. It was introduced by Pincus (1991) as a measure of system complexity, with smaller values indicating greater regularity and higher values indicating more disorder, randomness, and complexity of the system. For instance, the ApEn of heart rate drops with age in both men and women, i.e., heart rate becomes more regular.
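For illustration, a compact implementation of ApEn with the customary parameters m = 2 and tolerance r = 0.2 × SD might read as follows (a sketch for short series; the pairwise distance matrix is quadratic in memory):

```python
import numpy as np

def approximate_entropy(x, m=2, r_factor=0.2):
    """Approximate entropy (Pincus 1991): phi(m) - phi(m + 1), with the
    tolerance r expressed as a fraction of the series SD."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def phi(m):
        n = len(x) - m + 1
        emb = np.array([x[i:i + m] for i in range(n)])      # embedded vectors
        # Chebyshev distances between all template pairs (self-matches included)
        dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        c = (dist <= r).mean(axis=1)                        # match fractions
        return np.log(c).mean()

    return phi(m) - phi(m + 1)
```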

The DFA is a technique that characterizes the pattern of variation across measurement scales. DFA has been specifically developed to distinguish between intrinsic fluctuations generated by the complex system and those caused by external or environmental stimuli acting on the system. The variations that arise from extrinsic stimulation are presumed to cause a local effect, whereas the intrinsic variations due to the dynamics of the system are assumed to exhibit long-term correlation.
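A minimal first-order DFA sketch, estimating the scaling exponent from the slope of log F(n) versus log n over window sizes of 4–16 beats (a common choice for the short-term exponent; averaging conventions differ slightly between implementations):

```python
import numpy as np

def dfa_alpha(x, scales=range(4, 17)):
    """DFA scaling exponent alpha via first-order (linear) detrending."""
    x = np.asarray(x, dtype=float)
    scales = np.asarray(list(scales))
    y = np.cumsum(x - x.mean())                  # integrated (profile) series
    f = []
    for n in scales:
        n_seg = len(y) // n
        segments = y[:n_seg * n].reshape(n_seg, n)
        t = np.arange(n)
        rms = [np.sqrt(np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t)) ** 2))
               for seg in segments]
        f.append(np.mean(rms))                   # fluctuation at this scale
    return np.polyfit(np.log(scales), np.log(f), 1)[0]
```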

The analysis of the Poincare plot (or Lorenz plot) is considered by some authors to be based on nonlinear dynamics. The Poincare plot is a two-dimensional graphical representation of the correlation between consecutive RR intervals, in which each interval is plotted against the next one. Its analysis can be done qualitatively (visually), by evaluating the shape formed by its attractor, which shows the degree of complexity of the RR intervals, or quantitatively, by fitting an ellipse to the figure formed by the plot, from which the indexes SD1, SD2, and the SD1/SD2 ratio are taken. SD1 represents the dispersion of points perpendicular to the line of identity and appears to be an index of instantaneous beat-to-beat variability (i.e., the short-term variability, which is mainly caused by respiratory sinus arrhythmia), while SD2 represents the dispersion of points along the line of identity and characterizes long-term HRV. The SD1/SD2 ratio shows the relationship between short- and long-term RR interval variations. Although the Poincare plot is primarily considered a nonlinear technique, it has been shown that SD1 and SD2 can be obtained as combinations of linear time domain HRV indexes (see the sketch below); therefore, alternative measures are still needed to characterize truly nonlinear features of Poincare plot geometry.
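Because SD1 and SD2 can be written in terms of the standard time domain statistics SDNN (SD of all RR intervals) and SDSD (SD of successive differences), the quantitative part of the analysis reduces to a few lines; the identities used below are the linear equivalences referred to above:

```python
import numpy as np

def poincare_sd1_sd2(rr):
    """SD1/SD2 from the time domain identities
    SD1^2 = 0.5 * SDSD^2 and SD2^2 = 2 * SDNN^2 - 0.5 * SDSD^2."""
    rr = np.asarray(rr, dtype=float)
    sdnn = rr.std()                    # overall RR variability
    sdsd = np.diff(rr).std()           # SD of successive differences
    sd1 = np.sqrt(0.5) * sdsd
    sd2 = np.sqrt(max(2 * sdnn**2 - 0.5 * sdsd**2, 0.0))
    return sd1, sd2, sd1 / sd2
```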

Fuzzy Logic Concepts

The possibility of using mathematical methods and theories for data analysis has opened up a range of possibilities for the study of the pathophysiological behavior of cardiovascular variability. Large volumes of data can be more easily assessed and analyzed with fuzzy logic. In order to better understand the onset and development of important pathologies, the autonomic nervous system activity can be explored through dynamical fuzzy logic models (Fig. 7), such as the discrete-time model and the discrete-event model. Fuzzy logic approaches are able to perform nonlinear mapping or predictions involving more than one cardiovascular parameter and to explore possible relations among these parameters that would otherwise not be considered. Fuzzy logic represents a flexible framework that adequately describes nonlinear and complex systems, since the resulting function can be written as a weighted linear combination of the system inputs and can therefore resemble a nonlinear function as needed. For this reason, fuzzy logic methods are a feasible solution to consider in the absence of a prior mathematical description of the input–output relationship.
Fig. 7

The Bland–Altman plot of rFuzzyEn of DPV for all subjects in the two groups. The black solid line indicates the mean, and the green dotted lines indicate the upper and lower bounds of the mean ± 2 SD, respectively (Reprinted from Ji L, et al. Analysis of short-term heart rate and diastolic period variability using a refined fuzzy entropy method. Biomed Eng Online. 2015; 14:64 under license to BioMed Central Ltd. and CC BY License)

Considering the Sugeno fuzzy logic formulation, the system output z can be modeled as
$$ z=\frac{\sum_{i=1}^{N}{w}_i{z}_i}{\sum_{i=1}^{N}{w}_i}, $$
where N corresponds to the number of fuzzy rules and ${z}_i=\sum_{j=1}^{n}{a}_{ij}{x}_j+{c}_i$ is a linear combination of the system inputs x j, j = 1, …, n. The rule weights are obtained as ${w}_i=\prod_{j=1}^{n}{\Gamma}_{F_i^j}\left({x}_j\right)$, where ${\Gamma}_{F_i^j}$ is the membership function of rule i and input x j. Although membership functions may assume different shapes, the Gaussian function is a popular choice in the literature due to its symmetry and its dependence on mean and variance, which correspond respectively to the center and the width of the membership function.
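A minimal sketch of this weighted-average inference with Gaussian membership functions and first-order (linear) consequents; the two rules, their parameters, and the inputs are purely hypothetical:

```python
import numpy as np

def gauss(x, mean, width):
    """Gaussian membership function."""
    return np.exp(-0.5 * ((x - mean) / width) ** 2)

def sugeno_output(x, rules):
    """Sugeno inference: weighted average of linear rule consequents.
    x     -- input vector of length n
    rules -- dicts with 'means'/'widths' (membership params) and 'a'/'c'
    """
    num, den = 0.0, 0.0
    for rule in rules:
        w = np.prod([gauss(xj, m, s)                    # rule firing strength
                     for xj, m, s in zip(x, rule["means"], rule["widths"])])
        z_i = float(np.dot(rule["a"], x)) + rule["c"]   # linear consequent
        num += w * z_i
        den += w
    return num / den

# Hypothetical two-rule system with two inputs:
rules = [
    {"means": [0.8, 60], "widths": [0.1, 10], "a": [0.5, 0.01], "c": 0.0},
    {"means": [1.1, 90], "widths": [0.1, 10], "a": [-0.2, 0.02], "c": 1.0},
]
print(sugeno_output([0.95, 75], rules))
```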

Fuzzy logic has the singular characteristic of combining empirical knowledge (described as linguistic rules) with knowledge directly extracted from the data, enabling easier interpretation of the outcomes from a physiological perspective. This mathematical model may be a reliable method to evaluate the influence of the autonomic nervous system over cardiovascular control in healthy and diseased subjects.

The main advantage of fuzzy logic systems comes from their power to deal adequately with uncertainty. In particular, this approach tolerates imprecise data and focuses on the "plausibility" of occurrence rather than the traditional binary response "0" or "1." For example, while a given measurement of a certain biological variable such as stress may characterize one person as "content," the same measurement may reveal a status of "dissatisfaction" in another. Thus, biological variables that vary from person to person and are closely influenced by external and internal changes lend themselves to a fuzzy logic model of analysis, where methods of investigation based on zero and one, true and false, do not apply. Cardiovascular signals are characterized by great intra- and inter-individual variability, besides imprecise measurements due to the limited resolution of acquisition systems. Additionally, it is believed that traditional statistical methods may not capture all the information needed to describe disease in its complexity and dynamics. In this context, fuzzy logic may be a more reliable alternative to traditional methods.

Applications of Fuzzy Logics to the Analysis of Cardiovascular Variability

Fuzzy logic approaches have recently been used in the cardiovascular field in different contexts, including applications in signal processing and monitoring, classification, prediction, and control. One approach consists of extracting the relevant features from one or more cardiovascular signals, which are then integrated into a fuzzy logic scheme aiming at the identification of the presence, or the quantification, of a pathological state.

Fuzzy logic methods have been successfully integrated into control systems. For instance, during anesthesia, mean arterial pressure was controlled based on the error between desired and measured values, allowing control of the balance between unconsciousness and the side effects caused by the hypnotic drug. Also during anesthesia, hemodynamic changes were successfully modeled considering drug dose level alterations as inputs of the fuzzy system. In hemodialysis, fuzzy logic has also been shown to be capable of effectively controlling blood pressure trends, using the ultrafiltration rate as input. Such a system allowed an overall reduction of 40% in the most severe episodes in hypotension-prone subjects.

Abnormal cardiac rhythms have been identified using artificial neural networks and fuzzy inference based on nonlinear heart period (R-R) features, such as spectral entropy, Poincare SD1/SD2, and the Lyapunov exponent. Also based on R-R features, fuzzy logic has been used for ECG beat classification to detect arrhythmic and ischemic heartbeats. Fuzzy logic approaches have also shown efficiency in improving oscillometric cuff pressure measurements by properly detecting outliers and noise artifacts.

With the goal of evaluating autonomic nervous system function, fuzzy logic has been used to choose the optimum subset of time, frequency, and nonlinear variables related to sympathetic and parasympathetic activities on HRV. Fuzzy logic approach has been used in a classification scheme to jointly evaluate results of several autonomic tests, e.g., head-up tilt test and active postural change, using both time and spectral analysis of heart rate and of diastolic blood pressure series. Similar fuzzy logic schemes were used for the information fusion of relevant features extracted from multimodal cardiovascular signals, such as heart period R-R and systolic blood pressure, for the detection of life-threatening states in cardiac care units.

Recently, fuzzy logic methods have been employed to effectively describe blood pressure and heart period (R-R) coupling and, therefore, have the potential to improve time domain baroreflex sensitivity (BRS) estimation. The autoregressive linear analysis approach for BRS estimation has limitations when cardiovascular regulation is depressed. Liu et al. proposed a hybrid model, consisting of a parallel modular structure with an autoregressive and a fuzzy logic system, to study simultaneously the linear and nonlinear coupling mechanisms between heart rate and blood pressure (Liu et al. 2008). This approach illustrates the utility of combining more traditional methods with fuzzy logic, which could be of advantage in diseased conditions in which cardiovascular regulation is impaired.

Time domain BRS methods based on spontaneous data typically assume linearity between blood pressure and heart period (R-R) and provide a single slope estimate, regardless of the blood pressure value. In this context, fuzzy logic methods can contribute to establishing a BRS dependent on the blood pressure level, similarly to time domain pharmacological blood pressure methods. Recently, fuzzy logic has been used to analyze spontaneous R-R series as a function of blood pressure values, comparing performance in real and surrogate data.

Critical Assessment of the Method

The optimized definition and number of symbols have to be validated in larger clinical studies with more patients. It is also necessary to check how the symbol definition has to be adapted when applying symbolic dynamics to patients with atrial fibrillation.

The renormalized entropy (ReEn), as a measure of a relative degree of order, requires stationary periods in the time series; nonstationarities can theoretically lead to misinterpretation due to contradictory results.

Strain Imaging on Echocardiography

Purpose and Rationale

Strain and strain rate are novel imaging techniques that measure changes in length and/or thickness of myocardial fibers. These methods have been incorporated into routine clinical practice only recently.

Procedure

Strain is ideally suited to quantifying myocardial function regionally, but with the introduction of speckle tracking, a new parameter for global left ventricular (LV) function assessment, called "global strain," has been introduced (Iwano et al. 2011). In the longitudinal direction, global longitudinal strain reflects the deformation along the entire LV wall visible in an apical image. The measurements from all three apical views are combined to give an average global longitudinal strain (GLS) value (Haugaa et al. 2013).

Evaluation

Strain is defined as the fractional change in length of a myocardial segment relative to its baseline length, and it is expressed as a percentage. Strain rate is the temporal derivative of strain, and it provides information on the speed at which the deformation occurs. Strain is a vector, and the complete description of the complex deformation of a piece of myocardium requires three normal and six shear strain components. For practical reasons, the normal strains preferred for clinical use are oriented along the coordinate system of the LV; they describe radial thickening and thinning as well as circumferential and longitudinal shortening and lengthening. Lengthening or thickening of the myocardium is represented by positive strain values, whereas negative values represent shortening or thinning. The most commonly used parameter is longitudinal strain, whose magnitude can be expected to be around 20% in all regions of the LV.
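For illustration (hypothetical values), a myocardial segment measuring 50 mm at end-diastole that shortens to 40 mm at end-systole has a longitudinal strain of (40 − 50)/50 × 100 = −20%, consistent with the magnitude quoted above.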

It must be noted that myocardial deformation is load dependent (Klaeboe et al. 2017). Therefore, strain and strain rate measurements must be interpreted considering ventricular wall thickness and shape as well as pre- and after-load.

The lower limit of normality of GLS has been established at −18% (Kocabay et al. 2014).

Critical Assessment of the Method

Although speckle-tracking echocardiography (STE) (Fig. 8) has significantly contributed to improving the evaluation of LV function and has the potential to improve SCD risk stratification (Leren et al. 2017), certain limitations of the technique should be stated. First, strain has been considered a less load-dependent measure; however, variations in loading conditions can lead to different results. This is important in patients with acutely decompensated heart failure, in whom therapy and improvement in loading conditions might lead to different values. Second, as strain-derived parameters are calculated from 2D images, the presence of artifacts (shadowing, reverberations) can lead to inadequate tracking and inaccurate strain and mechanical dispersion (MD) values (Fig. 9); this is especially true when several segments are not correctly tracked. Third, the interchangeability of strain values among different hardware and software vendors is another important issue, as values have been shown to differ among them. The impact of this issue on mechanical dispersion has not been specifically addressed; however, as tracking algorithms differ among vendors, MD is very likely also affected. Lastly, the adequate measurement of strain requires training, and results from less experienced operators differ from those of more experienced ones.
Fig. 8

Global longitudinal strain and mechanical dispersion in a patient with hypertrophic cardiomyopathy (Reprinted from: Book “Sudden cardiac death: Predictors, Prevalence and Clinical Perspectives”, Chapter “The role of novel echocardiographic techniques for primary prevention of sudden cardiac death”, pp. 267–86, 2017, Editor Ivana I Vranic, with permission from Nova Science Publishers, Inc. New York)

Fig. 9

Electrical and mechanical dyssynchrony coupling demonstrated by UHFQRS and speckle tracking echocardiography (STE) in patient 2 suffering from LBBB. The figure compares the UHF electrical dyssynchrony and the mechanical dyssynchrony of the septum and LV lateral wall. Myocardial shortening is coded by the orange/red color and myocardial lengthening by the blue color. (a) UHFQRS, V1 (blue) and V6 (green) leads. (b) Normalized UHFQRS map. (c) Detail from STE map temporally synchronized with A and B. (d) V1-V6 ECG. UHFDYS and UHFDYS ALL electrical dyssynchrony are 61 ms and 74 ms, respectively, – black horizontal bars. (a) The time delay of mechanical motion between the onset of myocardial deformation of the middle septum and the middle lateral wall is 87 ms – orange horizontal bar. (c) The green horizontal bar defines delay 48 ms between the first electrical UHF activation in V2 lead and onset of mechanical myocardial deformation of the middle septum (Reprinted from Jurak P, et al. Ventricular dyssynchrony assessment using ultra-high frequency ECG technique. J Interv Card Electrophysiol. 2017; 49(3):245–254 under license to CC BY License)

Myocardial Mechanical Dispersion

Purpose and Rationale

The diagnosis of mechanical dyssynchrony (Fig. 10) induced by the presence of infarction scar and/or conduction abnormalities in patients with an ejection fraction (EF) of < 35% may be associated with a greater propensity for inducing serious ventricular arrhythmia (ventricular tachycardia (VT), ventricular fibrillation (VF)) and sudden cardiac death (Claus et al. 2015). The assessment of regional myocardial function using tissue Doppler echocardiography (TDE) allows for noninvasive analysis of the regional mechanical dysfunction (LV mechanical dispersion) (Abduch et al. 2014).
Fig. 10

Examples of different ventricular electrical activation patterns. (a) Averaged QRS complexes, V leads. (b) Averaged UHFQRS of leads V1 (blue) and V6 (green). (c) UHFQRS maps. From left: healthy heart QRSd 81 ms, patient 3 – RBBB, QRSd 139 ms, patient 4 – LBBB, QRSd 190 ms, and patient 5 with WPW syndrome with right lateral accessory pathway QRSd 105 ms (Reprinted from Jurak P, et al. Ventricular dyssynchrony assessment using ultra-high frequency ECG technique. J Interv Card Electrophysiol. 2017; 49(3):245–254 under license to CC BY License)

Procedure

The time to maximum myocardial shortening, including postsystolic shortening, if present, is measured from the ECG onset of the Q-wave (or R-wave) in each of the 16 segments of the left ventricle. The maximum myocardial shortening from a representative strain curve with a shortening duration of at least 50 ms is used in the time analyses; segments in which no shortening is present are excluded. To quantify LV mechanical dispersion, the SD of the 16 different time intervals to maximum myocardial shortening is used; this parameter is defined as mechanical dispersion. An alternative measure is the difference between the longest and shortest time interval from ECG onset of the Q/R-wave to maximum myocardial shortening in each individual; this parameter is defined as the delta contraction duration.
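Once the per-segment times have been measured, both parameters reduce to simple statistics; a sketch with hypothetical segment values (whether the SD is taken with or without Bessel's correction is an implementation choice):

```python
import numpy as np

def mechanical_dispersion(t_shortening_ms):
    """Mechanical dispersion (SD of segmental times to maximum shortening)
    and delta contraction duration (max - min), both in ms."""
    t = np.asarray(t_shortening_ms, dtype=float)
    return t.std(ddof=1), t.max() - t.min()

# Hypothetical times (ms) for 16 analyzable LV segments:
times = [380, 395, 410, 370, 400, 415, 390, 385,
         405, 420, 375, 398, 412, 388, 402, 393]
md, delta_cd = mechanical_dispersion(times)
print(round(md, 1), round(delta_cd, 1))
```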

Evaluation

Measurements of mechanical dispersion and global strain in post-myocardial infarction (MI) patients add important information about the risk of arrhythmia beyond the EF. Importantly, in patients with a preserved or slightly reduced EF, a mechanical dispersion of ≥70 ms identified post-MI patients with an increased risk of life-threatening arrhythmias. According to current guidelines for primary prevention, post-MI patients with an EF ≤ 35% should be considered for ICD therapy. The novel principles might be useful to identify the risk of arrhythmias in post-MI patients with relatively preserved EF who do not fulfill the current ICD indication (EF ≤ 35%).

Critical Assessment of the Method

Future trials should investigate whether mechanical dispersion and global strain can be used to select additional patients for ICD therapy among the majority of post-MI patients with a relatively preserved EF in whom current ICD indications fail.

Systolic Function

The systole extends from the end of the late diastolic filling (closure of the mitral valve) to the start of the isovolumetric relaxation phase (closure of the aortic valve); therefore, it includes the isovolumetric contraction phase (until opening of the aortic valve) and the ejection phase(s). The right ventricle contracts first, shortly followed by the left ventricle.

Performance and energy requirements of the heart muscle and heart pump depend on preload (ventricular filling), heart rate, afterload (the "load" against which the heart must eject blood, i.e., aortic input impedance as defined by total peripheral resistance, arterial conductivity and distensibility, and wave reflections), and inotropy (load- and heart rate-independent performance).

Systolic Time Intervals

Purpose and Rationale

Systolic time intervals (STI) are the time equivalents of the electromechanical systolic (forward) pump performance.

Procedure

Relevant segments can be derived from the simultaneous high-speed registration of the electrocardiogram (ECG), phonocardiogram (PCG), carotid pulse mechanocardiogram, impedance cardiogram (ZCG), or by echocardiography. Although there is some delay between central events and their peripheral reflection, this has relatively little impact on the accuracy of the estimation of the timing of central events. The preejection period (PEP) corresponds to the duration of the isovolumetric contraction phase, from the start of the ECG Q-wave up to the start of the ejection (opening of the aortic valve, between the first and second components of the first PCG heart sound); the left ventricular ejection time (LVET) extends from the start of the systolic ejection (end of PEP) up to the end of the ejection (closure of the aortic valve, between the first and second components of the second PCG heart sound, nadir of the carotid pulse wave, nadir of the dZ/dt curve by ZCG, etc.); the total electromechanical systole (QS2) then corresponds to the sum of PEP and LVET.

An increase in HR shortens the STIs, LVET and QS2 in particular, whereas the PEP is less HR-dependent. Accordingly, there have been numerous attempts to "correct" STI for HR (STIc).
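A sketch of the underlying interval arithmetic, assuming the fiducial points (Q-wave onset, aortic valve opening, and aortic valve closure) have already been identified on the simultaneous recordings (times hypothetical):

```python
def systolic_time_intervals(q_onset_ms, ej_onset_ms, ej_end_ms):
    """STI from fiducial times on a common time base (ms):
    PEP = Q onset -> aortic valve opening, LVET = opening -> closure,
    QS2 = PEP + LVET; PEP/LVET is the Weissler index."""
    pep = ej_onset_ms - q_onset_ms
    lvet = ej_end_ms - ej_onset_ms
    return {"PEP": pep, "LVET": lvet, "QS2": pep + lvet,
            "PEP/LVET": round(pep / lvet, 3)}

# Hypothetical fiducial times (ms from the ECG Q-wave onset):
print(systolic_time_intervals(0, 95, 380))
# {'PEP': 95, 'LVET': 285, 'QS2': 380, 'PEP/LVET': 0.333}
```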

Evaluation

The PEP reflects the isovolumetric contraction time (ICT) ; the PEP is shortened by an increase in HR, an increase in preload (ventricular filling), a decrease in afterload, and by a positive inotropic stimulation. Accordingly, the PEP is particularly sensitive to medications that induce inotropic stimulation and vasodilatation (“inodilators”), provided there is no restriction of venous return. Inotropic stimulation increases the ventricular ejection time (VET) only slightly; accordingly, the shortening of the QS2c and the reduction of the PEP/VET-ratio (“Weissler-Index ”), which are often propagated as “contractility indices,” are predominantly defined by the shortening of the PEP. A reduction in afterload shortens the PEP, prolongs the HR-corrected VET with a reduction of the PEP/VET-ratio, whereas the QS2c is hardly changed. Vasodilatation-induced changes in STI are hardly changed by concomitant beta-adrenoceptor blockade and atropine; therefore, PEP and VETc can be assumed to be (also) highly afterload dependent, whereas the QS2c is not.

Normally, the electrocardiographic QT-interval is shorter than the QS2. Adrenergic stimulation and other forms of inotropic stimulation prolong the QT-interval relative to the shortening of the QS2. Accordingly, the shortening of the QS2/QT-ratio has been propagated as one of the many "contractility indices." There have been some early applications in clinical cardiology, but no application in cardiovascular clinical pharmacology.

Critical Assessment of the Method

HR-corrections of STI are based on historic linear regressions in quite small samples. It is doubtful that these equations are stable and universal; it can hardly be assumed, nor verified, that they can be extrapolated to other subjects and different experimental conditions. Furthermore, these HR-corrected STIs are mechanically meaningless, since HR is an intrinsic determinant of pump action, performance, and efficiency. A shortening of the PEP or QS2 should only be accepted as an index of enhanced "contractility" if a simultaneous change in vascular load can be excluded.

The value of STI in cardiovascular clinical pharmacology relates particularly to their excellent reproducibility and high pharmacosensitivity: STI have been used in clinical cardiology to monitor progressing pump dysfunction including iatrogenic cardiomyopathies; in cardiovascular clinical pharmacology, STI have been used to characterize cardiotonics, negative inotropics, reduction in preload, and stress interventions.

STI were very important in the late 1980s and throughout the 1990s for the noninvasive characterization of drug effects on systolic performance. Such methods now appear antiquated, also because there are no modern state-of-the-art devices to measure and analyze STI.

Myocardial Performance Index (Tei)

Purpose and Rationale

The echocardiographic myocardial performance or “Tei” index (MPI) is the modern analogue of the STI.

Procedure

MPI is based on the estimates of the isovolumetric contraction and isovolumetric relaxation time (ICT and IRT) and ejection time (ET) obtained by pulsed-wave Doppler (PWD) or tissue Doppler echocardiography of the mitral annulus (TDE).

Evaluation

Doppler echocardiographic ICT, IRT, ET, and MPI are important tools in clinical cardiology for the noninvasive follow-up of patients with myocardial infarction, major cardiac surgery, and after heart transplantation.

Critical Assessment of the Method

These methods have the important add-on advantage of assessing both systolic and diastolic function and of being able to distinguish between left and right ventricular function.

The MPI (= (ICT + IRT)/ET) estimates have been shown to have high diagnostic accuracy for heart failure, but with distinct and method-specific diagnostic cut-offs. The methods rely on a very high level of analyst expertise: they are observer dependent and not economical; the latter aspects might explain why such methods find little application in the experimental evaluation of cardiac drug effects, in spite of the wealth of information that could be gained.
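For illustration, with hypothetical values of ICT = 60 ms, IRT = 60 ms, and ET = 300 ms, MPI = (60 + 60)/300 = 0.40.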

Noninvasive Estimates of Stroke Volume and Cardiac Output

Purpose and Rationale

The stroke volume (SV) and cardiac output (CO = HR × SV) are the volume equivalents of the systolic cardiac pump function.
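For example, an SV of 70 mL at a HR of 70 beats/min yields CO = 70 × 70 = 4900 mL/min, i.e., approximately 4.9 L/min.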

Procedure

Several noninvasive methods have been investigated and propagated for the experimental investigation of SV and CO:
  • Carbon dioxide rebreathing (indirect Fick method).

  • Transthoracic impedance cardiography (ZCG).

  • Diastolic pulse contour analysis (“PCA”), i.e., analysis of noninvasive radial artery pulse wave forms by means of a third-order, 4-element modified Windkessel model of the circulation quantifying the Windkessel model criteria: systemic vascular resistance (SVR), large artery “capacitive” compliance (C1), small artery “oscillatory”/“reflective” compliance or “reflectance” (C2), and inductance (L – inertance of blood). This method uses an estimate of SV from the ejection time (ET), heart rate (HR), body surface area (BSA), and age, and all PCA-criteria (SVR, C1, C2, and L) rely on this estimate (and the constraints of its algorithmic simplicity).

  • Systolic pulse wave analysis (“PWA”): reconstruction of the pulse wave form of the ascending aorta from distant (carotid/brachial/radial) pulse wave contours by means of a validated general transfer function (GTF) deriving the central augmentation index (AIx), the time to wave reflection (Tr as a measure of central aortic compliance), and algorithmic estimates of central hemodynamics .

  • Echocardiographic techniques: M-mode echocardiography, two-dimensional echocardiography, three-dimensional echocardiography.

  • Transthoracic pulsed-wave Doppler echography of the ascending aorta, transesophageal Doppler echography, etc.

Evaluation

The older devices required tedious signal analysis and complex nonautomated signal and data processing, which relied on public algorithms; newer methods are mostly highly automated “black-boxes” with proprietary algorithms that often are device-specific nonpublic “adaptations” of the original algorithms.

Critical Assessment of the Method

Invasive measurements of SV and CO are method-specific estimates relying on a “black-box” analysis of the dilution of a controlled injection of dye or a cooled volume of saline (“thermodilution”).

In intensive care medicine, newer methods have been introduced that are called “minimally” invasive: they provide for continuous hemodynamic monitoring without repeated central catheter dilution; they monitor systolic function based on wave/contour analysis of (invasive) arterial peripheral pulses with or without calibration with pulmonary artery thermodilution. The surge of “minimally” invasive methods also illustrates (1) the need for reliable methods for continuous monitoring and (2) the lack of satisfaction with and acceptance of truly noninvasive methods to meet this requirement.

The related constraints are illustrated in the following by the past and present positioning of transthoracic impedance cardiography (ZCG) in the clinical pharmacological characterization of investigational changes in cardiovascular function.

ZCG is based on the observations in the 1930s and 1940s that typical changes occur in transthoracic impedance (Z) to a high-frequency low-voltage alternating current (AC) applied through the thoracic cage during the cardiac cycle; these changes were originally primarily seen as the consequence of volume shifts with an increase in volume and decrease in impedance during systole and a decrease in volume and increase in impedance during diastole; now it is understood that the contour of the time course of the negative velocity of the transthoracic impedance changes (dZ/dt) is analogous with the blood flow velocity in the central large vessels and the differential of the carotid pulse curve, while also including venous and right ventricular components. In clinical cardiology, there was little interest in such rheological plethysmographic concepts because of the various invasive methods that became available. The need for noninvasive monitoring methods in the aerospace industry led to the first impedance cardiographic applications.

The registration of the ZCG signals is not observer-dependent, but the analysis of the signals (delineation of the ejection time and measurement of dZ/dtmax) is. Originally, ZCG analyses also included an assessment of STI and therefore required the simultaneous registration of at least three signals (ECG, ZCG, PCG); the ZCG signal has fiducial points to delineate the start and the end of the LVET, albeit that these are more easily and accurately identified if the PCG and carotid pulse curve (4-channel method) are recorded as well. In the early 1980s, this approach, which required tedious 3- or 4-channel signal analysis, was frequently used in cardiovascular clinical pharmacology, since it permitted almost continuous monitoring. In the mid-1980s, an alternative method became popular; it was particularly attractive since it used less inconvenient spot electrodes rather than adhesive tape electrodes, was fully automated, and relied only on the ECG and ZCG. However, this method used its own physiologic algorithm and equations to estimate SV, the results of which disagree grossly with those obtained with the conventional equation by Kubicek applied to the same signals; furthermore, the lack of supporting information (PCG and/or carotid pulse curve) makes the method less accurate in estimating LVET and, accordingly, SV.

Conventional ZCG is quite reliable and highly sensitive to drug effects, inodilatory effects in particular; its results may agree with those of other invasive and noninvasive methods but often appear to overestimate SV and the changes thereof. The alternative methods have a similarly high reproducibility and are sensitive but may be less accurate in estimating LVET and, accordingly, SV. However, all of these approaches have limited validity, since they yield method- and device-specific estimates of SV that may well be affected by substantial method/subject/effect interaction.

The fate of ZCG is exemplary for most noninvasive cardiovascular methods: they yield method- and device-specific estimates that may be very reproducible and sensitive, for drug effects in particular, but they have limited validity since they do not generally agree well with the established gold standards. This per se does not preclude their usefulness, provided this limitation is understood and accounted for, also since the gold standards may prove impractical or impossible to use in similar collectives. However, in order to be useful, these methods need to be accepted as such. In drug development, this means that data generated with such methods need to be useful and acceptable for regulatory purposes. Yet, with the exception of ICH E14, there is no regulatory need or benefit in pursuing cardiovascular endpoints in early development studies; in the framework of "lean" drug development, this means that there is little demand for such studies. Accordingly, it has become difficult to improve their hardware and software to meet present-day quality standards and to keep the required operational expertise, and even more difficult to satisfy regulatory requirements. In consequence, several of these methods, although evidenced to be highly informative, are no longer available. Newer methods, especially those related to pulse wave velocity, pulse wave contour analysis, or Doppler echocardiography, may meet a similar fate unless they find high acceptance in clinical cardiology.

Diastolic Performance

Purpose and Rationale

The diastole extends from the end of the systolic ejection (closure of the aortic valve) to the start of the next isovolumetric systolic contraction phase (closure of the mitral valve). It therefore includes the isovolumetric relaxation phase (until opening of the mitral valve); the rapid filling phase, which begins when LV pressure falls below left atrial pressure with the opening of the mitral valve and involves the interaction between LV suction (= active relaxation) and the viscoelastic properties of the myocardium (= compliance); diastasis, i.e., when left atrial and left ventricular pressures are almost equal and left ventricular filling is essentially maintained by the flow coming from the pulmonary veins using the left atrium as a passive conduit; and atrial systole, which corresponds to left atrial contraction and ends with the closure of the mitral valve. The diastole is far more dependent on the HR than the systole, and the diastolic filling lasts longer when the HR is slower.

According to the European Cardiology Society, establishment of the diagnosis of diastolic heart failure requires: (1) the presence of a clinical syndrome of heart failure (dyspnea or fatigue at rest or with exertion, fluid overload, pulmonary vascular congestion on examination, or X-ray); (2) demonstration of an ejection fraction ≥50%; and (3) demonstration of diastolic dysfunction (The European Study Group on Diastolic Heart Failure 1998). Others prefer the term "heart failure with a normal ejection fraction" (HFNEF), characterized by elevated ventricular filling pressures and abnormal filling patterns, to allow for a better distinction between active and passive components, emphasizing that HFNEF may occur with or without impairment of the isovolumetric relaxation (active dysfunction).

Removal of calcium from the myofilaments and uncoupling of actin–myosin cross-bridge bonds govern the rate of myocardial relaxation and thus the rate of ventricular pressure decline. This active component of diastole is typically characterized by the time constant of relaxation (τ), determined by fitting a mono-exponential curve to the isovolumetric section of the ventricular pressure curve. Subsequently, the mechanical properties of the ventricle are determined by passive factors, such as the degree of myocellular hypertrophy (myocardial mass), cytoskeletal and extracellular matrix properties, and chamber geometry; this is reflected by the end-diastolic pressure–volume relationship (EDPVR) and the features derived from it: ventricular chamber stiffness (i.e., the slope of the EDPVR at a given volume [dP/dV]) and compliance (the mathematical reciprocal of stiffness). Both are load dependent and are not measures of load-independent diastolic function (lusitropy). In consequence, diastolic dysfunction may involve either or both active and passive ventricular properties. With an increased τ (which is typically observed with all forms of hypertrophy, and with aging), a higher mean left atrial pressure may be required to achieve normal filling volumes, especially at high heart rates. However, an increased τ is not ubiquitously associated with elevated mean left atrial pressure and heart failure. Instead, shifts of the EDPVR have been suggested to be a predominant factor in the hemodynamic and symptomatic abnormalities of HFNEF: a leftward/upward-shifted EDPVR is indicative of decreased chamber capacitance, whereas a rightward/downward-shifted EDPVR (increased ventricular capacitance) occurs in all forms of dilated cardiomyopathy (remodeling). Accordingly, there are various conditions with distinctly different properties of the passive and/or active diastolic components that may result in HFNEF (The European Study Group on Diastolic Heart Failure 1998).
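As a sketch of the mono-exponential fit mentioned above (assuming the zero-asymptote model P(t) = P0·e^(−t/τ); variants with a nonzero pressure asymptote exist):

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_tau(t_ms, p_mmhg):
    """Fit P(t) = P0 * exp(-t / tau) to the isovolumetric relaxation
    segment of an LV pressure tracing; returns tau (ms)."""
    model = lambda t, p0, tau: p0 * np.exp(-t / tau)
    (p0, tau), _ = curve_fit(model, t_ms, p_mmhg, p0=(p_mmhg[0], 40.0))
    return tau

# Hypothetical synthetic pressure decay with tau = 45 ms:
t = np.linspace(0, 80, 41)
p = 80.0 * np.exp(-t / 45.0)
print(round(fit_tau(t, p), 1))  # ~45.0
```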

Procedure

An in-depth analysis of diastolic function requires invasive investigations to assess the pressure–volume relation over the entire cardiac cycle, which permits derivation of τ, end-diastolic stiffness, etc.

Noninvasively, Doppler ultrasound recordings of transmitral and pulmonary venous flow velocities and time intervals are useful alternatives, and Doppler echocardiography has become the primary tool for identifying and grading the severity of diastolic dysfunction in patients demonstrating elevated ventricular filling pressures and abnormal filling patterns.

This involves the determination of the early diastolic velocity (E) , atrial velocity (A) , deceleration time of E velocity (DT) , and the isovolumetric relaxation time (IVRT) from the transmitral Doppler signals. Complementary evaluation of pulmonary venous flow might be of interest; further methods rely on tissue Doppler technology and color M-mode derived flow propagation rate. These investigations are carried out at rest with controlled maneuvers (Valsalva, leg lifting).

Evaluation

In contrast to inotropic changes, lusitropic changes of diastolic function are not regularly investigated and characterized except in patients with post-myocardial infarction dysfunction and other forms of heart failure. Furthermore, there are no drugs that are targeted specifically on improving diastolic function, albeit that ancillary positive lusitropic properties have been demonstrated for some medications.

Investigation of diastolic properties might be of interest in differentiating responsiveness to therapy and lack thereof in the evaluation of treatments of heart failure, but is only rarely used in this context.

In hypertension, diastolic function is also of interest since diastolic dysfunction is inherent to concentric left ventricular remodeling that is commonly seen in hypertensives.

Critical Assessment of the Method

Load-independent diastolic function (lusitropy) suffers from the same conceptual validity constraints as inotropy (load and heart rate independent systolic function). The lack of distinction between true lusitropy and passive components of diastolic performance is obvious. However, this does not preclude that the procedures to characterize ventricular relaxation and filling (even if composite and ambiguous criteria) provide a better understanding of the overall cardiac function.

Echocardiographic Evaluation of Coronary Flow Reserve

Purpose and Rationale

Coronary flow reserve (CFR) can be assessed by echocardiography through direct measurement of coronary blood flow velocity at rest and during an adenosine stress test at the window of the distal left anterior descending artery, using the transthoracic Doppler signal. The ratio of peak stress velocity to baseline velocity correlates with invasively measured CFR.
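For illustration (hypothetical values), a baseline diastolic flow velocity of 25 cm/s that rises to 65 cm/s at peak adenosine stress corresponds to a CFR of 65/25 = 2.6.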

Procedure

CFR assessment is routinely performed in larger cardiac surgery centers and can be carried out with several techniques (PET, MRI, and Doppler echocardiography). Peak stress blood flow measured by ergometry alone is a less powerful predictor of outcomes than CFR, possibly because CFR, taken as the ratio of peak stress and rest blood flows, better isolates vasodilator capacity and reduces systematic errors in measurement.

Evaluation

In the evaluation of the need for surgery, CFR is measured in culprit arteries: if CFR < 2.0, bypass surgery is indicated, whereas CFR > 2.0 favors medical treatment. Noninvasive assessment of coronary vasodilator function provides incremental risk stratification beyond routine measures of clinical risk, including estimates of LV systolic function and the extent and severity of myocardial ischemia and scar, and results in a meaningful incremental risk reclassification of patients with known or suspected CAD.

Critical Assessment of the Method

A CFR ≤ 2.7 by transthoracic echocardiography has demonstrated good accuracy (87% specificity and 82% sensitivity) for detecting cardiac allograft vasculopathy (CAV). In addition, echocardiographic CFR has been reported to have prognostic value for CAV-related major cardiac events (a relative risk of 3.3 for death, myocardial infarction, congestive heart failure, or need for percutaneous intervention at a mean of 19 months). A CFR < 2.9 can detect a maximal intimal thickness of ≥0.5 mm by intravascular ultrasound (IVUS) with 80% sensitivity, 100% specificity, and 89% negative predictive value.

References and Further Reading

  1. Abdelghani SA, Rosenthal TM, Morin DP (2016) Surface electrocardiogram predictors of sudden cardiac arrest. Ochsner J 16(3):280–289
  2. Abduch MC, Alencar AM, Mathias W Jr et al (2014) Cardiac mechanics evaluated by speckle tracking echocardiography. Arq Bras Cardiol 102(4):403–412
  3. Bauer A, Malik M, Schmidt G et al (2008a) Heart rate turbulence: standards of measurement, physiological interpretation, and clinical use: International Society for Holter and Noninvasive Electrophysiology consensus. J Am Coll Cardiol 52(17):1353–1365
  4. Bauer A, Malik M, Schmidt G et al (2008b) Heart rate turbulence: standards of measurement, physiological interpretation, and clinical use: International Society for Holter and Noninvasive Electrophysiology consensus. J Am Coll Cardiol 52(17):1353–1365
  5. Butlin M, Qasem A (2017) Large artery stiffness assessment using SphygmoCor technology. Pulse 4(4):180–192
  6. Chia YC, Buranakitjaroen P, Chen CH et al (2017) Current status of home blood pressure monitoring in Asia: statement from the HOPE Asia network. J Clin Hypertens. https://doi.org/10.1111/jch.13058 [Epub ahead of print]
  7. Chiang CE, Wang TD, Lin TH et al (2017) The 2017 focused update of the guidelines of the Taiwan Society of Cardiology (TSOC) and the Taiwan Hypertension Society (THS) for the management of hypertension. Acta Cardiol Sin 33(3):213–225
  8. Claus P, Omar AMS, Pedrizzetti G et al (2015) Tissue tracking technology for assessing cardiac mechanics: principles, normal values, and clinical applications. JACC Cardiovasc Imaging 8(12):1444–1460
  9. Coris EE, Moran BK, De Cuba R et al (2016) Left ventricular non-compaction in athletes: to play or not to play. Sports Med 46(9):1249–1259
  10. Fung E, Järvelin MR, Doshi RN (2015) Electrocardiographic patch devices and contemporary wireless cardiac monitoring. Front Physiol 6:149
  11. Garcia EV, Pastore CA, Samesima N et al (2011) T-wave alternans: clinical performance, limitations and analysis methodologies. Arq Bras Cardiol 96(3):e53–e61
  12. Gimeno-Blanes FJ, Blanco-Velasco M, Barquero-Pérez Ó et al (2016) Sudden cardiac risk stratification with electrocardiographic indices – a review on computational processing, technology transfer, and scientific evidence. Front Physiol 7:82
  13. Gulizia MM, Casolo G, Zuin G et al (2016) ANMCO/AIIC/SIT consensus document: definition, precision and appropriateness of the electrocardiographic signal of electrocardiographic recorders, ergometry systems, Holter systems, telemetry and bedside monitors. G Ital Cardiol 17(6):393–415
  14. Haugaa KH, Smedsrud MK, Steen T et al (2010) Mechanical dispersion assessed by myocardial strain in patients after myocardial infarction for risk prediction of ventricular arrhythmia. JACC Cardiovasc Imaging 3(3):247–256
  15. Haugaa KH, Grenne BL, Eek CH et al (2013) Strain echocardiography improves risk prediction of ventricular arrhythmias after myocardial infarction. JACC Cardiovasc Imaging 6(8):841–850
  16. Huikuri HV, Stein PK (2013) Heart rate variability in risk stratification of cardiac patients. Prog Cardiovasc Dis 56(2):153–159
  17. Huikuri HV, Perkiömäki JS, Maestri R et al (2009) Clinical impact of evaluation of cardiovascular control by novel methods of heart rate dynamics. Philos Trans A Math Phys Eng Sci 367(1892):1223–1238
  18. Iwano H, Yamada S, Watanabe M et al (2011) Novel strain rate index of contractility loss caused by mechanical dyssynchrony. A predictor of response to cardiac resynchronization therapy. Circ J 75(9):2167–2175
  19. James PA, Oparil S, Carter BL et al (2014) 2014 evidence-based guideline for the management of high blood pressure in adults: report from the panel members appointed to the Eighth Joint National Committee (JNC 8). JAMA 311(5):507–520
  20. Klaeboe LG, Haland TF, Leren IS et al (2017) Prognostic value of left ventricular deformation parameters in patients with severe aortic stenosis: a pilot study of the usefulness of strain echocardiography. J Am Soc Echocardiogr 30(8):727–735
  21. Kocabay G, Muraru D, Peluso D et al (2014) Normal left ventricular mechanics by two-dimensional speckle-tracking echocardiography. Reference values in healthy adults. Rev Esp Cardiol (Engl Ed) 67(8):651–658
  22. Lee PY, Liew SM, Abdullah A et al (2015) Healthcare professionals' and policy makers' views on implementing a clinical practice guideline of hypertension management: a qualitative study. PLoS One 10(5):e0126191. https://doi.org/10.1371/journal.pone.0126191
  23. Leren IS, Saberniak J, Haland TF et al (2017) Combination of ECG and echocardiography for identification of arrhythmic events in early ARVC. JACC Cardiovasc Imaging 10(5):503–513
  24. Liu J, McKenna TM, Gribok A et al (2008) A fuzzy logic algorithm to assign confidence levels to heart and respiratory rate time series. Physiol Meas 29(1):81–94
  25. Luebbert J, Auberson D, Marchlinski F (2016) Premature ventricular complexes in apparently normal hearts. Card Electrophysiol Clin 8(3):503–514
  26. Mc Kinstry B, Hanley J, Lewis S (2015) Telemonitoring in the management of high blood pressure. Curr Pharm Des 21(6):823–827
  27. O'Brien E, Sheridan J, O'Malley K (1988) Dippers and non-dippers. Lancet 2(8607):397
  28. O'Brien E, Petrie J, Littler W et al (1993) An outline of the revised British Hypertension Society protocol for the evaluation of blood pressure measuring devices. J Hypertens 11(6):677–679
  29. O'Brien E, Coats A, Owens P et al (2000) Use and interpretation of ambulatory blood pressure monitoring: recommendations of the British Hypertension Society. BMJ 320(7242):1128–1134
  30. Olsen FJ, Biering-Sørensen T, Krieger DW (2015) An update on insertable cardiac monitors: examining the latest clinical evidence and technology for arrhythmia management. Future Cardiol 11(3):333–346
  31. Pincus SM (1991) Approximate entropy as a measure of system complexity. Proc Natl Acad Sci U S A 88(6):2297–2301
  32. Shibao C, Lipsitz LA, Biaggioni I (2013) Evaluation and treatment of orthostatic hypotension. J Am Soc Hypertens 7(4):317–324
  33. Shrout T, Rudy DW, Piascik MT (2017) Hypertension update, JNC8 and beyond. Curr Opin Pharmacol 33:41–46
  34. Sokolow M, Werdegar D, Kain HK et al (1966) Relationship between level of blood pressure measured casually and by portable recorders and severity of complications in essential hypertension. Circulation 34(2):279–298
  35. Pickering TG, Shimbo D, Haas D (2006) Ambulatory blood-pressure monitoring. N Engl J Med 354:2368–2374
  36. Verdecchia P, Angeli F, Gattobigio R (2004) Clinical usefulness of ambulatory blood pressure monitoring. J Am Soc Nephrol 15(Suppl 1):S30–S33
  37. Verrier RL, Ikeda T (2013) Ambulatory ECG-based T-wave alternans monitoring for risk assessment and guiding medical therapy: mechanisms and clinical applications. Prog Cardiovasc Dis 56(2):172–185
  38. Voss A, Kurths J, Kleiner HJ et al (1996) The application of methods of non-linear dynamics for the improved and predictive recognition of patients threatened by sudden cardiac death. Cardiovasc Res 31(3):419–433
  39. Watanabe MA (2003) Heart rate turbulence: a review. Indian Pacing Electrophysiol J 3(1):10–22
  40. Wood PW, Boulanger P, Padwal RS (2017) Home blood pressure telemonitoring: rationale for use, required elements, and barriers to implementation in Canada. Can J Cardiol 33(5):619–625
  41. Zhou JC, Zhang N, Zhang ZH et al (2017) Intensive blood pressure control in patients with acute type B aortic dissection (RAID): study protocol for randomized controlled trial. J Thorac Dis 9(5):1369–1374

Copyright information

© Springer International Publishing AG 2018

Authors and Affiliations

  1. Clinical Center of Serbia, Faculty of Medicine, University of Belgrade, Belgrade, Serbia
