An integrated fault detection and exclusion scheme to support aviation navigation

  • Yawei Zhai
  • Xingqun Zhan
  • Jin Chang
Original Paper


This paper proposes a novel integrity monitoring scheme against global navigation satellite system (GNSS) faults for civil aviation navigation. The main contributions are (a) developing an efficient user algorithm that integrates the fault detection and exclusion (FDE) functions, and (b) deriving analytical methods to quantify the corresponding integrity risk. The intended application of the new scheme is advanced receiver autonomous integrity monitoring (ARAIM), which has been proposed by the United States (U.S.) and the European Union (E.U.) and will serve as a primary means of navigation for the next generation of aviation. In the new approach, the exclusion decision-making process is unified into the first-layer detection step, thereby dramatically improving efficiency. The principle of this method is to use the projections of the multi-dimensional parity vector in parity space to extract fault information. In this work, we derive the projection matrix for single satellite failure modes, establish the mechanism for determining the exclusion subset based on the projection magnitudes, and rigorously account for the false exclusion probabilities in the integrity risk quantification. The feasibility of the algorithm is verified and validated using Monte-Carlo simulations, and its performance is analyzed by evaluating the integrity risk. It is shown that the new FDE scheme can efficiently and effectively exclude faulty satellites as desired, while achieving promising navigation performance.


Keywords: Fault detection and exclusion · Integrity monitoring · Global navigation satellite systems · Continuity · Parity space

1 Introduction

Safety-critical aviation imposes stringent requirements on navigation systems. To quantitatively analyze their performance, the International Civil Aviation Organization (ICAO) has defined specific metrics for different navigation methods [1]. Among these metrics, integrity directly relates to operational safety: it measures the trust that can be placed in the correctness of the information supplied by the navigation system. Loss of integrity (LOI) in aviation navigation can have catastrophic consequences, so the integrity requirement is of the greatest significance during every phase of flight. In addition to integrity, navigation continuity is another crucial metric: it measures the capability of the system to perform its function without unscheduled interruptions during the intended operation. Where alternative navigation tools are not available, loss of continuity (LOC) can leave the aircraft without any means of navigation, which is another severe threat to safety.

After decades of worldwide development, global navigation satellite systems (GNSS) have become the first choice for many navigation problems today. This is especially the case in the aviation community, because legacy air navigation capability is limiting air traffic growth [2]. Given the pressing demand for new technology in aviation, and given its historically consistent and reliable performance, GNSS is expected to significantly improve aviation navigation performance. However, GNSS measurements are vulnerable to faults, including satellite and constellation failures [3], which can pose major integrity threats to users. GNSS service can also be interrupted by many sources, including false and/or true fault detection (FD), satellite outages, etc. [4, 5, 6]. Such interruptions can significantly impact navigation continuity.

To resolve these issues, substantial research effort has been put into developing GNSS augmentation techniques, and the outcomes have been serving aviation [7, 8, 9]. In a typical augmentation system, two fundamental capabilities must be provided: a real-time FD test and integrity risk (IR) evaluation. Among these systems, receiver autonomous integrity monitoring (RAIM) became operational in the mid-1990s as a backup navigation tool to support aircraft en route flight using GPS only [10, 11]. The principle of RAIM is to exploit redundant measurements to achieve self-contained FD at the user receiver [12]. However, due to the limited satellite redundancy of a single constellation, RAIM provides only limited availability and can only support operations with less stringent navigation requirements.

Future multi-constellation GNSS is foreseen to provide dramatically increased measurement redundancy. Four constellations, including GPS (U.S.), GLONASS (Russia), Galileo (E.U.) and BDS (China), are expected to finish their modernizations and/or full deployments in the near term [13], providing many more satellites in view than are available today using GPS alone. In addition, nominal measurement errors will be significantly reduced using dual-frequency signals, which remove the largest error source: ionospheric delay. These revolutionary developments in GNSS, together with important advancements in the RAIM concept, open the possibility of independently supporting aircraft navigation using GNSS, from en route flight through final approach and landing, with minimal investment in ground infrastructure. Therefore, considerable effort has been expended, especially in the U.S. and the E.U., to develop ARAIM fault detection and exclusion (FDE) methods that ensure high navigation integrity and continuity [14, 15, 16].

With multiple GNSS constellations available for future ARAIM, the impact of having a large number of available measurements on navigation performance is not entirely obvious. From the integrity perspective, we can naturally expect that increased measurement redundancy will enhance FD capability, and thereby reduce the conditional integrity risk under any given fault hypothesis. But adding satellites also adds more potential fault modes into the navigation function; so, there is a chance that the accumulation of monitored fault modes increases the total (i.e., unconditioned) integrity risk. This is especially concerning given that new emerging constellations may not provide the same (low) levels of nominal ranging error and prior fault probabilities as GPS, especially in the early phases of their deployments. From the continuity perspective, because any reasonable FD algorithm will detect most faults, we can, therefore, also expect the number of detected faults to be larger in multi-constellation systems. This is not a problem for navigation integrity, because the faults are detected, but unless the faults are subsequently identified and excluded, their detection will cause an interruption in the continuity of the navigation operation. Thus, the accumulated likelihood of satellite and constellation faults with multiple constellations could dramatically increase ARAIM continuity risk.

The current ARAIM research activities are led by a joint Working Group (WG) of the U.S. and E.U., i.e., WG-C, which has been focusing on the dual-constellation scenario using GPS and Galileo [16]. Because ARAIM will operate as a primary means of navigation, it must provide a higher continuity performance level than traditional RAIM. To reduce the continuity risk caused by FD events, a fault exclusion (FE) function needs to be implemented for ARAIM [17, 18, 19]. Therefore, designing a feasible FE scheme to autonomously identify and exclude the faulty space vehicle (SV) is a key research aspect of ARAIM. However, the currently proposed FE methods are all based on a tradeoff between continuity and integrity, and can only improve continuity by a limited amount while degrading the integrity monitoring capability [17]. In addition, these existing FE algorithms require an exhaustive search process followed by a second-layer detection test, which results in a significantly high computational load. This is especially the case when more than two constellations are employed, because the number of monitored SV subsets can increase exponentially [20].

In response, this paper proposes an efficient FDE scheme that integrates two separate functions. Even though the new scheme is established here based on ARAIM, it can also be extended to other applications such as ground-based augmentation system (GBAS), satellite-based augmentation system (SBAS) or multi-sensor integrated navigation systems. The principle of this approach is utilizing the relative magnitudes of the parity vector projections to extract the fault information, thereby determining the final exclusion option. Because the projection matrices of each fault mode can be captured in the FD step, the searching step for exclusion candidates and the second layer detection tests are waived. Moreover, using the maximum projection, the exclusion option can always be made after an alert is triggered, so continuity is fully preserved. This is different from implementing a separate FE function, whose resulting continuity risk depends on the threshold setting [17]. The most challenging part of this work is quantifying the IR associated with the proposed scheme. At this early stage, we will focus on single SV fault mode only, and will rigorously account for the false exclusion probabilities in the quantification. The feasibility of the algorithm is verified and validated using Monte-Carlo simulations, and the performance is analyzed by evaluating the IR.

The paper is organized as follows. Section 2 provides the fundamental knowledge of ARAIM FD, and introduces the parity space concept. Then, the new FDE scheme is developed in Sect. 3, where the FDE zones are visually presented in parity space. In addition, a comparison between the new approach and the state-of-the-art FE algorithms is made, and the limitations of the current algorithms are addressed. Section 4 develops the IR evaluation methodology associated with the new scheme, in which the derivations are specified step by step. Later in Sect. 5, multiple analyses on FDE capabilities are carried out to demonstrate the performance of the proposed method. Finally, Sect. 6 concludes this paper.

2 Fundamentals of ARAIM FD

The current ARAIM architecture was proposed by WG-C, and it has evolved over time. In comparison with RAIM, the most innovative designs of ARAIM are (a) employing the integrity support message (ISM) to provide assertions on constellation performance [21], and (b) creating a new user algorithm to accommodate the dramatically increased measurement redundancy of multiple GNSS constellations. As the most important outcome of WG-C, the ARAIM baseline multiple hypothesis solution separation (MHSS) user algorithm has been well defined and widely recognized [15]. This section takes advantage of much of the relevant prior work and provides detailed and comprehensive derivations of the MHSS, from the fundamental GNSS measurement equation to the final upper bounds on IR and false alert (FA) probability. Moreover, the definition of parity space is described, the relationship between the parity vector and the solution separation (SS) test statistics is established, and a simple measurement model is employed to visualize the FD process in parity space.

2.1 Definition of SS test statistics

This paper focuses on the ‘snapshot’ ARAIM, which uses Carrier-Smoothed-Code (CSC) measurements to estimate the user position and clock bias. Let \( n \) and \( m \), respectively, be the numbers of GNSS measurements and states; the measurement equation can be linearized and expressed as [12]:
$$ {\mathbf{z}} = {\mathbf{Hx}} + {\mathbf{v}} + {\mathbf{f}}, $$
where \( {\mathbf{z}} \) is the \( n \times 1 \) measurement vector, \( {\mathbf{H}} \) is the \( n \times m \) observation matrix that is composed of line-of-sight vectors and ones, and \( {\mathbf{x}} \) is the \( m \times 1 \) state vector. \( {\mathbf{v}} \) is the \( n \times 1 \) error vector which can be bounded using a normal distribution \( {\mathbf{v}} \sim N({\mathbf{b}},{\mathbf{V}}) \). \( {\mathbf{f}} \) is the \( n \times 1 \) fault vector, where the elements are zeros if their corresponding measurements are fault-free (FF).
Using a least-squares (LS) estimator, the state of interest in Eq. (1) can be estimated and extracted as:
$$ \hat{x}_{0} = {\varvec{\upalpha}}_{r} {\mathbf{S}}_{0} {\mathbf{z}},\quad {\text{where}}\quad {\mathbf{S}}_{0} = {\mathbf{P}}_{0} {\mathbf{H}}^{\text{T}} {\mathbf{V}}^{ - 1} ,\quad {\text{and}}\quad {\mathbf{P}}_{0} = \left( {{\mathbf{H}}^{\text{T}} {\mathbf{V}}^{ - 1} {\mathbf{H}}} \right)^{ - 1} . $$

In Eq. (2), \( {\mathbf{S}}_{0} \) is defined as the system matrix, and \( {\mathbf{P}}_{0} \) is the covariance matrix of the full state estimate \( {\hat{\mathbf{x}}}_{0} \). \( {\varvec{\upalpha}}_{r} \) is a \( 1 \times m \) vector with the subscript ‘r’ identifying the rth element of \( {\hat{\mathbf{x}}}_{0} \). For example, \( r = 3 \) corresponds to extracting the vertical component of the position estimate. If two constellations are employed, then \( {\varvec{\upalpha}}_{3} = \left[ {\begin{array}{*{20}c} 0 & 0 & 1 & 0 & 0 \\ \end{array} } \right] \).
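To make Eq. (2) concrete, the following sketch computes \( \hat{x}_{0} \) for a small synthetic geometry. The geometry, error covariance and state ordering are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Toy geometry (assumed): n = 5 measurements, m = 4 states
# (3 position components + 1 receiver clock bias).
rng = np.random.default_rng(0)
H = np.hstack([rng.normal(size=(5, 3)), np.ones((5, 1))])
V = np.diag([1.0, 1.2, 0.8, 1.5, 1.1])   # illustrative error covariance

# Eq. (2): P0 = (H^T V^-1 H)^-1 and S0 = P0 H^T V^-1
V_inv = np.linalg.inv(V)
P0 = np.linalg.inv(H.T @ V_inv @ H)
S0 = P0 @ H.T @ V_inv

# alpha_r extracts one state; r = 3 picks the vertical component here
alpha_r = np.array([0.0, 0.0, 1.0, 0.0])

# With noise-free measurements the LS estimate recovers the truth exactly
x_true = np.array([10.0, -5.0, 3.0, 2.0])
z = H @ x_true
x_hat_0 = alpha_r @ S0 @ z
print(round(x_hat_0, 6))   # -> 3.0
```

The check `S0 @ H = I` (the LS estimator is unbiased) also holds by construction of Eq. (2).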

The SS test statistics \( \Delta_{d} \) are defined in position domain, which are the differences between the full-set position solution \( \hat{x}_{0} \) and the subset solutions \( \hat{x}_{d} \) [12, 15]. Using similar notations as our previous work [19, 22], the normalized statistics can be expressed as:
$$ q_{d} = \frac{{\hat{x}_{0} - \hat{x}_{d} }}{{\sigma_{{\Delta_{d} }} }} = \frac{{\varepsilon_{0} - \varepsilon_{d} }}{{\sigma_{{\Delta_{d} }} }},\quad {\text{for}}\quad d = 1\ldots n, $$
where the subscript d indexes the detection test statistics from 1 to n. Because the purpose of this work is to present the general idea, only single SV fault modes are considered throughout this paper; therefore, the number of monitored fault modes equals the number of visible SVs, n. \( \hat{x}_{d} \) is the position estimate using all satellites except the one in fault hypothesis d. The evaluation of \( \hat{x}_{d} \) takes a similar form to Eq. (2), i.e., \( \hat{x}_{d} = {\varvec{\upalpha}}_{r} {\mathbf{S}}_{d} {\mathbf{z}} \), except that the elements associated with the fault mode are set to 0 in the new system matrix \( {\mathbf{S}}_{d} \) [22]. \( \varepsilon_{0} \) and \( \varepsilon_{d} \) are, respectively, the position estimate errors of \( \hat{x}_{0} \) and \( \hat{x}_{d} \), i.e., \( \varepsilon_{0} = \hat{x}_{0} - x \) and \( \varepsilon_{d} = \hat{x}_{d} - x \), where \( x \) is the true position of the user. Reference [22] also proved that all three variables \( \Delta_{d} \), \( \varepsilon_{0} \) and \( \varepsilon_{d} \) follow normal distributions. In this paper, their bounding biases are, respectively, noted as \( \mu_{{\Delta_{d} }} \), \( \mu_{0} \), \( \mu_{d} \), and their corresponding standard deviations are noted as \( \sigma_{{\Delta_{d} }} \), \( \sigma_{0} \), \( \sigma_{d} \).

In the detection step, the statistics in Eq. (3) are compared with their corresponding thresholds \( T_{d} \), which are derived in the next subsection to meet an allocated FA budget. If any statistic exceeds its threshold, i.e., if \( \mathop {\bigcup }\nolimits_{d = 1}^{n} \left| {q_{d} } \right| > T_{d} \), then an alert is issued, indicating that a fault may be present: this event is labeled \( D_{0} \). Otherwise, if all test statistics are smaller than their thresholds, i.e., if \( \mathop {\bigcap }\nolimits_{d = 1}^{n} \left| {q_{d} } \right| < T_{d} \), then there is no detection (event \( \bar{D}_{0} \)), and the operation continues.
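As a sketch of how the subset solutions and normalized SS statistics of Eq. (3) can be formed, the code below de-weights one measurement at a time; the geometry and noise levels are assumed purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 4
H = np.hstack([rng.normal(size=(n, 3)), np.ones((n, 1))])
V = np.diag([1.0, 1.2, 0.8, 1.5, 1.1])
alpha_r = np.array([0.0, 0.0, 1.0, 0.0])   # vertical component

def system_matrix(drop=None):
    """System matrix of Eq. (2); measurement `drop` is removed from the fit
    by zeroing its weight, which zeroes the matching column of S."""
    W = np.linalg.inv(V).copy()
    if drop is not None:
        W[drop, drop] = 0.0
    return np.linalg.inv(H.T @ W @ H) @ H.T @ W

S0 = system_matrix()
z = H @ np.array([10.0, -5.0, 3.0, 2.0]) + rng.normal(scale=0.1, size=n)

q = np.empty(n)
for d in range(n):
    Sd = system_matrix(drop=d)
    delta_d = alpha_r @ (S0 - Sd) @ z                    # x_hat_0 - x_hat_d
    var_dd = alpha_r @ (Sd @ V @ Sd.T - S0 @ V @ S0.T) @ alpha_r
    q[d] = delta_d / np.sqrt(var_dd)                     # Eq. (3)

print(np.round(q, 2))
```

Here `var_dd` uses the classical result that \( \sigma_{\Delta_d}^2 = \sigma_d^2 - \sigma_0^2 \), which follows from the correlation between the full-set and subset estimates.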

2.2 Evaluation of FA probability and IR

For ARAIM FD only, the probability of FA (\( P_{\text{FA}} \)) is the major contribution to the overall continuity risk, or probability of LOC (\( P_{\text{LOC}} \)). It can be expressed as [19]:
$$ P_{\text{FA}} = P\left( {\left. {D_{0} } \right|H_{0} } \right)P_{{H_{0} }} = P\left( {\left. {\bigcup\limits_{d = 1}^{n} {\left| {q_{d} } \right| > T_{d} } } \right|H_{0} } \right)P_{{H_{0} }} < \sum\limits_{d = 1}^{n} {P\left( {\left. {\left| {q_{d} } \right| > T_{d} } \right|H_{0} } \right)} P_{{H_{0} }} , $$
where \( P_{{H_{0} }} \) is the probability of the FF hypothesis. To avoid confusion, it is worth clarifying that the subscript ‘0’ of \( H_{0} \) indicates the FF state, whereas the ‘0’ of \( \hat{x}_{0} \), \( \varepsilon_{0} \), \( D_{0} \) and \( \bar{D}_{0} \) represents the use of all-in-view satellites. As in most detection problems, the detection thresholds of ARAIM are determined by limiting \( P_{\text{FA}} \). Let \( P_{{{\text{FA}},{\text{REQ}}}} \) be the FA requirement allocated from the overall continuity risk budget \( C_{\text{REQ}} \). To meet \( P_{\text{FA}} < P_{{{\text{FA}},{\text{REQ}}}} \), the FD thresholds can be computed as: \( T_{d} = Q^{ - 1} \left( {\frac{{P_{{{\text{FA}},{\text{REQ}}}} }}{{2hP_{{H_{0} }} }}} \right) \), where \( Q^{ - 1} \) is the inverse tail probability function of the standard normal distribution and \( h \) is the number of monitored fault modes sharing the FA budget.
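The threshold computation can be sketched with the standard library's normal distribution; the budget values and the even per-mode allocation below are assumptions for illustration, not requirement values from the paper.

```python
from statistics import NormalDist

def fd_threshold(p_fa_req: float, p_h0: float, h: int) -> float:
    """T_d = Q^{-1}( P_FA,REQ / (2 h P_H0) ), where Q^{-1} is the inverse
    tail probability of the standard normal (two-sided test, FA budget
    split evenly over h monitored fault modes -- an assumed allocation)."""
    p_tail = p_fa_req / (2.0 * h * p_h0)
    return NormalDist().inv_cdf(1.0 - p_tail)

# Illustrative numbers: 1e-6 FA budget, P_H0 ~ 1, 10 single-SV fault modes
T = fd_threshold(p_fa_req=1e-6, p_h0=1.0, h=10)
print(f"T_d = {T:.2f}")
```

As expected, the threshold grows as the per-mode tail allocation shrinks, i.e., as more fault modes share the same FA budget.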
Integrity is usually measured in terms of IR, which is the probability that an undetected navigation system error results in hazardous misleading information (HMI). Using FD only, no integrity threat other than missed detection can affect the system. So the IR of FD only is the joint probability of having a hazard and issuing no alert (\( \bar{D}_{0} \)), which can be written as and bounded by [12]:
$$ {\text{IR}}_{\text{FD}} = P\left( {{\text{HI}}_{0} ,\bar{D}_{0} } \right) = \sum\limits_{i = 0}^{n} {P\left( {\left. {\left| {\varepsilon_{0} } \right| > \ell ,\bigcap\limits_{d = 1}^{n} {\left| {q_{d} } \right| < T_{d} } } \right|H_{i} } \right)P_{{H_{i} }} } $$
$$ < P\left( {\left. {\left| {\varepsilon_{0} } \right| > \ell } \right|H_{0} } \right)P_{{H_{0} }} + \sum\limits_{i = 1}^{n} {P\left( {\left. {\left| {\varepsilon_{i} } \right| + T_{i} \sigma_{{\Delta_{i} }} > \ell } \right|H_{i} } \right)P_{{H_{i} }} } . $$

In Eq. (5), \( {\text{HI}}_{0} \) represents the event that hazardous information exists in the full-set solution, i.e., \( \left| {\varepsilon_{0} } \right| > \ell \), where \( \ell \) is the alert limit (AL). \( H_{i} \) accounts for the FF condition (i = 0) and all the single SV fault hypotheses for i = 1, …, n, and their prior probabilities are denoted as \( P_{{H_{i} }} \).
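The bound in Eq. (6) is straightforward to evaluate numerically. The sketch below assumes zero-mean bounding distributions and purely illustrative values for the alert limit, sigmas, threshold and priors; under \( H_{i} \), the subset error \( \varepsilon_{i} \) is unaffected by fault i, so each term reduces to a normal tail probability.

```python
from statistics import NormalDist

def ir_fd_bound(ell, sigma0, p_h0, sigmas, sigma_dds, T, p_hi):
    """Upper bound of Eq. (6), assuming zero-mean bounding distributions:
    IR_FD < 2Q(ell/sigma0) P_H0 + sum_i 2Q((ell - T*sigma_dd_i)/sigma_i) P_Hi.
    All numeric inputs here are illustrative, not requirement values."""
    Q = lambda x: 1.0 - NormalDist().cdf(x)        # standard normal tail
    two_sided = lambda x, s: 1.0 if x <= 0 else 2.0 * Q(x / s)
    ir = two_sided(ell, sigma0) * p_h0
    for s_i, s_dd, p in zip(sigmas, sigma_dds, p_hi):
        ir += two_sided(ell - T * s_dd, s_i) * p
    return ir

ir = ir_fd_bound(ell=15.0, sigma0=2.0, p_h0=1.0,
                 sigmas=[2.2, 2.5, 2.1], sigma_dds=[1.0, 1.3, 0.9],
                 T=5.33, p_hi=[1e-5] * 3)
print(f"IR_FD bound ~ {ir:.2e}")
```

Note how the faulted terms dominate whenever \( \ell - T\sigma_{\Delta_i} \) is small relative to \( \sigma_i \), mirroring the structure of Eq. (6).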

2.3 SS test statistics in parity space

The parity space representation is the most illustrative expression of the detection process using measurement redundancy. It was introduced for residual-based (RB) RAIM in [3, 12]. To obtain the parity vector, Eq. (1) is first normalized by pre-multiplying by \( {\mathbf{V}}^{{ - \frac{1}{2}}} \). Then, the normalized measurement vector, observation matrix, noise vector and fault vector, respectively, become: \( {\mathbf{z}}^{ *} = {\mathbf{V}}^{{ - \frac{1}{2}}} {\mathbf{z}} \), \( {\mathbf{H}}^{ *} = {\mathbf{V}}^{{ - \frac{1}{2}}} {\mathbf{H}} \), \( {\mathbf{v}}^{ *} = {\mathbf{V}}^{{ - \frac{1}{2}}} {\mathbf{v}} \) and \( {\mathbf{f}}^{ *} = {\mathbf{V}}^{{ - \frac{1}{2}}} {\mathbf{f}} \). The \( \left( {n - m} \right) \times n \) parity matrix \( {\mathbf{Q}} \) is obtained by taking the singular value decomposition (SVD) of \( {\mathbf{H}}^{ *} \) [23]. Let the SVD result be:
$$ {\mathbf{H}}_{n \times m}^{ * } = {\mathbf{U}}_{n \times n} \left[ {\begin{array}{*{20}l} {{\mathbf{S}}_{m \times m} } \hfill \\ {{\mathbf{0}}_{(n - m) \times m} } \hfill \\ \end{array} } \right]{\mathbf{V}}_{m \times m}^{\text{T}} ,\quad {\text{where}}\quad {\mathbf{U}}_{n \times n} = \left[ {\begin{array}{*{20}l} {{\mathbf{U}}_{1,n \times m} } \hfill & {{\mathbf{U}}_{2,n \times (n - m)} } \hfill \\ \end{array} } \right]. $$
Defining \( {\mathbf{U}}_{2}^{\text{T}} \) as the parity matrix \( {\mathbf{Q}} \), the \( \left( {n - m} \right) \times 1 \) parity vector \( {\mathbf{p}} \) is:
$$ {\mathbf{p}} = {\mathbf{Qz}}^{ * } = {\mathbf{Q}}({\mathbf{v}}^{ * } + {\mathbf{f}}^{ * } ). $$
Moreover, [12] has proved the following relationships:
$$ {\mathbf{QH}}^{ * } = {\mathbf{0}}_{(n - m) \times m} ,\quad {\mathbf{QQ}}^{\text{T}} = {\mathbf{I}}_{(n - m)} \quad {\text{and}}\quad {\mathbf{Q}}^{\text{T}} {\mathbf{Q}} = {\mathbf{I}}_{n} - {\mathbf{H}}^{ * } {\mathbf{S}}_{0}^{ * } , $$
where \( {\mathbf{S}}_{0}^{*} {\mathbf{V}}^{{ - \frac{1}{2}}} = {\mathbf{S}}_{0} \), and \( {\mathbf{I}}_{n} \) is an \( n \times n \) identity matrix.

Given that \( \Delta_{i} = {\varvec{\upalpha}}_{r} \left( {{\mathbf{S}}_{i}^{*} {\mathbf{H}}^{*} {\mathbf{S}}_{0}^{*} - {\mathbf{S}}_{i}^{*} } \right){\mathbf{z}}^{*} \) and \( {\mathbf{S}}_{0}^{*} = {\mathbf{S}}_{i}^{*} {\mathbf{H}}^{*} {\mathbf{S}}_{0}^{*} \) [22], \( \Delta_{i} \) can be expressed as:
$$ \Delta_{i} = {\varvec{\upalpha}}_{r} \left( {{\mathbf{S}}_{i}^{ * } {\mathbf{H}}^{ * } {\mathbf{S}}_{0}^{ * } - {\mathbf{S}}_{i}^{ * } } \right){\mathbf{z}}^{ * } = - {\varvec{\upalpha}}_{r} {\mathbf{S}}_{i}^{ * } {\mathbf{Q}}^{\text{T}} {\mathbf{Qz}}^{ * } = - {\varvec{\upalpha}}_{r} {\mathbf{S}}_{i}^{ * } {\mathbf{Q}}^{\text{T}} {\mathbf{p}}. $$
The standard deviation of \( \Delta_{i} \) is equivalent to:
$$ \sigma_{{\Delta_{i} }} = \sqrt {{\varvec{\upalpha}}_{r} \left( {{\mathbf{S}}_{i}^{ * } {\mathbf{S}}_{i}^{{ * {\text{T}}}} - {\mathbf{S}}_{i}^{ * } {\mathbf{H}}^{ * } {\mathbf{S}}_{0}^{{ * {\text{T}}}} {\mathbf{S}}_{i}^{{ * {\text{T}}}} } \right){\varvec{\upalpha}}_{r}^{\text{T}} } = \sqrt {{\varvec{\upalpha}}_{r} {\mathbf{S}}_{i}^{ * } {\mathbf{Q}}^{\text{T}} {\mathbf{QS}}_{i}^{{*{\text{T}}}} {\varvec{\upalpha}}_{r}^{\text{T}} } . $$
Therefore, the relationship between \( q_{i} \) and \( {\mathbf{p}} \) can be established:
$$ q_{i} = {\mathbf{w}}_{i} {\mathbf{p}},\quad {\text{where}}\quad {\mathbf{w}}_{i} = \frac{{ - {\varvec{\upalpha}}_{r} {\mathbf{S}}_{i}^{ * } {\mathbf{Q}}^{\text{T}} }}{{\sqrt {{\varvec{\upalpha}}_{r} {\mathbf{S}}_{i}^{ * } {\mathbf{Q}}^{\text{T}} \left( {{\varvec{\upalpha}}_{r} {\mathbf{S}}_{i}^{ * } {\mathbf{Q}}^{\text{T}} } \right)^{\text{T}} } }}. $$

In this work, \( {\mathbf{w}}_{i} \) is defined as the “fault mode line” for \( H_{i} \). The projection of the parity vector onto this line is the corresponding normalized SS test statistic \( q_{i} \).
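The parity matrix construction and the relationships in Eq. (10) can be checked numerically. The geometry below is an assumption for illustration, and \( {\mathbf{V}} = {\mathbf{I}} \) so that \( {\mathbf{H}}^{*} = {\mathbf{H}} \).

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 4
Hs = np.hstack([rng.normal(size=(n, 3)), np.ones((n, 1))])  # H* (V = I assumed)

# Full SVD; the last n - m columns of U span the left null space of H*
U, s, Vt = np.linalg.svd(Hs)         # full_matrices=True by default
Q = U[:, m:].T                       # (n - m) x n parity matrix = U2^T

S0s = np.linalg.inv(Hs.T @ Hs) @ Hs.T     # S0* for unit-variance errors

assert np.allclose(Q @ Hs, 0.0)                       # Q H* = 0
assert np.allclose(Q @ Q.T, np.eye(n - m))            # Q Q^T = I
assert np.allclose(Q.T @ Q, np.eye(n) - Hs @ S0s)     # Q^T Q = I - H* S0*
print("Eq. (10) relations hold")
```

The last relation shows that \( {\mathbf{Q}}^{\text{T}} {\mathbf{Q}} \) is the orthogonal projector onto the complement of the column space of \( {\mathbf{H}}^{*} \), which is what makes the parity vector insensitive to the true state.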

To visualize SS-based FD in parity space, a simple measurement model is employed here. The observation matrix and the error model are, respectively, \( {\mathbf{H}} = \left[ {\begin{array}{*{20}c} 1 & 1 & 1 \\ \end{array} } \right]^{\text{T}} \) and \( {\mathbf{v}}\sim N\left( {0_{3 \times 1} , {\mathbf{I}}_{3} } \right) \). Three fault modes i = 1, 2, 3, corresponding to each measurement, are considered in this problem. Since there are two redundant measurements, this example can be easily demonstrated in a 2D parity space.
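For the same three-measurement example, the fault mode lines of Eq. (12) and the projection property can be computed directly; the fault magnitude below is an assumed value, and noise is omitted so the outcome is exact.

```python
import numpy as np

# The example model from the text: H = [1 1 1]^T, v ~ N(0, I3)
H = np.ones((3, 1))
n, m = 3, 1

U, _, _ = np.linalg.svd(H)
Q = U[:, m:].T                        # 2 x 3 parity matrix

def fault_mode_line(i):
    """w_i of Eq. (12); alpha_r = [1] since there is a single state."""
    W = np.eye(n)
    W[i, i] = 0.0                     # subset estimator without measurement i
    Si = np.linalg.inv(H.T @ W @ H) @ H.T @ W
    v = -(Si @ Q.T).ravel()
    return v / np.linalg.norm(v)

w = np.array([fault_mode_line(i) for i in range(n)])

# A fault of size 5 on measurement 1 drives p along w_1 (noise-free here)
p = Q @ np.array([5.0, 0.0, 0.0])
q = w @ p                             # projections = normalized SS statistics
print(np.argmax(np.abs(q)))          # -> 0, i.e., fault mode 1
```

For a single-SV fault, \( {\mathbf{w}}_{i} \) is parallel to \( {\mathbf{Q}}{\mathbf{e}}_{i} \), so the fault drives the parity vector along its own fault mode line, as the figure illustrates.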

Figure 1 presents the ARAIM FD process under fault hypothesis \( H_{1} \), i.e., the fault lies along line \( {\mathbf{w}}_{1} \) (highlighted red). The yellow arrow represents the parity vector, and its projections onto the three fault mode lines are the associated normalized SS test statistics, i.e., Eq. (12). The blue region on the right indicates the no-detection event \( \bar{D}_{0} \), in which all the statistics are less than their thresholds. It has a hexagonal shape because the magnitudes of the thresholds are equal for all three \( q_{i} \). In this example, the parity vector lies outside the no-detection region, so there is a detection event.
Fig. 1

Parity space representation of ARAIM FD

3 Development of the new FDE scheme

So far, we have described the ARAIM FD process, which only addresses the impact of FA events on continuity. In fact, because ARAIM will make use of multiple constellations, the heightened likelihood of encountering true FD events can significantly increase continuity risk. Therefore, to improve ARAIM continuity, an exclusion step must be implemented after FD, especially given that ARAIM will operate as a primary means of navigation [19].

3.1 Two-step-based FE function designs

Unlike the MHSS FD algorithm, which is generally accepted, the currently proposed FE algorithms are mostly heuristic. Figure 2 shows the flow diagram of a typical FDE procedure, which is composed of three major steps: IR evaluation (labeled blue), FD function implementation (labeled green) and FE function implementation (labeled red). As the precondition of the whole process, the overall IR of the FDE function (\( {\text{IR}}_{\text{FDE}} \)) is first computed and compared with the requirement \( I_{\text{REQ}} \). The receiver proceeds to the remaining steps only if \( {\text{IR}}_{\text{FDE}} < I_{\text{REQ}} \). For continuity, the key design element is the exclusion step after detection, and the mechanism for determining which SV to exclude. In most current designs, the FE function consists of two steps: determining the order of the exclusion options and making the final decision. Figure 2 lists three ways to order the exclusion candidates [17, 18, 19], whose goal is to develop an efficient route for the exclusion attempt. To make the final exclusion choice, a second-layer detection test is employed to confirm that the new satellite subset after exclusion is FF [17]. Following the order made in the first step, multiple exclusion tests are implemented from j = 1 to n, and the final decision (noted as \( E_{j} \)) is made only if there is no second-layer detection event after excluding SV j.
Fig. 2

Flow diagram of a state-of-the-art FDE process

Although the two-step FE function design has been widely proposed and discussed, it has several major disadvantages. First, its computational cost is extremely high, which may not be feasible for an on-board ARAIM user receiver. This is because carrying out the exclusion test requires reevaluating the position estimates and statistics using the second-layer SV subsets, which doubles the computational load relative to implementing the FD function alone. In addition, it may take a number of iterations before the final exclusion decision can be made, and each iteration corresponds to going through one more FD process. In the worst case, when no exclusion (NE) can be validated after testing all the candidates, the system outputs no useful information while consuming a large amount of computational power. Given that the complexity of the MHSS FD algorithm has already caused issues [20, 24, 25], it is highly undesirable to add any additional computational load at the user. Beyond the computational concern, another major problem of the current FE function design is the tradeoff between continuity and integrity. According to the algorithm description [17], the continuity improvement is highly dependent on the threshold setting of the second-layer test statistics. Although the continuity risk can be reduced by employing a larger threshold, this also leads to a higher IR. For cases in which the continuity requirement is stringent, the resulting \( {\text{IR}}_{\text{FDE}} \) may exceed \( I_{\text{REQ}} \).

3.2 Real-time implementation of the integrated FDE scheme

To overcome the shortcomings of the existing algorithms, our proposed method unifies the FD and FE functions into one process. As shown in Fig. 3, the real-time implementation of the integrated FDE scheme is greatly simplified compared with the one in Fig. 2. Using the new approach, the final exclusion decision can be made directly after an alert is triggered. Because the second-layer FD test and the iteration steps are removed, the algorithm efficiency is dramatically improved. The basis for determining the excluded SV subset is the properties of the parity vector projections in parity space. Under a single SV fault condition, the mean of the parity vector lies along the fault mode line, and the deviation of the parity vector from this line is only caused by the noise from the other FF measurements. Therefore, the projection onto the actual fault mode line is expected to be the maximum, which is why its corresponding SV subset is chosen for exclusion. Because the maximum projection can always be found, the exclusion attempt never fails, which means that operational continuity can be fully preserved after a FD event occurs. According to the derivations in the prior section, the projections for single SV fault modes are equivalent to their normalized SS test statistics, so the \( q_{i} \) can be used directly to make the exclusion decision. This is captured by the solid purple arrow in Fig. 3.
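The integrated decision logic described above amounts to a single pass over the FD statistics: one scan yields both the alert and, when one is raised, the exclusion choice. The sketch below is a minimal rendering of that logic with an assumed placeholder threshold.

```python
import numpy as np

def integrated_fde(q, T):
    """Integrated FDE (sketch). q: normalized SS statistics from the FD
    step; T: common detection threshold. Returns None when all statistics
    are below T (no detection), otherwise the index of the fault mode with
    the maximum projection, whose SV subset is excluded."""
    q = np.abs(np.asarray(q, dtype=float))
    if np.all(q < T):
        return None                   # event D0-bar: operation continues
    return int(np.argmax(q))          # exclusion decision, no second layer

T = 5.33                              # placeholder threshold value
assert integrated_fde([0.5, -1.2, 0.8], T) is None
assert integrated_fde([6.1, -2.0, 1.4], T) == 0
assert integrated_fde([2.0, -7.5, 6.0], T) == 1
print("decision logic ok")
```

Because an argmax always exists, the function never returns a "no exclusion" outcome once a detection occurs, which is exactly the continuity-preserving property claimed for the scheme.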
Fig. 3

Flow diagram of the integrated FDE scheme

It is noteworthy that the second-layer detection test is no longer performed in the new approach, which may raise the question of whether the resulting position estimate after exclusion can still be trusted. The rationale for not adopting a double check is that the new exclusion scheme is expected to exclude the faulted measurement, which should generate a FF position estimate. As with all exclusion algorithms, it is true that a FF subset can be wrongly excluded using this approach, but operational safety is still ensured as long as the probability of this event is rigorously accounted for in the overall integrity risk quantification, i.e., \( {\text{IR}}_{\text{FDE}} \). As shown in Fig. 3, a priori integrity check is employed, and the mission is executed only if \( {\text{IR}}_{\text{FDE}} < I_{\text{REQ}} \). Therefore, the general \( {\text{IR}}_{\text{FDE}} \) should capture all possible events that the user may encounter, including wrong exclusion (WE). Evaluating \( {\text{IR}}_{\text{FDE}} \) is a key aspect of this work, and the details are addressed in the next section.

Using the example introduced in the prior section, the FDE procedures are visually presented in parity space in Fig. 4. Because measurement 1 is faulted, the parity vector p moves along \( {\mathbf{w}}_{1} \) as the fault magnitude (i.e., \( f_{1} \)) varies, with small deviations orthogonal to \( {\mathbf{w}}_{1} \) due to nominal noise on measurements 2 and 3. The left figure corresponds to the conventional FDE algorithm, where the exclusion zones are distinguished by multiple colors. The green band represents the correct exclusion (CE) event, in which measurement 1 is excluded. The red bands result in WE, since measurement 2 or 3 is excluded. The overlapping regions are labeled blue and purple; in these regions, more than one exclusion option passes the second-layer detection test. Finally, the white areas capture the cases in which NE can be made after testing all the exclusion options (\( \bar{E} \)). For this approach, the widths of the bands are set by the magnitudes of the exclusion thresholds \( T_{e,l} \), and the parity vector's location is determined by comparing the second-layer test statistics \( q_{e,l} \) with \( T_{e,l} \) [17, 19]. Therefore, Fig. 4 (left) restates the fact that the computational cost of the conventional FDE approach is significantly high because (a) it requires computing \( q_{e,l} \) and \( T_{e,l} \), and (b) it requires an iterative search to make the final exclusion decision. In addition, even if \( {\mathbf{p}} \) lies in the white regions, the algorithm still consumes the same amount of computational power. And if \( {\mathbf{p}} \) is in the blue or purple areas, the probability of a WE event is increased, which is highly undesirable for the users. In comparison to the conventional approach, the figure on the right presents our proposed integrated FDE scheme in parity space, where the CE region is labeled green and the WE regions are labeled red.
Because the regions are distinguished by the magnitudes of \( q_{i} \), their borders lie midway between two fault mode lines. In the new approach, there are no overlapping regions and no NE regions, which restates the fact that the exclusion decision can be made immediately using only the information from the FD step, and that continuity can be fully preserved.
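In the same spirit as the paper's Monte-Carlo verification, the sketch below samples noisy parity vectors under a fault on measurement 1 of the three-measurement example and counts how often the maximum projection names the correct measurement; the fault magnitude and trial count are assumed values.

```python
import numpy as np

rng = np.random.default_rng(2)
H = np.ones((3, 1))
U, _, _ = np.linalg.svd(H)
Q = U[:, 1:].T                                  # 2 x 3 parity matrix

# Single-SV fault mode lines: normalized images Q e_i in parity space
w = (Q / np.linalg.norm(Q, axis=0)).T           # row i = w_i (up to sign)

trials, fault = 20000, np.array([6.0, 0.0, 0.0])  # fault on measurement 1
correct = 0
for _ in range(trials):
    p = Q @ (rng.normal(size=3) + fault)        # noisy parity vector
    if np.argmax(np.abs(w @ p)) == 0:           # max projection -> exclude
        correct += 1
rate = correct / trials
print(f"correct-exclusion rate ~ {rate:.3f}")
```

Trials landing on the wrong side of a region border correspond to the WE events that the IR quantification in the next section must account for.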
Fig. 4

Parity space representation of existing (left) and new (right) FDE approaches

Fig. 5

Graphical illustration of the upper bound derivation

4 IR quantification of the new FDE scheme

As pointed out in the introduction, IR evaluation is the key component of any GNSS augmentation system. It is also the most challenging part of this work, because the evaluation highly depends on how the FDE steps are implemented. As shown in the flow diagram of Fig. 3, a priori IR evaluation is adopted in this approach. When computing the IR in real time, the receiver does not know whether a fault will be detected, nor which SV subset will need to be excluded. Therefore, all possible situations that cause integrity threats must be characterized. As a result, the instantaneous \( {\text{IR}}_{\text{FDE}} \) is identical to the predictive \( {\text{IR}}_{\text{FDE}} \), which is usually evaluated for offline analysis purposes, and both need to account for the risks introduced by the exclusion options [17, 19, 22]:
$$ {\text{IR}}_{\text{FDE}} = P\left( {{\text{HI}}_{0} ,\bar{D}_{0} } \right) + \sum\limits_{j = 1}^{n} {P\left( {{\text{HI}}_{j} ,D_{0} ,E_{j} } \right)} , $$
where \( E_{j} \) denotes the event that SV j is excluded, and \( {\text{HI}}_{j} \) indicates that hazardous misleading information still exists even if the user position is estimated without using SV j, i.e., \( \left| {\varepsilon_{j} } \right| > \ell \). According to our new FDE scheme design, the SV that results in the largest \( q_{i} \) is chosen for exclusion. Therefore, j corresponds to the maximum normalized detection statistic (\( {\text{MAX}}_{j} \)). In the following derivations, \( E_{j} \) is replaced by \( {\text{MAX}}_{j} \), which takes the mathematical form \( \mathop {\bigcap }\limits_{d = 1,d \ne j}^{n} \left( {\left| {q_{j} } \right| > \left| {q_{d} } \right|} \right) \). With multiple fault hypotheses, Eq. (13) can be expressed and bounded as:
$$ {\text{IR}}_{\text{FDE}} = \sum\limits_{i = 0}^{n} {\mathop {\hbox{max} }\limits_{{f_{i} }} \left( {P\left( {\left. {{\text{HI}}_{0} ,\bar{D}_{0} } \right|H_{i} ,f_{i} } \right) + \sum\limits_{j = 1}^{n} {P\left( {\left. {{\text{HI}}_{j} ,D_{0} ,{\text{MAX}}_{j} } \right|H_{i} ,f_{i} } \right)} } \right)P_{{H_{i} }} } $$
$$ < \sum\limits_{i = 0}^{n} {P\left( {\left. {{\text{HI}}_{0} ,\bar{D}_{0} } \right|H_{i} } \right)P_{{H_{i} }} } + \sum\limits_{i = 0}^{n} {\sum\limits_{j = 1}^{n} {P\left( {\left. {{\text{HI}}_{j} ,D_{0} ,{\text{MAX}}_{j} } \right|H_{i} } \right)} P_{{H_{i} }} } . $$

In Eq. (14), the actual fault vector \( f_{i} \) of hypothesis \( H_{i} \) can be fully characterized by its direction and magnitude [12]. The worst-case fault is the one that maximizes the conditional \( {\text{IR}}_{\text{FDE}} \) for \( H_{i} \), i.e., the summation over all exclusion options under hypothesis \( H_{i} \). However, directly evaluating the IR using Eq. (14) is practically infeasible because (a) searching for the worst-case fault over all exclusion options is an arduous task, and (b) the correlations between the events in each exclusion option are complex. Therefore, in the upper bound of Eq. (15), \( f_{i} \) is implicitly selected to maximize each term individually, whereas in Eq. (14) a single fault vector must maximize the sum of all the terms. Summing the individually maximized risks in Eq. (15) therefore always bounds the maximized summed risk in Eq. (14).
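This bounding step can be illustrated numerically: for any collection of terms that peak at different fault magnitudes, the sum of the individual maxima dominates the maximum of the sum. A minimal sketch with two hypothetical Gaussian-shaped risk terms (illustrative only, not the paper's actual risk functions):

```python
import numpy as np

# Two toy risk terms evaluated over a grid of fault magnitudes f.
f = np.linspace(-5.0, 5.0, 1001)
t1 = np.exp(-(f - 1.0) ** 2)   # peaks at f = 1
t2 = np.exp(-(f + 2.0) ** 2)   # peaks at f = -2

# Eq. (14)-style quantity: one fault magnitude maximizes the total.
max_of_sum = np.max(t1 + t2)
# Eq. (15)-style quantity: each term is maximized independently.
sum_of_max = np.max(t1) + np.max(t2)
```

Because the two terms attain their maxima at different fault magnitudes, `sum_of_max` strictly exceeds `max_of_sum` here, mirroring why Eq. (15) is a (generally conservative) upper bound on Eq. (14).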

The first term of Eq. (15) is \( {\text{IR}}_{\text{FD}} \), whose evaluation was given in Eq. (6). Our attention now turns to the second (double-summation) term, which carries the cost of the increased IR introduced by exclusion. We first define \( {\text{IR}}_{{{\text{FDE}},i,j}} \) as the IR contribution after excluding SV j under hypothesis \( H_{i} \); it falls into three categories: the FF condition (\( i = 0 \)), CE (\( i = j \)) and WE (\( i \ne j \)). For the FF hypothesis and the CE event, the satellite subset remaining after exclusion is fault free, so the position estimation error \( \varepsilon_{j} \) is expected to be significantly smaller than the AL. Therefore, \( {\text{IR}}_{{{\text{FDE}},0,j}} \) and \( {\text{IR}}_{{\mathop {{\text{FDE}},i,j}\limits_{i = j} }} \) can still be tightly bounded after discarding all other conditioning information:
$$ {\text{IR}}_{{{\text{FDE}},0,j}} = P\left( {\left. {{\text{HI}}_{j} ,D_{0} ,{\text{MAX}}_{j} } \right|H_{0} } \right)P_{{H_{0} }} < P\left( {\left. {{\text{HI}}_{j} } \right|H_{0} } \right)P_{{H_{0} }} = P\left( {\left. {\left| {\varepsilon_{j} } \right| > \ell } \right|H_{0} } \right)P_{{H_{0} }} $$
$$ {\text{IR}}_{{\mathop {{\text{FDE}},i,j}\limits_{i = j} }} = P\left( {\left. {{\text{HI}}_{j} ,D_{0} ,{\text{MAX}}_{j} } \right|H_{i} } \right)P_{{H_{i} }} < P\left( {\left. {{\text{HI}}_{j} } \right|H_{i} } \right)P_{{H_{i} }} = P\left( {\left. {\left| {\varepsilon_{j} } \right| > \ell } \right|H_{i} } \right)P_{{H_{i} }} . $$
To evaluate the IR associated with WE event, we employ the following bound:
$$ {\text{IR}}_{{\mathop {{\text{FDE}},i,j}\limits_{i \ne j} }} = P\left( {\left. {{\text{HI}}_{j} ,D_{0} ,{\text{MAX}}_{j} } \right|H_{i} } \right)P_{{H_{i} }} = P\left( {\left. {\left| {\varepsilon_{j} } \right| > \ell ,\bigcup\limits_{d = 1}^{n} {\left| {q_{d} } \right| > T_{d} } ,\bigcap\limits_{d = 1,d \ne j}^{n} {\left( {\left| {q_{j} } \right| > \left| {q_{d} } \right|} \right)} } \right|H_{i} } \right)P_{{H_{i} }} $$
$$ < P\left( {\left. {\left| {\varepsilon_{j} } \right| > \ell ,\left| {q_{j} } \right| > \left| {q_{i} } \right|} \right|H_{i} } \right)P_{{H_{i} }} . $$
In Eq. (19), the information of \( D_{0} \) is ignored, and only the statistic of the actual fault mode is accounted for. This is because \( q_{i} \) reflects the difference between a faulted position estimate and a FF estimate, which is expected to be the driving factor of the joint probability in Eq. (18). According to the derivations in the previous section, all three variables (\( \varepsilon_{j} , q_{j} , q_{i} \)) follow normal distributions with known means and standard deviations:
$$ \varepsilon_{j} \sim N\left( {\mu_{j} ,\sigma_{j}^{2} } \right),\quad {\text{where}}\quad \mu_{j} = {\varvec{\upalpha}}_{r} {\mathbf{S}}_{j} \left( {{\mathbf{b}} + {\mathbf{f}}} \right)\quad {\text{and}}\quad \sigma_{j}^{2} = {\varvec{\upalpha}}_{r} {\mathbf{P}}_{j} {\varvec{\upalpha}}_{r}^{\text{T}} $$
$$ q_{j} \sim N\left( {\mu_{{q_{j} }} ,1} \right),\quad {\text{where}}\quad \mu_{{q_{j} }} = \frac{{{\varvec{\upalpha}}_{r} \left( {{\mathbf{S}}_{0} - {\mathbf{S}}_{j} } \right)\left( {{\mathbf{b}} + {\mathbf{f}}} \right)}}{{\sigma_{{\Delta_{j} }} }}\quad {\text{and}}\quad \sigma_{{\Delta_{j} }}^{2} = {\varvec{\upalpha}}_{r} \left( {{\mathbf{P}}_{j} - {\mathbf{P}}_{0} } \right){\varvec{\upalpha}}_{r}^{\text{T}} $$
$$ q_{i} \sim N\left( {\mu_{{q_{i} }} ,1} \right),\quad {\text{where}}\quad \mu_{{q_{i} }} = \frac{{{\varvec{\upalpha}}_{r} {\mathbf{S}}_{0} \left( {{\mathbf{b}} + {\mathbf{f}}} \right)}}{{\sigma_{{\Delta_{i} }} }}\quad {\text{and}}\quad \sigma_{{\Delta_{i} }}^{2} = {\varvec{\upalpha}}_{r} \left( {{\mathbf{P}}_{i} - {\mathbf{P}}_{0} } \right){\varvec{\upalpha}}_{r}^{\text{T}} . $$
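The estimator matrices entering these parameters can be computed directly. The sketch below is a minimal illustration, assuming a weighted least-squares estimator with unit measurement variances and a toy geometry; the subset estimator \( {\mathbf{S}}_{j} \) is formed here by zeroing the weight of SV j, and \( {\varvec{\upalpha}}_{r} \) is taken as a row selector for one state coordinate:

```python
import numpy as np

def ls_estimator(G, W):
    # Weighted least-squares estimator S and its state covariance P
    # (covariance is exact when W is the inverse measurement covariance).
    P = np.linalg.inv(G.T @ W @ G)
    return P @ G.T @ W, P

rng = np.random.default_rng(1)
n, m = 7, 4
G = rng.standard_normal((n, m))      # toy geometry: 7 SVs, 4 states
W = np.eye(n)                        # assumed unit measurement variances
S0, P0 = ls_estimator(G, W)          # all-in-view estimator

j = 2                                # SV to exclude (0-based here)
Wj = W.copy()
Wj[j, j] = 0.0                       # removing SV j == zero weight on its row
Sj, Pj = ls_estimator(G, Wj)         # subset estimator

alpha = np.zeros(m)
alpha[0] = 1.0                       # extract one position coordinate
sigma_j2 = alpha @ Pj @ alpha        # variance of the subset estimate error
sigma_dj2 = alpha @ (Pj - P0) @ alpha  # variance of the solution separation
```

Since removing a measurement can only lose information, \( {\mathbf{P}}_{j} - {\mathbf{P}}_{0} \) is positive semidefinite, which is why \( \sigma_{{\Delta_{j} }}^{2} \) is a valid variance.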

Because of the term \( \left| {q_{j} } \right| > \left| {q_{i} } \right| \), Eq. (19) is challenging to evaluate directly. Instead, we employ an upper bound, which converts Eq. (19) into a multivariate normal distribution problem. Let \( {\mathbf{q}} \) be defined as \( \left[ {q_{j} \;q_{i} } \right]^{\text{T}} \), so \( \varvec{q} \sim N\left( {{\varvec{\upmu}}_{q} , {\varvec{\Sigma}}_{q} } \right) \), where \( {\varvec{\upmu}}_{q} = \left[ {\mu_{{q_{j} }} \;\mu_{{q_{i} }} } \right]^{\text{T}} \) and the covariance matrix is \( {\varvec{\Sigma}}_{q} = \left[ {\begin{array}{*{20}l} 1 \hfill & {\sigma_{{_{{q_{j} q_{i} }} }}^{2} } \hfill \\ {\sigma_{{_{{q_{j} q_{i} }} }}^{2} } \hfill & 1 \hfill \\ \end{array} } \right] \). Then, using the leftmost figure of Fig. 5, the scenario \( \left| {q_{j} } \right| > \left| {q_{i} } \right| \) can be visually illustrated in terms of \( {\mathbf{q}} \). The red line represents the mean of \( {\mathbf{q}} \), which changes as a function of fault magnitude. In this figure the impact of the nominal bias \( {\mathbf{b}} \) is not addressed, so the red line passes through the origin when the fault magnitude is 0. As a result, the probability of \( \left| {q_{j} } \right| > \left| {q_{i} } \right| \) is equivalent to the probability of \( {\mathbf{q}} \) lying in the blue regions of the leftmost figure. As shown in the dashed box of Fig. 5, the principle of bounding Eq. (19) is to evaluate the two blue areas independently.
Let two new unit vectors be defined as \( {\mathbf{n}}_{1} = \left[ { - \frac{\sqrt 2 }{2},\;\frac{\sqrt 2 }{2}} \right] \) and \( {\mathbf{n}}_{2} = \left[ {\frac{\sqrt 2 }{2},\;\frac{\sqrt 2 }{2}} \right] \); then the new variables in the two subfigures inside the dashed box are, respectively, \( R_{1} = {\mathbf{n}}_{1} {\mathbf{q}} \) and \( R_{2} = {\mathbf{n}}_{2} {\mathbf{q}} \), where \( R_{1} \sim N\left( {\mu_{{R_{1} }} = {\mathbf{n}}_{1} {\varvec{\upmu}}_{q} , \sigma_{{R_{1} }}^{2} = {\mathbf{n}}_{1} {\varvec{\Sigma}}_{q} {\mathbf{n}}_{1}^{\text{T}} } \right) \) and \( R_{2} \sim N\left( {\mu_{{R_{2} }} = {\mathbf{n}}_{2} {\varvec{\upmu}}_{q} , \sigma_{{R_{2} }}^{2} = {\mathbf{n}}_{2} {\varvec{\Sigma}}_{q} {\mathbf{n}}_{2}^{\text{T}} } \right) \). In these subfigures, the deviation between the red line and the origin captures the worst-case impact of the nominal bias \( {\mathbf{b}} \) on the probability.
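A quick Monte-Carlo check (with assumed means and correlation, since the true values depend on geometry) confirms that the event \( \left| {q_{j} } \right| > \left| {q_{i} } \right| \) is contained in the union \( \left\{ {R_{1} < 0} \right\} \cup \left\{ {R_{2} < 0} \right\} \), so evaluating the two half-planes independently indeed yields an upper bound:

```python
import numpy as np

rng = np.random.default_rng(2)
mu = np.array([1.5, 2.5])            # assumed means of [q_j, q_i]
rho = 0.6                            # assumed correlation between q_j and q_i
cov = np.array([[1.0, rho], [rho, 1.0]])
q = rng.multivariate_normal(mu, cov, size=100_000)

n1 = np.array([-np.sqrt(2) / 2, np.sqrt(2) / 2])
n2 = np.array([np.sqrt(2) / 2, np.sqrt(2) / 2])
R1 = q @ n1                          # R1 < 0  <=>  q_j > q_i
R2 = q @ n2                          # R2 < 0  <=>  q_j < -q_i

event = np.abs(q[:, 0]) > np.abs(q[:, 1])   # |q_j| > |q_i|
union = (R1 < 0) | (R2 < 0)

p_event = event.mean()
p_bound = (R1 < 0).mean() + (R2 < 0).mean()  # sum of the two areas
```

The union is a strict superset of the event (e.g., \( q_{i} = -2, q_{j} = 1 \) lies in the union but not the event), which is exactly why the sum of the two independently evaluated probabilities bounds Eq. (19) from above rather than equaling it.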

Using the newly defined variables, the final upper bound of the IR associated with the WE event is:
$$ \begin{aligned}{\text{IR}}_{{\mathop {{\text{FDE}},i,j}\limits_{i \ne j} }} &< \left( P\left( {\left. {\left| {\varepsilon_{j} } \right| > \ell ,R_{1} < 0} \right|H_{i} ,f_{i} } \right) \right.\\ &\left.+ P\left( {\left. {\left| {\varepsilon_{j} } \right| > \ell ,R_{2} < 0} \right|H_{i} ,f_{i} } \right) \right)P_{{H_{i} }} . \end{aligned}$$
Given the fault direction of each single-SV fault mode, Eq. (23) can be computed by searching for the worst-case fault magnitude that maximizes \( {\text{IR}}_{{\mathop {{\text{FDE}},i,j}\limits_{i \ne j} }} \). The correlations among \( \varepsilon_{j} \), \( R_{1} \) and \( R_{2} \) are derived in the “Appendix”, and the covariance matrices of the two joint probabilities are:
$$ {\varvec{\Sigma}}_{1} = \left[ {\begin{array}{*{20}c} {\sigma_{j}^{2} } & {\sigma_{{j\,R_{1} }}^{2} } \\ {\sigma_{{j\,R_{1} }}^{2} } & {\sigma_{{R_{1} }}^{2} } \\ \end{array} } \right] \quad {\text{and}}\quad {\varvec{\Sigma}}_{2} = \left[ {\begin{array}{*{20}c} {\sigma_{j}^{2} } & {\sigma_{{j\,R_{2} }}^{2} } \\ {\sigma_{{j\,R_{2} }}^{2} } & {\sigma_{{R_{2} }}^{2} } \\ \end{array} } \right]. $$
As a result, the IR of the proposed integrated FDE scheme can be evaluated by plugging Eqs. (6), (16), (17) and (23) into Eq. (15), and its final expression is:
$$ \begin{aligned} {\text{IR}}_{\text{FDE}} & < P\left( {\left. {\left| {\varepsilon_{0} } \right| > \ell } \right|H_{0} } \right)P_{{H_{0} }} + \sum\limits_{i = 1}^{n} {P\left( {\left. {\left| {\varepsilon_{i} } \right| + T_{i} \sigma_{{\Delta_{i} }} > \ell } \right|H_{i} } \right)P_{{H_{i} }} } \\ & \quad + \sum\limits_{j = 1}^{n} {\left( {P\left( {\left. {\left| {\varepsilon_{j} } \right| > \ell } \right|H_{0} } \right)P_{{H_{0} }} + P\left( {\left. {\left| {\varepsilon_{j} } \right| > \ell } \right|H_{j} } \right)P_{{H_{j} }} } \right.} \\ & \quad \left. { + \sum\limits_{i = 1,i \ne j}^{n} {\left( {P\left( {\left. {\left| {\varepsilon_{j} } \right| > \ell ,R_{1} < 0} \right|H_{i} ,f_{i} } \right) + P\left( {\left. {\left| {\varepsilon_{j} } \right| > \ell ,R_{2} < 0} \right|H_{i} ,f_{i} } \right)} \right)P_{{H_{i} }} } } \right). \end{aligned} $$
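Each probability term of the form \( P\left( {\left| \varepsilon \right| > \ell } \right) \) for a biased normal variable reduces to two standard normal tail evaluations. A minimal sketch with illustrative numbers (the sigma, bias and alert limit below are assumptions, not values from the paper):

```python
import math

def phi(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def p_abs_exceeds(mu, sigma, ell):
    # P(|X| > ell) for X ~ N(mu, sigma^2): upper tail plus lower tail
    return (1.0 - phi((ell - mu) / sigma)) + phi((-ell - mu) / sigma)

# Illustrative numbers: 1 m error sigma, 10 m alert limit
p_ff = p_abs_exceeds(0.0, 1.0, 10.0)    # fault-free style term (zero mean)
p_flt = p_abs_exceeds(6.0, 1.0, 10.0)   # faulted style term (6 m bias)
```

The joint terms involving \( R_{1} \) or \( R_{2} \) additionally require a bivariate normal integral with the covariance matrices of Eq. (24), but the structure is the same: known means, known covariances, fixed integration regions.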

5 Results

With the theoretical methods fully derived in the previous sections, this section investigates the performance of the new FDE scheme. The analyses are carried out from the perspectives of computational efficiency, algorithm effectiveness and integrity. To clearly present the benefits of this approach, the new results are directly compared with those obtained using existing FDE methods.

Because evaluating the SS test statistics requires estimating the position solutions using SV subsets, the ARAIM computational load can be reduced by reducing the number of monitored satellite subsets [20, 24, 25]. Table 1 shows the SV subset numbers for two, three and four constellations. It is conservatively assumed that the number of visible SVs from a single constellation is 9 for GPS and BDS, and 8 for Galileo and GLONASS. Depending on the order in which the exclusion options are attempted, the subset number using the conventional FDE approach falls within a range. The results suggest that the proposed FDE scheme can significantly improve the computational efficiency, especially when multiple exclusion attempts must be made under the conventional FDE algorithm.
Table 1

Numbers of monitored SV subsets: conventional FDE method versus integrated FDE scheme, for the GPS + BDS + Galileo and GPS + BDS + Galileo + GLONASS constellation combinations

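The range quoted for the conventional approach can be reproduced with a simple counting model. The accounting below is an assumption for illustration only (n first-layer subsets, plus n − 1 second-layer subsets per exclusion attempt), not the paper's exact bookkeeping:

```python
def subsets_conventional(n):
    # Assumed model: n single-SV-out subsets for first-layer detection,
    # plus n - 1 second-layer subsets for each exclusion attempt.
    # Best case: the first candidate passes; worst case: all n are tried.
    best_case = n + (n - 1)
    worst_case = n + n * (n - 1)
    return best_case, worst_case

def subsets_integrated(n):
    # The integrated scheme reuses the n first-layer statistics only.
    return n

# e.g. 9 GPS + 8 Galileo satellites in view
best_case, worst_case = subsets_conventional(17)
```

Whatever the exact accounting, the conventional cost grows with the number of exclusion attempts, while the integrated scheme's cost stays fixed at the first-layer subset count.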
Figure 6 verifies the effectiveness of the proposed FDE scheme. A Monte-Carlo simulation with \( 10^{7} \) trials is performed, and a GPS almanac is employed to simulate the SV positions. Figure 6 corresponds to the case where the user is located at Shanghai, China, with 7 GPS satellites in view. To simulate the faulted condition, a measurement fault with magnitude varying from 0 to 50 m is injected into SV 4. The results are presented in terms of the probability of CE (left) and NE (right). Because the performance of the conventional FDE methods depends on the allocated continuity budget \( P_{\text{FDNE,REQ}} \), three scenarios are considered, and their results are labeled green, blue and black in Fig. 6. The red solid lines in both figures correspond to the newly proposed FDE scheme. Using the new approach, the maximum CE probability is always obtained without any continuity interruption, surpassing the conventional methods.
Fig. 6

Effectiveness of the integrated FDE scheme
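The verification above can be reproduced in miniature. The following hedged sketch uses a random toy geometry instead of the GPS almanac and an assumed chi-square-style detection threshold, estimating the CE rate as the fraction of trials in which detection occurs and the largest parity projection identifies the truly faulted SV:

```python
import numpy as np

rng = np.random.default_rng(3)
G = rng.standard_normal((7, 4))            # toy geometry: 7 SVs, 4 states
U, _, _ = np.linalg.svd(G)
V = U[:, 4:].T                             # parity matrix (V @ G = 0)
units = V / np.linalg.norm(V, axis=0)      # unit fault-mode directions
FAULTED, TRIALS, T = 3, 20_000, 7.81       # assumed chi-square(3 dof) threshold

def ce_rate(fmag):
    # Fraction of trials where the fault is detected AND the largest
    # parity projection points at the truly faulted SV (a CE event).
    f = np.zeros(7)
    f[FAULTED] = fmag
    p = (f + rng.standard_normal((TRIALS, 7))) @ V.T   # parity vectors
    detected = np.sum(p * p, axis=1) > T
    identified = np.argmax(np.abs(p @ units), axis=1) == FAULTED
    return float(np.mean(detected & identified))

baseline, with_fault = ce_rate(0.0), ce_rate(50.0)
```

As in Fig. 6, the CE probability rises with the injected fault magnitude, and no continuity budget is consumed because no exclusion attempt is ever abandoned.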

Figure 7 presents the IR associated with the integrated FDE scheme. Using the same GPS almanac as for Fig. 6, the results are evaluated over a 1-day period at Shanghai, China. In the figure, the red line lies between the green and blue lines, which indicates that the new FDE approach may lead to a larger IR than the conventional approach in some special cases. However, unlike the conventional approach, whose IR reduction comes at the cost of increased continuity risk, the IR of the proposed FDE scheme is fixed. Therefore, operational continuity can always be fully preserved using the integrated FDE scheme, which is the key advantage of this method.
Fig. 7

IR of the integrated FDE scheme over a day

6 Conclusion

This paper proposes a novel integrity monitoring scheme against GNSS faults for civil aviation navigation. The main contributions are (a) developing an efficient user algorithm that integrates the FDE functions, and (b) deriving the analytical methods to quantify its corresponding IR. In this work, the projection matrix is derived for single-satellite failure modes, the mechanism for determining the exclusion subset is established based on the projection magnitudes, and the false exclusion probabilities are rigorously accounted for in the IR quantification. Using the new approach, the computational load of the user is significantly reduced, and the effectiveness of correctly excluding the faulted SV is improved. In addition, full continuity can be preserved, while achieving a promising IR that is of the same order of magnitude as the conventional FDE methods. In future work, multiple fault modes will be accounted for, and the scheme will be validated under baseline ARAIM simulation scenarios.

Supplementary material

42401_2019_39_MOESM1_ESM.rar — Supplementary material 1 (RAR, 1004 kb)


  1. ICAO (2009) Annex 10, Aeronautical telecommunications, volume 1 (radio navigation aids), Amendment 84, published 20 July 2009, effective 19 November 2009
  2. ICAO (2016) Doc 9750-AN/963, 2016–2030 global air navigation plan, 5th edn
  3. Pervan B (1996) Navigation integrity for aircraft precision landing using the global positioning system. Ph.D. dissertation, Department of Aeronautics and Astronautics, Stanford University, Stanford, CA
  4. Zhai Y, Joerger M, Pervan B (2016) H-ARAIM exclusion: requirements and performance. In: Proceedings of the 29th international technical meeting of the satellite division of the institute of navigation, Portland, Oregon, September 2016, pp 1713–1725
  5. Zhai Y, Joerger M, Pervan B (2017) Bounding continuity risk in H-ARAIM FDE. In: Proceedings of the ION 2017 Pacific PNT meeting, Honolulu, Hawaii, May 2017, pp 20–35
  6. Zhai Y, Zhan X, Joerger M, Pervan B (2019) Impact quantification of satellite outages on air navigation continuity. IET Radar Sonar Navig 13(3):376–383
  7. RTCA Special Committee 159 (1991) Minimum operational performance standards for airborne supplemental navigation equipment using global positioning system (GPS). RTCA/DO-208, July 1991
  8. RTCA Special Committee 159 (2004) Minimum aviation system performance standards for the local area augmentation system (LAAS). RTCA/DO-245
  9. RTCA Special Committee 159 (2006) Minimum operational performance standards for global positioning system/wide area augmentation system airborne equipment. RTCA/DO-229D, Washington, DC
  10. Lee YC (1986) Analysis of range and position comparison methods as a means to provide GPS integrity in the user receiver. In: Proceedings of the 42nd annual meeting of the institute of navigation, Seattle, WA, pp 1–4
  11. Parkinson BW, Axelrad P (1988) Autonomous GPS integrity monitoring using the pseudorange residual. Navigation 35(2):255–274
  12. Joerger M, Chan F-C, Pervan B (2014) Solution separation versus residual-based RAIM. Navig J Inst Navig 61(4):273–291
  13. Gibbons G (2012) Munich summit charts progress of GPS, GLONASS, Galileo, Beidou GNSSes. Inside GNSS, March 20, 2012
  14. Federal Aviation Administration (2010) Phase II of the GNSS evolutionary architecture study. Accessed 16 Nov 2019
  15. Blanch J, Walter T, Enge P, Lee Y, Pervan B, Rippl M, Spletter A, Kropp V (2015) Baseline advanced RAIM user algorithm and possible improvements. IEEE Trans Aerosp Electron Syst 51(1):713–732
  16. EU-U.S. Cooperation on Satellite Navigation, Working Group C (2016) ARAIM technical subgroup milestone 3 report. Accessed 16 Nov 2019
  17. Joerger M, Pervan B (2016) Fault detection and exclusion using solution separation and Chi squared RAIM. IEEE Trans Aerosp Electron Syst 52(2):726–742
  18. Blanch J, Walter T, Enge P (2017) Protection levels after fault exclusion for advanced RAIM. Navig J Inst Navig 64(4):505–513
  19. Zhai Y, Joerger M, Pervan B (2018) Fault exclusion in multi-constellation global navigation satellite systems. J Navig 71(6):1281–1298
  20. Zhai Y, Zhan X, Chang J, Pervan B (2019) ARAIM with more than two constellations. In: Proceedings of the ION 2019 Pacific PNT meeting, Honolulu, Hawaii, April 2019, pp 925–941
  21. Walter T, Gunning K, Blanch J (2018) Validation of the unfaulted error bounds for ARAIM. Navig J Inst Navig 65(1):117–133
  22. Zhai Y (2018) Ensuring navigation integrity and continuity using multi-constellation GNSS. Ph.D. dissertation, Illinois Institute of Technology, Chicago, IL
  23. Chan F-C, Joerger M, Khanafseh S, Pervan B (2014) Bayesian fault-tolerant position estimator and integrity risk bound for GNSS navigation. J Navig 67(5):753–775
  24. Walter T, Blanch J, Enge P (2014) Reduced subset analysis for multi-constellation ARAIM. In: Proceedings of the 2014 international technical meeting of the institute of navigation, San Diego, California, January 2014, pp 89–98
  25. Ge Y, Wang Z, Zhu Y (2017) Reduced ARAIM monitoring subset method based on satellites in different orbital planes. GPS Solut 21(4):1443–1456

Copyright information

© Shanghai Jiao Tong University 2019

Authors and Affiliations

  1. Shanghai Jiao Tong University, Shanghai, China
