# Multiple particle filtering for tracking wireless agents via Monte Carlo likelihood approximation


## Abstract

The localization of multiple wireless agents via, for example, distance and/or bearing measurements is challenging, particularly if relying on beacon-to-agent measurements alone is insufficient to guarantee accurate localization. In these cases, agent-to-agent measurements also need to be considered to improve the localization quality. In the context of particle filtering, the computational complexity of tracking many wireless agents is high when relying on conventional schemes. This is because in such schemes, all agents’ states are estimated simultaneously using a single filter. To overcome this problem, the concept of multiple particle filtering (MPF), in which an individual filter is used for each agent, has been proposed in the literature. However, due to the necessity of considering agent-to-agent measurements, additional effort is required to derive information on each individual filter from the available likelihoods. This is necessary because the distance and bearing measurements naturally depend on the states of two agents, which, in MPF, are estimated by two separate filters. Because the required likelihood cannot be analytically derived in general, an approximation is needed. To this end, this work extends current state-of-the-art likelihood approximation techniques based on Gaussian approximation under the assumption that the number of agents to be tracked is fixed and known. Moreover, a novel likelihood approximation method is proposed that enables efficient and accurate tracking. The simulations show that the proposed method achieves up to 22% higher accuracy with the same computational complexity as that of existing methods. Thus, efficient and accurate tracking of wireless agents is achieved.

## Keywords

Wireless sensor networks · Localization · Tracking · Particle filtering · Multiple particle filtering

## Abbreviations

- AAM
Agent-to-agent measurement

- AWGN
Additive white Gaussian noise

- CFD
Computational fluid dynamics

- CI
Confidence interval

- EGA
Extended Gaussian approximation

- GA
Gaussian approximation

- LBM
Lattice Boltzmann model

- MAE
Mean absolute error

- MC
Monte Carlo

- MCA
Monte Carlo approximation

- MNS
Measurement noise scenario

- MPF
Multiple particle filter

- PD
Proposal distribution

- PE
Point-estimate approximation

- PF
Particle filter

- PPA
Particles per agent

- RFS
Random finite set

## 1 Introduction

Technological advances have opened the door for a broad variety of applications of mobile agents such as robots, drones, and sensory agents, ranging from surveillance and delivery tasks to aerial and underwater exploration. In many of these applications, accurate localization of the agents is essential for successful task completion. However, such localization often cannot be achieved using satellite-based systems such as GPS. Instead, in many cases, a set of local beacons^{1} is used, which facilitate range- and/or bearing-based localization.

However, in many scenarios, access to beacons is limited. This may be because their placement is costly or because good beacon locations cannot be determined a priori. Examples of such scenarios include those involving underwater robots, rescue robots operating in cases of disasters such as earthquakes, and the exploration of subterranean fluid-carrying structures and pipes [1, 2]. In particular, the inspection of pipeline systems, e.g., predictive maintenance, is an emerging application of miniature wireless agents. This approach promises to make obsolete the bulky robots that are currently used, which require maintenance shutdowns and thus result in production downtimes. For many of these pipeline systems, the placement of beacons would be extremely costly. In all these application cases, this beacon access limitation can be overcome through the use of numerous cooperating agents conducting range and/or bearing measurements between pairs of agents. In such application cases, on which this paper focuses henceforth, the number of agents to be tracked is fixed and known a priori. Moreover, due to size constraints imposed on the agents and corresponding energy limitations, offline processing of the agents’ readings, including the localization, is considered in this work. Access to the agents’ readings is, thus, only possible after the agents have been extracted from their operation domain.

Because many agents are needed to accomplish these tasks, computationally efficient methods are needed for localization. Classical (single) particle filters (PFs), for example, suffer from the “curse of dimensionality”, i.e., the fact that an exponentially increasing number of particles is required to accurately capture the posterior distribution as more agents need to be tracked [3]. This is because the dimensionality of the state space grows in proportion to the number of agents. To this end, the concept of multiple particle filtering (MPF), in which one particle filter is employed for each agent individually, has been proposed. However, in scenarios with limited access to beacons, such a localization process relies in large part on measurements between agents (agent-to-agent measurements (AAMs)) rather than measurements between agents and beacons (beacon-to-agent measurements). This results in a nonseparable likelihood \(p(y_{\mathbbm {i},k} | \boldsymbol {x}_{\mathbbm {i},k}, \boldsymbol {x}_{\mathbbm {j},k})\) because each measurement depends on two filters (the filter for agent \(\mathbbm {i}\)’s state \(\boldsymbol {x}_{\mathbbm {i},k}\) at time step *k* and the filter for agent \(\mathbbm {j}\)’s state \(\boldsymbol {x}_{\mathbbm {j},k}\) at time step *k*), whereas the solution to the localization problem requires a likelihood for each individual filter, \(p(\boldsymbol {y}_{\mathbbm {i},k} | \boldsymbol {x}_{\mathbbm {i},k})\) and \(p(\boldsymbol {y}_{\mathbbm {j},k} | \boldsymbol {x}_{\mathbbm {j},k})\). Here, \(\boldsymbol {x}_{\mathbbm {i},k}, y_{\mathbbm {i},k}\) denote the state vector and measurements of agent \(\mathbbm {i}\) at timestep *k*, respectively.

To address this issue, a new approximation of the sought likelihood is proposed that enables higher tracking accuracy with lower computational complexity compared with what can be achieved using the existing methods in the literature.

### 1.1 Related works

The literature is rich in PF-based localization algorithms; however, these algorithms are mainly used to track single or non-interacting^{2} agents or targets [4, 5, 6, 7]. To mitigate the enormous computational complexity that is associated with tracking many wireless agents, the concept of multiple particle filtering has been proposed. The problem of obtaining likelihood information for each individual filter based on a nonseparable likelihood was first discussed in [8]. The authors proposed a technique based on the replacement of the dependency on \(\boldsymbol {x}_{\mathbbm {j},k}\) with a dependency on an estimate of agent \({\mathbbm {j}}\)’s predicted state, \(\hat {\boldsymbol {x}}_{\mathbbm {j},k} = \sum _{{\ell }=1}^{L} w_{\mathbbm {j},k-1}^{(\ell)} \boldsymbol {x}_{\mathbbm {j},k}^{(\ell)}\), where the \(\left \{ {w}_{\mathbbm {j},k-1}^{(\ell)} \boldsymbol {x}_{\mathbbm {j},k}^{(\ell)} \right \}_{\ell }\) terms represent the corresponding particles and their weights from agent \({\mathbbm {j}}\)’s filter. This method is called point estimate approximation (PE) throughout the remainder of this manuscript because it employs the point estimates \(\hat {\boldsymbol {x}}_{\mathbbm {j},k}\).
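As an illustration, the PE scheme can be sketched in a few lines. This is a minimal sketch, not the authors' implementation: the function and variable names are hypothetical, and the measurement function and noise density are placeholders passed in by the caller.

```python
import numpy as np

def pe_likelihood(y, particles_i, particles_j, weights_j_prev, meas_fn, noise_pdf):
    """Point-estimate (PE) approximation, sketched for illustration.

    The dependency on agent j's unknown state is removed by plugging in
    the weighted mean of agent j's (predicted) particles.
    """
    # Point estimate: weighted average of agent j's particles.
    x_j_hat = np.average(particles_j, axis=0, weights=weights_j_prev)
    # Evaluate the now-separable likelihood p(y | x_i, x_j_hat) for each
    # particle of agent i's filter.
    return np.array([noise_pdf(y - meas_fn(x_i, x_j_hat)) for x_i in particles_i])
```

Collapsing agent \(\mathbbm {j}\)'s particle cloud to a single point discards its spread, which helps explain the performance gap reported for PE in Section 6.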

After a series of additional papers on this topic by the same authors [9, 10, 11, 12, 13], in [14], the authors proposed an improved version based on an approximation of the measurement function around the mean of agent \(\mathbbm {j}\)’s states via Taylor approximation. This scheme assumes the likelihood to be Gaussian, leaving only its mean and variance to be determined. The mean is derived from a second-order Taylor approximation, while the variance is derived only from first-order approximations. Moreover, the corresponding equations are provided only for additive Gaussian noise and scalar states. Consequently, to apply this scheme to the problem at hand, the generalization derived in this work is required.

Scenarios with uncertainty in the motion and/or the measurement model are covered in [15, 16], for example. In both works, a finite collection of models is considered to address, e.g., the problem of tracking persons who can change their modes of transportation (cycle, bus, train, or car). Instead of resorting to model switching, both works exploit Bayesian model averaging [17] in combination with sequential Monte Carlo methods. Theoretically, the lack of information in \(p(y_{\mathbbm {i},k}|\boldsymbol {x}_{\mathbbm {i},k} \boldsymbol {x}_{\mathbbm {j},k})\) about \(\boldsymbol {x}_{\mathbbm {j},k}\) could be interpreted as a lack of information about the measurement function such that the different models \(\mathcal {M}_{q}\) correspond to \(p\left (y_{\mathbbm {i},k}|\boldsymbol {x}_{\mathbbm {i},k} \boldsymbol {x}_{\mathbbm {j},k}^{(q)}\right)\), where \(\left \{\boldsymbol {x}_{\mathbbm {j},k}^{(q)}\right \}_{q=1,\ldots,Q}\) is a finite collection of samples of possible agent states. However, this interpretation, and thus also the methods presented in both works, is inappropriate because it would require each agent model to change at every time step. Moreover, an enormous collection of samples (*q*=1,…,*Q*) would be needed to cover all possible agent states, which would result in high computational complexity.

The literature on related but different scenarios, in which the number of targets or agents to be tracked is unknown, offers a rich variety of solution approaches, which are often based on random finite sets (RFSs) (see, e.g., [18] for a review). In contrast to our scenario, where the number of, e.g., rescue robots deployed in a zone is known a priori, RFS scenarios include the tracking of unknown or hostile aircraft using radar [19], pedestrian tracking [20], active speaker tracking [21], and extended object tracking, in which multiple objects generate an unknown number of reflections that cannot be assigned a priori to the different targets [22]. In our scenario, however, the number of agents is fixed and known, and thus, there is no need to consider RFS theory or the additional overhead associated with the estimation of the number of targets. In the scenario considered in this work, the demand for many agents, their cooperative nature, and the resulting agent-to-agent measurements are the predominant factors necessitating the schemes presented in this work.

### 1.2 Contribution

To overcome the nonseparability of the likelihood, a new method is proposed in this work that does not rely on Taylor approximation and, thus, on the computation of gradients or Hessians. Such a method is of particular interest in the case of nondifferentiable measurement functions. Instead of relying on derivatives, the proposed method uses an approach that exploits Monte Carlo integration, in which the filter output from the previous time step is exploited to obtain likelihood information for the individual filters. This method is henceforth referred to as Monte Carlo approximation (MCA).

Moreover, to enable this scheme to be compared with state-of-the-art methods, the concept presented in [14] is generalized to support multidimensional states. Additionally, it is generalized to multiplicative noise scenarios, which are commonly encountered in the context of distance-based localization, as such scenarios reflect the property that measurements between agents separated by farther distances are subject to stronger noise [23, 24, 25, 26]. This generalized algorithm is henceforth referred to as Gaussian approximation (GA).

In the presented simulations, it is shown that the method proposed in this work outperforms all three methods considered for comparison, i.e., PE [8], GA (generalized from [14]), and EGA, the last of which exploits full second-order approximations. Moreover, it is shown that the proposed method achieves higher localization accuracy within the same computing time.

### 1.3 Organization

This work is structured as follows. Section 2 introduces the concept of using MPF for tracking wireless agents. Section 3.1 presents the GA likelihood approximation technique. Sections 3.1.3 and 3.1.4 extend this scheme to present the EGA method. In Section 4, the proposed method is derived and presented. In Section 5, the simulation setup and the performance metric are discussed. In Section 6, numerical results are presented, and all four considered methods are evaluated and compared. Finally, conclusions are drawn in Section 7.

## 2 Problem introduction and system model

This paper considers scenarios in which many wireless agents are localized using pairwise measurements, such as distance and/or bearing measurements. By the nature of the abovementioned scenarios (cf. Section 1), a predefined and known number of agents are employed which are localized offline, after all agents’ readings have been extracted in a fusion center. Consequently, no uncertainty regarding the number of agents is considered. Moreover, access to beacons is assumed to be available but limited, i.e., insufficient for agents to be accurately localized solely by utilizing beacon-to-agent measurements. For this reason, measurements between mobile agents (AAMs) become increasingly relevant.

Because distance and bearing measurements as well as realistic agent motions are nonlinear, PF approaches are considered in this work. Subsequently, the important properties of such approaches are briefly summarized.

In [27], the convergence of Monte Carlo approximation in terms of the mean square error was shown to be of \(\mathcal {O}({L^{-1}})\), where *L* is the number of particles. Moreover, it has been shown that the error is (in theory) independent of the state dimensionality. In practice, however, the dimensionality plays a significant role in determining the performance of PF techniques [3, 28]. For example, [3] reported an exponential relationship between the number of particles and the state dimensions (“curse of dimensionality”). Due to the high state-space dimensionality, classical PF approaches demand unreasonable resources for tracking many agents. Consequently, the concept of multiple particle filtering, in which one filter is used for each agent to be tracked, is considered instead.
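The \(\mathcal {O}(L^{-1})\) mean-square-error rate cited above can be checked empirically. The following sketch uses an arbitrary estimation target (the mean of a standard normal) purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_mse(L, trials=2000):
    """Empirical MSE of the L-sample Monte Carlo mean estimator."""
    # Estimate E[X] for X ~ N(0, 1); the true mean is 0, so the squared
    # estimate equals the squared error.
    estimates = rng.standard_normal((trials, L)).mean(axis=1)
    return np.mean(estimates**2)

# Theory: MSE = Var(X)/L = 1/L, i.e. O(L^{-1}), so quadrupling L
# should divide the empirical MSE by roughly 4.
ratio = mc_mse(100) / mc_mse(400)
```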

A brief summary of the concepts relevant to particle filtering (PF) and multiple particle filtering (MPF) is presented below.

### 2.1 Particle filtering

The considered state-space model is of the form

$$\boldsymbol{x}_{\mathbbm{i},k} = \tilde{f}\left(\boldsymbol{x}_{\mathbbm{i},k-1}, \boldsymbol{v}_{k}\right), \qquad \boldsymbol{y}_{\mathbbm{i},k} = \tilde{h}\left(\boldsymbol{x}_{\mathbbm{i},k}, \boldsymbol{x}_{\mathbbm{j},k}, \boldsymbol{\eta}_{k}\right),$$

where \(\boldsymbol {x}_{\mathbbm {i},k} \in \mathbb {R}^{n_{x}}\) represents the state of agent \(\mathbbm {i}\) at discrete time *k* and \({ \boldsymbol {y}_{\mathbbm {i},k}} \in \mathbb {R}^{n_{y}}\) represents the corresponding measurements of this agent, each of which, by the nature of AAMs, also depends on the state vector of the other agent involved in that measurement, \(\boldsymbol {x}_{\mathbbm {j},k}\) (henceforth, the notation \(\mathbbm {j}\) will be used to denote an agent other than \(\mathbbm {i}\)). Moreover, **v**_{k} and **η**_{k} denote the process noise and measurement noise, respectively. The generally nonlinear functions \(\tilde {f}(\cdot)\) and \(\tilde {h}(\cdot)\) denote the state evolution and measurement models, respectively, with noise included.

For general nonlinear state-space models, these integrals cannot be computed analytically.

The posterior distribution is approximated by a set of *L* weighted particles, denoted by \(\left \{\left (w_{\mathbbm {i},k}^{(\ell)}, \boldsymbol {x}_{\mathbbm {i},k}^{(\ell)}\right)\right \}^{L}_{\ell =1}\), such that

$$p\left(\boldsymbol{x}_{\mathbbm{i},0:k} \middle| \boldsymbol{y}_{\mathbbm{i},1:k}\right) \approx \sum_{\ell=1}^{L} w_{\mathbbm{i},k}^{(\ell)}\, \delta\left(\boldsymbol{x}_{\mathbbm{i},0:k} - \boldsymbol{x}_{\mathbbm{i},0:k}^{(\ell)}\right),$$

where *δ*(·) is the Dirac delta function and *ℓ*∈{1,…,*L*} denotes a particle index. The weights are defined as

$$w_{\mathbbm{i},k}^{(\ell)} \propto \frac{p\left(\boldsymbol{x}_{\mathbbm{i},0:k}^{(\ell)} \middle| \boldsymbol{y}_{\mathbbm{i},1:k}\right)}{\pi\left(\boldsymbol{x}_{\mathbbm{i},0:k}^{(\ell)} \middle| \boldsymbol{y}_{\mathbbm{i},1:k}\right)},$$

where \(\pi (\boldsymbol {x}_{\mathbbm {i},0:k}| \boldsymbol {y}_{\mathbbm {i},1:k})\) denotes a proposal distribution (PD). A PD is used because in most cases, directly sampling from \(p(\boldsymbol {x}_{\mathbbm {i},0:k}| \boldsymbol {y}_{\mathbbm {i},1:k})\) is impossible because it would require solving complex and high-dimensional integrals for which no general analytical solution is known [28].

The performance of a PF depends on the choice of the PD *π*(·) and, as discussed in Section 2, on the number of particles *L*. Regarding the former, it is known that an incremental variance-optimal PD is given by

$$\pi\left(\boldsymbol{x}_{\mathbbm{i},k} \middle| \boldsymbol{x}_{\mathbbm{i},0:k-1}, \boldsymbol{y}_{\mathbbm{i},1:k}\right) = p\left(\boldsymbol{x}_{\mathbbm{i},k} \middle| \boldsymbol{x}_{\mathbbm{i},k-1}, \boldsymbol{y}_{\mathbbm{i},k}\right),$$

Because sampling from this optimal PD is often infeasible, the transition prior \(\pi \left (\boldsymbol {x}_{\mathbbm {i},k} \middle | \boldsymbol {x}_{\mathbbm {i},0:k-1}, \boldsymbol {y}_{\mathbbm {i},1:k}\right) = p\left (\boldsymbol {x}_{\mathbbm {i},k} \middle | \boldsymbol {x}_{\mathbbm {i},k-1}\right)\) is commonly used instead, which further simplifies the PF (cf. (7)).
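With the transition prior as the PD, the weight update of (7) reduces to multiplying each weight by the (approximated) likelihood. A minimal sketch with hypothetical names; `likelihood_fn` stands in for whichever approximation of \(p(\boldsymbol {y}_{\mathbbm {i},k}|\boldsymbol {x}_{\mathbbm {i},k})\) is used:

```python
import numpy as np

def bootstrap_update(weights, particles, y, likelihood_fn):
    """One bootstrap-style weight update: w ∝ w * p(y | x).

    likelihood_fn(y, x) evaluates the (approximated) likelihood
    for a single particle x.
    """
    w = weights * np.array([likelihood_fn(y, x) for x in particles])
    return w / w.sum()  # normalize so the weights form a distribution
```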

### 2.2 Multiple particle filtering

In a classical (single) PF, one filter is used to jointly estimate the states of all *M* agents, making the computation of the likelihood readily available.

By contrast, in MPF, a separate PF is employed for every agent, resulting in the particle representation illustrated in Fig. 1b. The benefits are a reduced state dimensionality on a per-PF basis and the resulting improved convergence properties. However, the separation of the PFs in MPF causes the likelihood, which is required to compute the weight update, for example (cf. (7)), to depend on two particles representing two parallel, decoupled PFs (i.e., the likelihood is nonseparable). In fact, in the context of distance- and/or bearing-based tracking in our scenario, the likelihood is nonseparable for the following two reasons: first, the necessity of considering agent-to-agent measurements, and second, the fact that both types of measurements inherently depend on the states of both agents conducting the measurements and their corresponding filters.

In terms of likelihoods, the use of MPF in the context of distance- and/or bearing-based tracking is associated with the following problem: In MPF, only \( p(\boldsymbol {y}_{\mathbbm {i},k}|\boldsymbol {x}_{\mathbbm {i},k}, \boldsymbol {x}_{\mathbbm {j},k})\) is readily available, where \(\boldsymbol {x}_{\mathbbm {i},k}\) and \(\boldsymbol {x}_{\mathbbm {j},k}\) are described by particles of different PFs. However, for the weight update in each PF (cf. (7) and the weight update step in Algorithm 1), the full likelihood \(p(\boldsymbol {y}_{\mathbbm {i},k}| \boldsymbol {x}_{\mathbbm {i},k})\) is required. Therefore, special procedures are required to approximate the required likelihood. The proposal of new procedures for this task is the main topic of this work.

To this end, the following sections discuss three different techniques for approximating \(p(\boldsymbol {y}_{\mathbbm {i},k}| \boldsymbol {x}_{\mathbbm {i},k})\) for each PF based on the particle information from all parallel PFs \(\{p(\boldsymbol {y}_{\mathbbm {i},k}|\boldsymbol {x}_{\mathbbm {i},k}, \boldsymbol {x}_{\mathbbm {j},k})\}_{\mathbbm {i}}\).

## 3 Gaussian likelihood approximation via Taylor approximation

The general idea presented in [14] is to approximate the sought likelihood \(p(y_{\mathbbm {i},k}| \boldsymbol {x}_{\mathbbm {i},k})\) by a Gaussian distribution, leaving only the mean \(\mathbb {E}[y_{\mathbbm {i},k}| \boldsymbol {x}_{\mathbbm {i},k}]\) and variance \(\mathbb {V}[y_{\mathbbm {i},k}| \boldsymbol {x}_{\mathbbm {i},k}]\) to be computed. In this case, \(\boldsymbol {x}_{\mathbbm {j},k}\) is treated as a random variable, and its moments are deduced from the particles.

Subsequently, the derivations for AWGN-corrupted and multiplicative Gaussian noise-corrupted measurements are given in Sections 3.1.3 and 3.1.4, respectively. Both approximations are needed because the bearing measurements are assumed to be AWGN-corrupted, while the distance measurements are assumed to be corrupted by multiplicative Gaussian noise.

### 3.1 Taylor approximation

where \( \tilde {\boldsymbol {x}}_{\mathbbm {j},k} \equiv \boldsymbol {x}_{\mathbbm {j},k} - \bar {\boldsymbol {x}}_{\mathbbm {j},k} \) and \(\mathbb {E}[\boldsymbol {x}_{\mathbbm {j},k}] \equiv \bar {\boldsymbol {x}}_{\mathbbm {j},k} \). Moreover, let **g**(·) denote the gradient and **H**(·) denote the Hessian, both evaluated at \(\bar {\boldsymbol {x}}_{\mathbbm {j},k}\); these are henceforth denoted simply by **g** and **H**, respectively.

#### 3.1.1 First moment of Taylor approximation

### **Lemma 1**

*Let* \(\boldsymbol {\Sigma } = \text {Cov}[\boldsymbol {x}]\) *and* \(\tilde {\boldsymbol {x}} = \boldsymbol {x} - \mathbb {E}[\boldsymbol {x}]\)*; then,*

$$\mathbb{E}\left[\tilde{\boldsymbol{x}}^{\intercal} \boldsymbol{A}\, \tilde{\boldsymbol{x}}\right] = \text{tr}\left(\boldsymbol{A}\boldsymbol{\Sigma}\right),$$

*where* **A** *is a real square matrix.*

Lemma 1 is applied in step (*†*) below:

where \(\boldsymbol {\Sigma }_{\boldsymbol {x}_{\mathbbm {j},\boldsymbol {k}}} \equiv \text {Cov}\left [\boldsymbol {x}_{\mathbbm {j},k}\right ]\). Consequently, \(\mathbb {E}\left [\tilde {\mathbf {x}}_{\mathbbm {j},k}\right ] = \mathbf {0} \).

### **Lemma 2**

*Let* **x** *be multivariate Gaussian with mean* **μ** *and covariance matrix* **Σ***; then,*

$$\mathbb{V}\left[(\boldsymbol{x}-\boldsymbol{\mu})^{\intercal} \boldsymbol{A}\, (\boldsymbol{x}-\boldsymbol{\mu})\right] = 2\,\text{tr}\left((\boldsymbol{A}\boldsymbol{\Sigma})^{2}\right),$$

*where* \((\boldsymbol {A}\boldsymbol {\Sigma })^{2} = \boldsymbol {A}\boldsymbol {\Sigma }\boldsymbol {A}\boldsymbol {\Sigma }\) *and* **A** *is a real square matrix.*
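Both lemmas are standard moment identities for quadratic forms of random vectors. As a sanity check, they can be verified numerically; the matrices below are chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# A symmetric matrix A (as the Hessian H is) and a covariance Sigma in 2-D.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
Sigma = np.array([[1.0, 0.3], [0.3, 0.5]])

# Draw zero-mean Gaussian samples with covariance Sigma.
x = rng.multivariate_normal(np.zeros(2), Sigma, size=400_000)
q = np.einsum('ni,ij,nj->n', x, A, x)   # quadratic forms x~' A x~

# Lemma 1: E[x~' A x~] = tr(A Sigma)   (any distribution with covariance Sigma)
mean_exact = np.trace(A @ Sigma)
# Lemma 2: V[x~' A x~] = 2 tr((A Sigma)^2)   (Gaussian case, symmetric A)
var_exact = 2.0 * np.trace(A @ Sigma @ A @ Sigma)
```

The empirical mean and variance of `q` should match `mean_exact` and `var_exact` up to Monte Carlo error.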

#### 3.1.2 Second moment of Taylor approximation

In step (*‡*), Lemma 2 is used, and consequently, the assumption of Gaussian states \(\boldsymbol {x}_{\mathbbm {j},k}\) is made. Moreover, under the same assumptions, Appendix A shows that the third-order moment in Eq. 16 vanishes; that is:

#### 3.1.3 Synthesis: Additive white Gaussian noise measurements

#### 3.1.4 Synthesis: Multiplicative Gaussian noise measurements

The variance can again be obtained via (22).

## 4 Monte Carlo-based likelihood approximation

In the following, a Monte Carlo (MC)-based approximation of the sought likelihood is derived. This approximation is believed to be more accurate than the presented GA-based methods because it does not rely on the assumption that the corresponding likelihood can be modeled by a Gaussian distribution. Moreover, our MCA method does not exploit only the first- and second-order statistics of the particles; instead, it uses the complete information carried by the particles.

Step (*¶*) in (26) assumes that the current measurements of agent \(\mathbbm {i}\) and the collection of past measurements of agent \(\mathbbm {j}\) are approximately conditionally independent given the current state of agent \(\mathbbm {i}\). Using the importance sampling description of the previous posterior, \(p\left (\boldsymbol {x}_{\mathbbm {j},k} \left | \boldsymbol {y}_{\mathbbm {j},1:k-1}\right.\right)\) equals:

where the importance sampling representation enters in step (*§*). Thus, the sought likelihood can be written as follows (cf. (26)):

where the notation (*n*|*ℓ*^{′}) is used to illustrate the dependency on the particle *ℓ*^{′}. Finally, the likelihood that is sought is obtained as follows:

Because additional sampling and the corresponding evaluations of the probability density function given in (32) are computationally demanding, a simplification is proposed: instead of sampling new particles \(\boldsymbol {x}_{\mathbbm {j},k}^{(n|\ell ')}\) as per (31), the time update samples, which are sampled from the same probability density function, are reused. With this procedure, an approximation is obtained that can be interpreted as a particular instance of (32) for *n*=1. Therefore, an easily implementable approximation of the sought likelihood is obtained that avoids the computation of gradients and Hessians, which are needed in the GA-based methods. The full bootstrap-like algorithm employing the proposed approximation is described in Algorithm 2, where \(\mathcal {A}_{{y}_{\mathbbm {i}}, \mathbbm {i}}\) denotes the set of agents from which measurements have been recorded by agent \(\mathbbm {i}\) at the current time step.
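The resulting approximation marginalizes agent \(\mathbbm {j}\)'s state over its time-updated particles, weighted by the previous weights. A minimal sketch of this *n*=1 case; the names, measurement function, and noise density are placeholders, not the paper's implementation:

```python
import numpy as np

def mca_likelihood(y, x_i, particles_j_pred, weights_j_prev, meas_fn, noise_pdf):
    """Monte Carlo approximation (MCA) of p(y | x_i), sketched.

    Agent j's state is marginalized using its time-updated particles:
        p(y | x_i) ≈ sum_l w_j^(l) * p(y | x_i, x_j^(l)).
    No gradients or Hessians are needed.
    """
    lik = np.array([noise_pdf(y - meas_fn(x_i, x_j)) for x_j in particles_j_pred])
    return float(np.dot(weights_j_prev, lik))
```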

## 5 Simulation setup and method

This section introduces the chosen simulation setups, including the environment, as well as the state evolution and measurement models.

### 5.1 System model

The state vector of agent \(\mathbbm {i}\) is \(\boldsymbol{x}_{\mathbbm{i},k} = \left[\mathrm{x}_{\mathbbm{i},k},\ \mathrm{y}_{\mathbbm{i},k},\ v_{\mathbbm{i},k},\ \phi_{\mathbbm{i},k},\ \omega_{\mathbbm{i},k}\right]^{\intercal}\), where \(\mathrm {x}_{\mathbbm {i},k}\) and \(\mathrm {y}_{\mathbbm {i},k}\) are the agent’s Cartesian *x* and *y* positions, respectively; \(v_{\mathbbm {i},k} = \sqrt {\dot {\mathrm {x}}_{\mathbbm {i},k}^{2}+ \dot {\mathrm {y}}_{\mathbbm {i},k}^{2}}\) is the agent’s speed; and \(\phi _{\mathbbm {i},k} = \text {atan}(\dot {\mathrm {y}}_{\mathbbm {i},k}/\dot {\mathrm {x}}_{\mathbbm {i},k}) \) and \(\omega _{\mathbbm {i},k} = \dot {\phi }_{\mathbbm {i},k}\) are the agent’s heading angle and turning rate, respectively.

where *T* is the sampling period and \(\sigma _{\dot {v}}\) and \(\sigma _{\dot {\omega }}\) are the standard deviations of the noise processes that correspond to the linear acceleration and angular acceleration, respectively.

Consequently, it is assumed that \(p(\boldsymbol {x}_{\mathbbm {i},k}| \boldsymbol {x}_{\mathbbm {i},k-1}) = \mathcal {N}\,(f(\boldsymbol {x}_{\mathbbm {i},k-1}), \boldsymbol {\Sigma })\), where *f*(·) is the noise-free state evolution model given in Eq. 33.
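Eq. 33 itself is not reproduced in this excerpt. For orientation, a simple Euler-discretized transition consistent with the state vector above might look as follows; this is an illustrative stand-in under assumed dynamics, not the paper's exact model:

```python
import numpy as np

def f(x, T=1.0):
    """A coordinated-turn-style transition for the state [x, y, v, phi, omega].

    Illustrative only: the paper's exact noise-free model (33) is not
    reproduced here.
    """
    px, py, v, phi, omega = x
    return np.array([
        px + T * v * np.cos(phi),   # advance position along the heading
        py + T * v * np.sin(phi),
        v,                          # speed held constant by the noise-free model
        phi + T * omega,            # heading rotates at the turn rate
        omega,
    ])
```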

Distance and bearing measurements of the form

$$y_{d,k} = d_{\mathbbm{i}\mathbbm{j},k} \left(1 + \eta_{d,k}\right), \qquad y_{b,k} = \text{atan2}\left(\mathrm{y}_{\mathbbm{j},k} - \mathrm{y}_{\mathbbm{i},k},\ \mathrm{x}_{\mathbbm{j},k} - \mathrm{x}_{\mathbbm{i},k}\right) + \eta_{b,k}$$

are considered, where \(d_{\mathbbm{i}\mathbbm{j},k}\) denotes the true distance between agents \(\mathbbm{i}\) and \(\mathbbm{j}\) and \(\boldsymbol {\eta }_{k}=[\eta _{d,k}, \eta _{b,k}]^{\intercal } \sim \mathcal {N}(\boldsymbol {0}, \boldsymbol {\Gamma })\), with \(\boldsymbol {\Gamma } = \text {diag}(\sigma _{d}^{2}, \sigma _{b}^{2})\), where *σ*_{d} and *σ*_{b} are the standard deviations of the noise in the distance and the bearing measurements, respectively. The multiplicative model for the distance measurements accommodates the observation that distance measurements made with respect to farther agents are less accurate.
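The two measurement channels translate directly into code. A sketch assuming the functional forms implied above (multiplicative Gaussian noise on the distance, additive Gaussian noise on the bearing); the function name and argument layout are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

def measure(p_i, p_j, sigma_d=0.04, sigma_b=np.deg2rad(5.0)):
    """Noisy distance and bearing from agent i to agent j (assumed forms).

    Distance noise is multiplicative, so the absolute error grows with
    the true distance; bearing noise is additive Gaussian.
    """
    diff = p_j - p_i
    d = np.linalg.norm(diff)
    y_d = d * (1.0 + sigma_d * rng.standard_normal())   # multiplicative noise
    y_b = np.arctan2(diff[1], diff[0]) + sigma_b * rng.standard_normal()
    return y_d, y_b
```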

To obtain the results presented in the following section, the sampling period was set to *T*=1 s, and the process noise parameters were set to \(\sigma _{\dot {v}}=3.16 \times 10^{-3}\) m/s^{2}, \(\sigma _{\dot {\omega }}=3.16 \times 10^{-3}\) rad/s^{2}, and *σ*_{x}=*σ*_{y}=0.30 m.

### 5.2 Simulation setup I

- For agent 1:$${\begin{aligned} \left[ \omega_{1}(kT) \right]_{k} &= \left[\boldsymbol{0}_{15},\ \Pi_{\frac{1}{2}}^{10},\ -\Pi_{\frac{1}{2}}^{10},\ \boldsymbol{0}_{10},\ -\Pi_{\frac{1}{2}}^{12},\ \Pi_{\frac{1}{2}}^{22},\ \boldsymbol{0}_{9} \right]\\ \left[ v_{1}(kT) \right]_{k} &= \left[ 0.15\cdot\boldsymbol{1}_{5},\ 0.075\cdot \boldsymbol{1}_{83} \right] \end{aligned}} $$
- For agent 2:$$\begin{array}{*{20}l} \left[ \omega_{2}(kT) \right]_{k} &= \left[\boldsymbol{0}_{17},\ \Pi_{\frac{3}{2}}^{20},\ \boldsymbol{0}_{10},\ -\Pi_{\frac{3}{2}}^{20},\ \boldsymbol{0}_{21} \right]\\ \left[ v_{2}(kT) \right]_{k} &= \left[ 0.2\cdot\boldsymbol{1}_{88} \right] \end{array} $$
- For agent 3:$$\begin{array}{*{20}l} \left[ \omega_{3}(kT) \right]_{k} &= \left[\boldsymbol{0}_{22},\ \Pi_{\frac{7}{4}}^{20},\ \boldsymbol{0}_{10},\ -\Pi_{\frac{7}{4}}^{20},\ \boldsymbol{0}_{16} \right]\\ \left[ v_{3}(kT) \right]_{k} &= \left[ 0.2\cdot\boldsymbol{1}_{5},\ 0.15\cdot \boldsymbol{1}_{83} \right] \end{array} $$
- For agent 4:$${\begin{aligned} \left[ \omega_{4}(kT) \right]_{k} &= \left[\boldsymbol{0}_{15},\ -\Pi_{\frac{1}{2}}^{22},\ \Pi_{\frac{1}{2}}^{12},\ \boldsymbol{0}_{10},\ \Pi_{\frac{1}{2}}^{10},\ -\Pi_{\frac{1}{2}}^{10},\ \boldsymbol{0}_{9} \right]\\ \left[ v_{4}(kT) \right]_{k} &= \left[ 0.15\cdot\boldsymbol{1}_{5},\ 0.075\cdot \boldsymbol{1}_{83} \right] \end{aligned}} $$

Here, **0**_{n} and **1**_{n} denote a sequence of *n* zeros and a sequence of *n* ones, respectively. Moreover, \(\Pi _{a}^{b}\) denotes the sequence that yields an *a*·*π* turn in *b* steps.
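Under this reading of the notation, \(\Pi _{a}^{b}\) corresponds to a constant turn-rate command repeated *b* times. A small helper; the constant-rate interpretation is an assumption made here for illustration:

```python
import numpy as np

def Pi(a, b, T=1.0):
    """Sequence of b constant turn-rate commands yielding an a*pi turn.

    Assumed interpretation: a total heading change of a*pi accumulated
    over b steps of duration T.
    """
    return np.full(b, a * np.pi / (b * T))
```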

Overview of the evaluated algorithms

Abbr. | Proposed in | Discussed in | Comment |
---|---|---|---|
PE | [8] | — | Eliminates the dependency of the likelihood on the other agent’s state by means of the point estimate \(\boldsymbol {\hat {x}}_{\mathbbm {j},k} = \sum _{\ell =1}^{L} w_{\mathbbm {j},k-1}^{(\ell)} \boldsymbol {x}_{\mathbbm {j},k}^{(\ell)}\) |
GA | [14] (basic form) | Section 3.1 (generalized form) | Approximates the likelihood as a Gaussian density using first- and second-order terms for the variance and mean, respectively. Treats \(\boldsymbol {x}_{\mathbbm {j},k}\) as a random variable in the likelihood. |
EGA | This work | Sections 3.1.3 and 3.1.4 | Extension of GA to a complete second-order approx. for both additive and multiplicative noise. |
MCA | This work | Section 4 | MC-based likelihood approx. |

The setup is chosen for the following reasons: First, the agents follow predefined trajectories that, apart from the introduced abrupt changes with respect to velocity and turn rate, are fully consistent with the chosen motion model (33). Consequently, any tracking inaccuracy resulting from model mismatch, which will inherently arise in any real-world scenario, is strongly limited. This, in turn, facilitates a more in-depth analysis and fairer comparison of the different likelihood approximation methods, which target only the measurement function (cf. (2a)). Second, despite the use of predefined trajectories, the maneuvers are complex and challenging due to the variation in their motion profiles, which include a broad set of different turns and acceleration phases. Thus, adequate likelihood approximation schemes are required to enable accurate localization.

The reported results below are related to the following two measurement noise scenarios (MNSs): *MNS 1:* Multiplicative measurement noise with a standard deviation of *σ*_{d}=0.04 and bearing measurement noise with a standard deviation of *σ*_{b}=5^{∘} and *MNS 2:* *σ*_{d}=0.06,*σ*_{b}=10^{∘}.

### 5.3 Simulation setup II

The LBM parameters used for the CFD simulation are listed in Table 3. Details such as the boundary method implemented for the D2Q9 LBM are given in [34], where this method is reported to be accurate to approximately second order.

D2Q9 LBM simulation parameters

Parameter | Value |
---|---|
LBM relaxation time | 0.515 |
LBM speed | 0.05 |
Max. LBM iterations | 1×10 |
LBM convergence threshold | 1×10 |
Viscosity [m²/s] | 0.89×10 |
Density [kg/m³] | 997.05 |

These simulation results are thus consistent with the application scenarios discussed in Section 1, in which miniature agents may be employed to survey underground structures or pipeline systems, such as those that exist in chemical plants. The chosen setup, however, is designed to challenge the algorithms (cf. Table 2), which is achieved through a combination of fast agent motion and limited beacon coverage.

### 5.4 Evaluation metric

The following time- and agent-averaged localization error metric is used to evaluate the algorithms’ performance: \(\text {MAE} = \frac {1}{M\cdot {}K} \sum _{\mathbbm {i}=1}^{M} \sum _{k=1}^{K} \Vert \boldsymbol {p}_{\mathbbm {i},k} - \hat {\boldsymbol {p}}_{\mathbbm {i},k} \Vert _{2}\), where \(\boldsymbol {p}_{\mathbbm {i},k}\) is the true position of agent \(\mathbbm {i}\) at time step *k* and \(\hat {\boldsymbol {p}}_{\mathbbm {i},k}\) is the corresponding estimate, which refers to the first two states of the corresponding state vector.
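The metric translates directly into code. A sketch with an assumed array layout (agents × time steps × 2-D position):

```python
import numpy as np

def mae(p_true, p_est):
    """Time- and agent-averaged localization error (the MAE metric above).

    p_true, p_est: arrays of shape (M, K, 2) holding the true and
    estimated 2-D positions of M agents over K time steps.
    """
    # Euclidean error per agent and time step, then averaged over both.
    return float(np.mean(np.linalg.norm(p_true - p_est, axis=-1)))
```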

## 6 Numerical results and discussion

The discussion of the simulation results is split into three main parts, with each focusing on different behavior and properties of the methods. This discussion is presented in the following subsections.

For all results evaluating performance as a function of particles per agent (PPA), the 99% confidence intervals (CIs) are shown as well. These are visualized as a transparent band around the average performance curve, in the same line style, and indicate the variation across the individual simulations.

### 6.1 Setup I: Localization error vs. number of particles and run time

Figure 4 shows the localization error as a function of the number of particles *L*. MNS 1 and MNS 2 are represented in the top and middle panels, respectively. Moreover, in Figs. 4 and 5, a third MNS (MNS 3) is presented in which the multiplicative distance measurement noise is increased to *σ*_{d}=0.1 compared to MNS 2.

The left panels of Fig. 4 clearly show that PE exhibits significantly reduced performance compared to the other three methods. The right panels of Fig. 4 focus on the differences between the GA-based algorithms (GA and EGA) and the proposed MCA method. In these panels, the differences between the former two are generally small. However, the performance of MCA is significantly improved compared to that of the other methods. For example, for *L*=1000, an error reduction of 16% compared to EGA is achieved. The middle panels show that in MNS 2, EGA is slightly superior to GA. Most notably, a localization error reduction of 21% is achieved by MCA over EGA. Similarly, for MNS 3, MCA achieves an error reduction of 24%.

Interestingly, the mean absolute error (MAE) of the PE method decreases from 0.46 (MNS 1) to 0.39 (MNS 2) to 0.35 (MNS 3) with increased measurement noise. In contrast, the MAE of the other methods increases at the same time. For example, for MCA, the MAE increases from 0.10 to 0.15 to 0.18 at these measurement noise configurations. Despite this increase in absolute terms, the *relative* improvement of MCA over EGA increases from 16% (MNS 1) to 22% (MNS 2) to 24% (MNS 3), as mentioned above. Importantly, despite the improvement in PE with increasing noise intensity, the absolute performance gains through MCA remain significant. Additionally, CIs show that the performance of PE is the least consistent.

A comparison of performance versus run time for the MNS 1 and MNS 2 results shown in Fig. 4 is presented in Fig. 5. Note that for each algorithm, an increase in computing time is directly linked to an increase in the PPA value. These results show the trade-off between the different algorithms in terms of the achievable MAE performance and the required computing time. Given certain requirements on either performance or computing time, these results aid in selecting algorithms for particular application scenarios. As discussed subsequently, some algorithms are limited in one of these figures of merit, i.e., MAE performance or computing time. Similar to the previous results, the left panels of Fig. 5 show a comparison among all four algorithms, whereas the right panels of Fig. 5 focus on the GA-based methods and MCA. Because the results in Fig. 5 are deduced from the data depicted in Fig. 4, the supports of the curves are different. As such, the left panels of Fig. 5 show low run times for PE because no data are available for run times longer than 10 s. However, these low run times come at the cost of a localization error no better than 0.34 m.

The right panels of Fig. 5 (together with the right panels of Fig. 4) show that MCA not only yields improved localization performance, with a lower localization error than those of all other methods, but also achieves improved performance within the same computing time. For example, in MNS 1, a reduction in the error metric of 14% is achieved for a run time of 14 s. Similarly, for MNS 2 and MNS 3, reductions of 20% and 23% are obtained, respectively.
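The idea underlying an MC-based likelihood approximation for agent-to-agent measurements can be sketched as follows: because a distance measurement depends on the states of two agents tracked by separate filters, the likelihood seen by agent *i*'s filter is approximated by averaging the measurement model over samples drawn from the neighboring filter's particle cloud. The function name, the additive Gaussian distance-noise model, and all parameter values below are illustrative assumptions, not the paper's exact equations.

```python
import numpy as np

def aam_likelihood(z, particles_i, particles_j, weights_j, sigma=0.05, n=64):
    """Approximate p(z | x_i) for a distance measurement between agents i and j
    by Monte Carlo marginalization over agent j's particle cloud.

    particles_i: (Ni, 2) candidate positions of agent i
    particles_j, weights_j: agent j's particles (Nj, 2) and normalized weights (Nj,)
    n: number of MC samples drawn from filter j (illustrative parameter)
    """
    rng = np.random.default_rng(0)
    idx = rng.choice(len(particles_j), size=n, p=weights_j)
    xj = particles_j[idx]                                    # (n, 2) samples of agent j
    # pairwise distances between agent i's particles and the drawn samples
    d = np.linalg.norm(particles_i[:, None, :] - xj[None, :, :], axis=-1)  # (Ni, n)
    # Gaussian distance-noise model (assumed for illustration)
    lik = np.exp(-0.5 * ((z - d) / sigma) ** 2) / (np.sqrt(2.0 * np.pi) * sigma)
    return lik.mean(axis=1)                                  # (Ni,) per-particle likelihood

# Usage: agent i hypothesis at (0,0) matches a measured distance of 5 to agent j
pi = np.array([[0.0, 0.0], [3.0, 4.0]])
pj = np.array([[3.0, 4.0]])
wj = np.array([1.0])
lik = aam_likelihood(5.0, pi, pj, wj)
```

The averaging step replaces the analytically intractable marginalization over the other agent's state; accuracy improves with `n` at linear cost.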

### 6.2 Setup I: Performance over time

Figure 6 visualizes the performance for MNS 2 with 422 PPA, which has been chosen as a compromise between a low PPA (where MCA performs similarly to EGA) and a high PPA (where MCA is able to improve significantly over EGA and the other methods). The estimated positions are overlaid on the agents’ actual trajectories, which are shown in red, and are plotted as markers only, in the same colors as those used in Fig. 2.

It can be observed that the performance of PE (cf. Fig. 6b) deteriorates in the case of maneuvers. The agent trajectories visualized with red and orange markers are particularly strongly affected. GA and EGA (cf. Fig. 6c, d) show improved localization performance compared to that of PE; this improvement is also reflected by the observed reduction of almost 50% in the localization error. However, both of these algorithms also show reduced tracking accuracy during maneuvers (cf. in particular the estimates represented by red markers in the top loop and by green and orange markers at the saddle points of the maneuvers). Figure 6a shows the performance of MCA, which encounters only minor issues with tracking (cf. the estimates represented by red markers in the top loop). This superior performance is also evident from the reduction of 22% in the localization error compared with that of GA.

Figure 7b clearly shows decreasing tracking accuracy for PE, with the error peaking in the transition zone between the coverage areas of the left and right beacons. Figure 7c and d, for GA and EGA, respectively, show significantly improved performance compared with that of PE, albeit with some strong fluctuations (note the differences in scaling for reasons of readability). Figure 7a shows the performance of MCA. At the beginning of tracking, MCA exhibits a significant reduction in error from the initial value. Compared to all other methods, not only a reduced error but also lower fluctuations in the performance over time are apparent.

### 6.3 Setup I: Localization error vs. number of agents

Figure 8 shows that the performance of PE decreases significantly with an increasing number of agents. Similar behavior can be observed for GA and EGA, with the error metric increasing by approximately 11% when four agents need to be localized instead of two. Only MCA exhibits the opposite trend, with the error metric decreasing by approximately 2% in the same case.

Similar trends can be observed in Fig. 9 for increasing measurement noise. Here, the error metric of MCA is reduced by 6.88% when four agents are tracked instead of two.

In both MNSs, this behavior can be explained by the additional agent-to-agent measurements that are provided by the additional agents. In the cases of GA and EGA, for example, these additional measurements result in significant error accumulation because the likelihoods are approximated as Gaussian distributions and, in the weight update (cf. (7)), the product of these likelihoods is taken^{3}. For MCA, however, the approximation error is lower, allowing the additional measurements provided by the extra agents to be successfully exploited.
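The accumulation effect described above arises because the weight update multiplies many approximated likelihoods, so per-factor approximation errors compound. The helper below is a hedged sketch of such a product-form update, computed in the log domain for numerical robustness; it is an illustrative stand-in, not the paper's exact update (7).

```python
import numpy as np

def update_weights(weights, likelihood_list):
    """Fold a list of per-measurement likelihoods into normalized particle weights.

    likelihood_list: iterable of (N,) arrays, one per measurement. Working in
    the log domain avoids underflow when the product of many (possibly sharply
    peaked) likelihoods is taken -- the very situation in which errors from
    Gaussian likelihood approximations accumulate.
    """
    logw = np.log(weights)
    for lik in likelihood_list:
        logw += np.log(np.maximum(lik, 1e-300))  # guard against log(0)
    logw -= logw.max()                           # stabilize before exponentiation
    w = np.exp(logw)
    return w / w.sum()

# Usage: three measurements, one of which strongly penalizes particle 0
lik = np.array([1e-200, 1.0, 1.0, 1.0])
w = update_weights(np.full(4, 0.25), [lik, lik, lik])
```

In the linear domain, the product `(1e-200)**3` would underflow to zero for every particle weight it touches; the log-domain form degrades gracefully instead.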

### 6.4 Setup II: Localization error vs. number of particles and run time

For brevity, the second setup is evaluated only for the following simulation configuration: MNS 2 was used, and the agent and beacon coverage ranges were both *R*_{s}=10 m.

The results are shown for *M*=4, 6, 8, and 10 agents. For low PPA values, GA and EGA outperform MCA. However, GA and EGA do not achieve an MAE lower than 1 m, which can be considered critical depending on the precise application requirements. For more than approximately 200 PPA, MCA achieves the best performance, resulting in MAE reductions of between 71% (for *M*=4) and 59% (for *M*=10) compared to EGA. Interestingly, not only the average MAE performance of MCA improves with increasing PPA: for example, for four agents, the size of the CI for MCA decreases from 0.55 m (200 PPA) to 0.08 m (1500 PPA). Similarly, for ten agents, the CI size decreases from 0.32 m to 0.17 m over the same range.

Similar to the results discussed in Section 6, EGA outperforms GA, although marginally, throughout the simulations.

Given the same computing time, MCA achieves the following MAE reductions: for *M*=4 agents and a run time of 62 s, 63%; for *M*=6 agents and 145 s, 32%; for *M*=8 agents and 254 s, 45%; and for *M*=10 agents and 389 s, 45%.

### 6.5 Setup II: Localization error vs. number of agents

For MCA, both the fluctuations and the absolute MAE values remain low (e.g., for *M*=4). In contrast, not only the fluctuations but also the absolute MAE values for GA and EGA are much higher. For example, for 1500 PPA, the MAE increases by 0.43 m from an initial value of 1 m (GA) and by 0.4 m from an initial value of 0.95 m (EGA).

Figures 13 and 14 show the performance over time for *M*=4 and 8 agents, respectively. For *M*=4 agents (cf. Fig. 13), MCA can accurately track the agents’ fast motion and achieves an average MAE of 0.28 m. In contrast, the other methods quickly lose track of the agents’ motion and cannot achieve an MAE better than 0.95 m.

In the case that 8 agents are to be tracked (cf. Fig. 14), MCA tends to overestimate the turn rates of the agents, resulting in an average MAE of 0.5 m. In contrast, all other methods are, on average, unable to track the agents’ motion at all, resulting in a poor accuracy of 1.28 m at best.

### 6.6 Setup II: Summary of results

In the second simulation setup, a close-to-reality scenario is considered in which up to 10 agents are carried by fluid dynamics alone through a pipe branch. Because the agents’ motion is difficult to describe with a single motion model, the results, as expected, show more fluctuations and are more dependent on the accuracy of the motion model compared to results of the first simulation setup. Nevertheless, even in this scenario, the proposed MCA method outperforms the PE, GA, and EGA methods, achieving MAE reductions of up to 63% when the computational costs for all methods are fixed.

## 7 Conclusion and future work

In this work, a new approximation scheme for individual filter likelihoods for the tracking of wireless agents has been proposed. The proposed MCA scheme has been compared to the existing PE method and to a generalization of the Gaussian approximation concept presented in [14]. Moreover, this generalization (GA) has further been extended to the EGA method for additional comparisons. The simulation results showed clear improvements in localization accuracy when MCA was employed. In particular, a reduction in the localization error of up to 22.81% was achieved for a fixed number of particles. Additionally, it was shown that given the same computing time, MCA achieves a tracking accuracy that is 22.35% higher than that of GA. Moreover, the tracking accuracy of MCA was shown to be more consistent over time, with lower fluctuations in performance. In contrast to the other considered methods, MCA was able to successfully exploit the additional measurements introduced with an increase in the number of agents to be tracked in the synthetic benchmark (first setup). Similarly, for the close-to-reality simulations (second setup), MCA was least affected by an increase in the number of agents. Specifically, MCA achieved MAE reductions of between 58% and 71% compared to EGA.

In summary, the MCA scheme presented in this study offers significant improvements in localization accuracy and/or computing time. The scheme has shown potential for many application cases in which a known number of cooperating agents are to be localized using distance and/or bearing measurements, if access to beacons is limited and/or AAMs are essential for accurate localization. Examples thereof are rescue robots employed for surveying buildings that have collapsed due to earthquakes and miniature agents used for pipeline inspection.

In future works, we will investigate the proposed scheme on a broader scale, including the effect of the MC parameter *n* on performance and run time. Moreover, an in-depth theoretical analysis of the computational complexity of all discussed algorithms is of interest in this context.

## 8 Appendix A: Proof of the third-order moment of a Gaussian approximation component

As mentioned in Section 3, the state \(\tilde {\boldsymbol {x}}_{\mathbbm {j},k}\) is assumed to be Gaussian. For simplicity, the agent and time indices are dropped in the following, such that \(\tilde {\boldsymbol {x}}_{\mathbbm {j},k}=\tilde {\boldsymbol {x}}\) and \(\boldsymbol {x}_{\mathbbm {i},k}=\boldsymbol {x}\). Generally, the *i*-th element of a vector **x** is denoted by [**x**]_{i}, and the (*i*, *j*)-th element of a matrix **X** is denoted by [**X**]_{i, j}. For simplicity, \([\tilde {\boldsymbol {x}}_{\mathbbm {j},k}]_{i}\) is denoted by \(\tilde {x}_{i}\).

With this notation, the expected value that is sought reduces to a sum of higher-order moments of a Gaussian random vector. Subsequently, the link between the higher-order moments and the multivariate Hermite polynomials is exploited in accordance with the following theorem.

### **Theorem 1**

The *ν*_{i} are nonnegative integers, and the Hermite polynomials are defined in terms of the quadratic form \(\boldsymbol {Q}=\boldsymbol {z}^{\intercal } \boldsymbol {A} \boldsymbol {z}\).

Three index cases are distinguished: *i*≠*j*≠*l*, *i*=*j*≠*l*, and *i*=*j*=*l*. For all of them, the following derivative is important:

### 8.1 Case 1: *i*≠*j*≠*l*

### 8.2 Case 2: *i*=*j*≠*l*

### 8.3 Case 3: *i*=*j*=*l*
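The closed-form third-order moments that the three cases above specialize can be checked numerically. The sketch below verifies the standard Gaussian moment identity E[x_i x_j x_l] = μ_i μ_j μ_l + μ_i Σ_{jl} + μ_j Σ_{il} + μ_l Σ_{ij} (third central moments of a Gaussian vanish) against Monte Carlo sampling for all three index patterns; the numerical values are chosen arbitrarily for illustration.

```python
import numpy as np

def third_moment_closed_form(mu, Sigma, i, j, l):
    """E[x_i x_j x_l] for x ~ N(mu, Sigma) via the standard moment identity."""
    return (mu[i] * mu[j] * mu[l]
            + mu[i] * Sigma[j, l] + mu[j] * Sigma[i, l] + mu[l] * Sigma[i, j])

rng = np.random.default_rng(1)
mu = np.array([0.5, -0.2, 1.0])
A = rng.standard_normal((3, 3))
Sigma = A @ A.T + 0.5 * np.eye(3)          # random positive-definite covariance
x = rng.multivariate_normal(mu, Sigma, size=1_000_000)

# one representative index triple per case: i!=j!=l, i=j!=l, i=j=l
results = {}
for (i, j, l) in [(0, 1, 2), (0, 0, 2), (1, 1, 1)]:
    mc = (x[:, i] * x[:, j] * x[:, l]).mean()         # Monte Carlo estimate
    cf = third_moment_closed_form(mu, Sigma, i, j, l)  # closed form
    results[(i, j, l)] = (mc, cf)
```

The Monte Carlo estimates agree with the closed form up to sampling error, which shrinks as the sample size grows.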


### Acknowledgments

We gratefully acknowledge the computational resources provided by the RWTH Compute Cluster from RWTH Aachen University under project RWTH0118.

### Authors’ contributions

SS performed the simulations and wrote the majority of the manuscript. GA initiated the research and also commented on and approved the manuscript. Both authors read and approved the final manuscript.

### Authors’ information

All authors are with the Chair for Integrated Signal Processing Systems, RWTH Aachen University, Germany. Email addresses: schlupkothen@ice.rwth-aachen.de, ascheid@ice.rwth-aachen.de

### Funding

This project has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No. 665347.

### Competing interests

The authors declare that they have no competing interests.

## References

1. S. Schlupkothen, G. Dartmann, G. Ascheid, A novel low-complexity numerical localization method for dynamic wireless sensor networks. IEEE Trans. Signal Process. **63**(15), 4102–4114 (2015). https://doi.org/10.1109/TSP.2015.2422685
2. S. Schlupkothen, A. Hallawa, G. Ascheid, Evolutionary algorithm optimized centralized offline localization and mapping, in *2018 International Conference on Computing, Networking and Communications (ICNC): Wireless Ad Hoc and Sensor Networks (ICNC’18 WAHS)* (IEEE, Maui, 2018).
3. F. Daum, J. Huang, Curse of dimensionality and particle filters, in *2003 IEEE Aerospace Conference Proceedings (Cat. No.03TH8652)*, vol. 4 (2003), pp. 1979–1993. https://doi.org/10.1109/AERO.2003.1235126
4. F. Gustafsson, F. Gunnarsson, N. Bergman, U. Forssell, J. Jansson, R. Karlsson, P. J. Nordlund, Particle filters for positioning, navigation, and tracking. IEEE Trans. Signal Process. **50**(2), 425–437 (2002). https://doi.org/10.1109/78.978396
5. D. Chang, M. Fang, Bearing-only maneuvering mobile tracking with nonlinear filtering algorithms in wireless sensor networks. IEEE Syst. J. **8**(1), 160–170 (2014). https://doi.org/10.1109/JSYST.2013.2260641
6. P. Yang, Efficient particle filter algorithm for ultrasonic sensor-based 2D range-only simultaneous localisation and mapping application. IET Wirel. Sens. Syst. **2**(4), 394–401 (2012). https://doi.org/10.1049/iet-wss.2011.0129
7. B. F. Wu, C. L. Jen, Particle-filter-based radio localization for mobile robots in the environments with low-density WLAN APs. IEEE Trans. Ind. Electron. **61**(12), 6860–6870 (2014). https://doi.org/10.1109/TIE.2014.2327553
8. P. M. Djuric, T. Lu, M. F. Bugallo, Multiple particle filtering, in *2007 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP ’07)*, vol. 3 (2007), pp. 1181–1184. https://doi.org/10.1109/ICASSP.2007.367053
9. M. F. Bugallo, T. Lu, P. M. Djuric, Target tracking by multiple particle filtering, in *2007 IEEE Aerospace Conference* (2007), pp. 1–7. https://doi.org/10.1109/AERO.2007.353042
10. M. F. Bugallo, P. M. Djurić, Target tracking by symbiotic particle filtering, in *2010 IEEE Aerospace Conference* (2010), pp. 1–7. https://doi.org/10.1109/AERO.2010.5446681
11. P. M. Djurić, M. F. Bugallo, Particle filtering for high-dimensional systems, in *2013 5th IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)* (2013), pp. 352–355. https://doi.org/10.1109/CAMSAP.2013.6714080
12. M. F. Bugallo, P. M. Djurić, Gaussian particle filtering in high-dimensional systems, in *2014 IEEE Workshop on Statistical Signal Processing (SSP)* (2014), pp. 129–132. https://doi.org/10.1109/SSP.2014.6884592
13. M. F. Bugallo, P. M. Djurić, Particle filtering in high-dimensional systems with Gaussian approximations, in *2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)* (2014), pp. 8013–8017. https://doi.org/10.1109/ICASSP.2014.6855161
14. P. M. Djurić, M. F. Bugallo, Multiple particle filtering with improved efficiency and performance, in *2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)* (2015), pp. 4110–4114. https://doi.org/10.1109/ICASSP.2015.7178744
15. L. Martino, J. Read, V. Elvira, F. Louzada, Cooperative parallel particle filters for online model selection and applications to urban mobility. Digit. Signal Process. **60**, 172–185 (2017). https://doi.org/10.1016/j.dsp.2016.09.011
16. I. Urteaga, M. F. Bugallo, P. M. Djurić, Sequential Monte Carlo methods under model uncertainty, in *2016 IEEE Statistical Signal Processing Workshop (SSP)* (2016), pp. 1–5. https://doi.org/10.1109/SSP.2016.7551747
17. J. A. Hoeting, D. Madigan, A. E. Raftery, C. T. Volinsky, Bayesian model averaging: a tutorial. Stat. Sci. **14**(4), 382–401 (1999).
18. B.-N. Vo, S. Singh, A. Doucet, Sequential Monte Carlo methods for multitarget filtering with random finite sets. IEEE Trans. Aerosp. Electron. Syst. **41**(4), 1224–1245 (2005). https://doi.org/10.1109/TAES.2005.1561884
19. M. B. Guldogan, Consensus Bernoulli filter for distributed detection and tracking using multi-static Doppler shifts. IEEE Signal Process. Lett. **21**(6), 672–676 (2014). https://doi.org/10.1109/LSP.2014.2313177
20. S. Reuter, K. Dietmayer, Pedestrian tracking using random finite sets, in *14th International Conference on Information Fusion* (2011), pp. 1–8.
21. W.-K. Ma, B.-N. Vo, S. S. Singh, A. Baddeley, Tracking an unknown time-varying number of speakers using TDOA measurements: a random finite set approach. IEEE Trans. Signal Process. **54**(9), 3291–3304 (2006). https://doi.org/10.1109/TSP.2006.877658
22. A. Eryildirim, M. B. Guldogan, A Bernoulli filter for extended target tracking using random matrices in a UWB sensor network. IEEE Sens. J. **16**(11), 4362–4373 (2016). https://doi.org/10.1109/JSEN.2016.2544807
23. P. Biswas, Y. Ye, Semidefinite programming for ad hoc wireless sensor network localization, in *Third International Symposium on Information Processing in Sensor Networks (IPSN 2004)* (2004), pp. 46–54. https://doi.org/10.1109/IPSN.2004.1307322
24. P. Biswas, T.-C. Liang, K.-C. Toh, Y. Ye, T.-C. Wang, Semidefinite programming approaches for sensor network localization with noisy distance measurements. IEEE Trans. Autom. Sci. Eng. **3**(4), 360–371 (2006). https://doi.org/10.1109/TASE.2006.877401
25. P. Biswas, T.-C. Lian, T.-C. Wang, Y. Ye, Semidefinite programming based algorithms for sensor network localization. ACM Trans. Sens. Netw. **2**(2), 188–220 (2006). https://doi.org/10.1145/1149283.1149286
26. S. Schlupkothen, G. Ascheid, Joint localization and transmit-ambiguity resolution for ultra-low energy wireless sensors, in *2016 IEEE International Conference on Wireless for Space and Extreme Environments (WiSEE’16)* (2016). https://doi.org/10.1109/wisee.2016.7877302
27. D. Crisan, A. Doucet, A survey of convergence results on particle filtering methods for practitioners. IEEE Trans. Signal Process. **50**(3), 736–746 (2002). https://doi.org/10.1109/78.984773
28. S. Särkkä, *Bayesian Filtering and Smoothing* (Cambridge University Press, New York, 2013).
29. A. Doucet, A. Smith, N. de Freitas, N. Gordon, *Sequential Monte Carlo Methods in Practice*, ser. Information Science and Statistics (Springer, 2001). https://doi.org/10.1007/978-1-4757-3437-9
30. A. Doucet, S. Godsill, C. Andrieu, On sequential Monte Carlo sampling methods for Bayesian filtering. Stat. Comput. **10**(3), 197–208 (2000). https://doi.org/10.1023/A:1008935410038
31. A. M. Mathai, S. B. Provost, *Quadratic Forms in Random Variables*, ser. Statistics: A Series of Textbooks and Monographs (Taylor & Francis, 1992).
32. A. C. Rencher, G. B. Schaalje, *Linear Models in Statistics* (Wiley, 2008).
33. X. R. Li, V. P. Jilkov, Survey of maneuvering target tracking. Part I. Dynamic models. IEEE Trans. Aerosp. Electron. Syst. **39**(4), 1333–1364 (2003). https://doi.org/10.1109/TAES.2003.1261132
34. Q. Zou, X. He, On pressure and velocity boundary conditions for the lattice Boltzmann BGK model. Phys. Fluids **9**(6), 1591–1598 (1997). https://doi.org/10.1063/1.869307
35. C. S. Withers, The moments of the multivariate normal. Bull. Aust. Math. Soc. **32**(1), 103–107 (1985). https://doi.org/10.1017/S000497270000976X

## Copyright information

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.