# Noisy Euclidean distance matrix completion with a single missing node


## Abstract

We present several solution techniques for the noisy single source localization problem, i.e. the Euclidean distance matrix completion problem with a single missing node to locate under noisy data. For the case that the sensor locations are fixed, we show that this problem is implicitly convex, and we provide a purification algorithm along with the SDP relaxation to solve it *efficiently* and *accurately*. For the case that the sensor locations are relaxed, we study a model based on facial reduction. We present several approaches to solve this problem efficiently, and we compare their performance with existing techniques in the literature. Our tools are *semidefinite programming*, *Euclidean distance matrices*, *facial reduction*, and the *generalized trust region subproblem*. We include extensive numerical tests.

## Keywords

Single source localization · Noise · Euclidean distance matrix completion · Semidefinite programming · Wireless communication · Facial reduction · Generalized trust region subproblem

## Mathematics Subject Classification

90C22 · 15A83 · 90C20 · 62P30

## 1 Introduction

In this paper we consider the noisy, *single source localization problem*. The objective is to locate the source of a signal that is detected by a set of sensors with exactly known locations. Distances between sensors and source are given, but contaminated with noise. For instance, in an application to cellular networks, the source of the signal is a cellular phone and the cellular towers are the sensors. Our data is the, possibly noisy, distance measurements from each sensor to the source.

The single source localization problem has applications in e.g. navigation, structural engineering, and emergency response [3, 4, 8, 24, 26, 38]. In general, it is related to distance geometry problems where the input consists of Euclidean distance measurements and a set of points in Euclidean space. The *sensor network localization problem* is a generalization of our single source problem, where there are multiple sources and only some of the distance estimates are known. The general *Euclidean distance matrix completion problem* is yet a further generalization, where sensors do not have specified locations and only partial, possibly noisy, distance information is available, e.g. [2, 13, 15]. We refer the readers to the books [1, 5, 9, 10] and survey article [27] for background and applications, and to the paper [18] for algorithmic comparisons. For the related *nearest Euclidean distance matrix* (**NEDM**) problem we refer the readers to the papers [30, 31], where a semismooth Newton approach and a rank majorization approach are presented. The more general weighted **NEDM** is a much harder problem though. For theory that relates **NEDM** to semidefinite programming, see e.g. [12, 25].

A common approach to solving an instance of the single source localization problem is a modification of the least squares problem, referred to as the *squared least squares* (**SLS**) problem. We consider two equivalent formulations of **SLS**: the *generalized trust region subproblem* (**GTRS**) formulation; and the *nearest Euclidean distance matrix with fixed sensors* (**NEDMF**) formulation. We show that every extreme point of the semidefinite relaxation of **GTRS** may be easily transformed into a solution of **GTRS**, and thus a solution of the **SLS** problem.

We also introduce and analyze several relaxations of the **NEDMF** formulation. These utilize semidefinite programming, *facial reduction*, and *parametric optimization*. We provide theoretical evidence that, generally, the solutions to these relaxations may be easily transformed into solutions of **SLS**. We also provide empirical evidence that the solutions to these relaxations may give better predictions for the location of the source.

### 1.1 Outline

In Sect. 1.2 we establish our notation and introduce background concepts. In Sect. 2.1 we prove strong duality for the **GTRS** formulation of **SLS**, and in Sect. 2.2 we derive the semidefinite relaxation (**SDR**) and prove that it is tight. We also show that the extreme points of the optimal set of **SDR** correspond exactly to the optimizers of **SLS**. A *purification* algorithm for obtaining the extreme points is presented in Sect. 2.2.1. In Sect. 3 we introduce the **NEDM** formulation as well as several relaxations. We analyze the theoretical properties of the relaxations and present algorithms for solving them. The results of numerical comparisons of the algorithms are presented in Sect. 4.

### 1.2 Preliminaries

For notation and background on **SDP** and facial geometry, see e.g. [17]. We denote by \({{\mathcal {S}}^n}\) the space of \(n\times n\) real symmetric matrices endowed with the *trace inner product* and corresponding *Frobenius norm*, \(\Vert \cdot \Vert _F\). For a convex set *C*, a convex subset \(f\subseteq C\) is a *face of* *C* if, whenever \(x,y\in C\) and \(z\in (x,y)\) (the open line segment between *x* and *y*) satisfy \(z\in f\), we have \(x,y\in f\).

The *cone of positive semidefinite matrices* is denoted by \({{\mathcal {S}}^n_+\,}\) and its interior is the *cone of positive definite matrices*, \({{\mathcal {S}}^n_{++}\,}\). The positive semidefinite cone is pointed, closed and convex. Moreover, the cone \({{\mathcal {S}}^n_+\,}\) induces a partial order on \({{\mathcal {S}}^n}\), that is \(Y\succeq X\) if \(Y - X \in {{\mathcal {S}}^n_+\,}\) and \(Y\succ X\) if \(Y-X \in {{\mathcal {S}}^n_{++}\,}\). Every face of \({{\mathcal {S}}^n_+\,}\) is characterized by the range or nullspace of the matrices in its relative interior, equivalently, by matrices of maximum rank. For \(S\subseteq {{\mathcal {S}}^n_+\,}\), we denote by \({{\,\mathrm{face}\,}}(S)\) the *minimal face of* *S*, the smallest face of \({{\mathcal {S}}^n_+\,}\) that contains *S*. Let \(X\in {{\mathcal {S}}^n_+\,}\) have rank *r* with orthogonal spectral decomposition \(X = \begin{bmatrix} U&V \end{bmatrix} \begin{bmatrix} \Lambda &0\\ 0&0 \end{bmatrix} \begin{bmatrix} U&V \end{bmatrix}^T\), where \(\Lambda \in {{\mathcal {S}}} ^r_{++}\) is diagonal. Then \(VV^T\) is an *exposing vector* for \({{\,\mathrm{face}\,}}(X)\).

Sometimes it is helpful to vectorize a symmetric matrix. Let \({{\,\mathrm{{svec}}\,}}: {{\mathcal {S}}^n}\rightarrow \mathbb {R}^{n(n+1)/2}\) map the upper triangular elements of a symmetric matrix to a vector, and let \({{\,\mathrm{{sMat}}\,}}= {{\,\mathrm{{svec}}\,}}^{-1}\).
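The vectorization pair can be sketched in a few lines. The \(\sqrt{2}\) scaling of the off-diagonal entries below is one common convention, chosen so that \({{\,\mathrm{{svec}}\,}}\) preserves the trace inner product; the text does not fix a scaling, so this is an assumption:

```python
import numpy as np

def svec(X):
    """Stack the upper-triangular part of a symmetric matrix into a vector.
    Off-diagonal entries are scaled by sqrt(2) so that the trace inner
    product equals the dot product of the vectorizations (assumed convention)."""
    n = X.shape[0]
    i, j = np.triu_indices(n)
    v = X[i, j].astype(float)
    v[i != j] *= np.sqrt(2.0)
    return v

def sMat(v):
    """Inverse of svec: rebuild the symmetric matrix from its vectorization."""
    # len(v) = n(n+1)/2  =>  n = (-1 + sqrt(1 + 8 len(v))) / 2
    n = int(round((-1 + np.sqrt(1 + 8 * len(v))) / 2))
    X = np.zeros((n, n))
    i, j = np.triu_indices(n)
    w = v.astype(float).copy()
    w[i != j] /= np.sqrt(2.0)
    X[i, j] = w
    X[j, i] = w
    return X

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)); A = (A + A.T) / 2   # a random symmetric matrix
```

With this convention, `svec(A) @ svec(B)` equals `trace(A @ B)` for symmetric `A`, `B`, so norms and inner products carry over unchanged.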

The *centered subspace of* \({{\mathcal {S}}^n}\), *denoted* \({{\mathcal {S}}}_C \), is defined as \({{\mathcal {S}}}_C = \{ X \in {{\mathcal {S}}^n}: Xe = 0\}\), where *e* is *the vector of all ones*. The *hollow subspace of* \({{\mathcal {S}}^n}\), *denoted* \({{\mathcal {S}}}_H\), is \({{\mathcal {S}}}_H = \{X \in {{\mathcal {S}}^n}: {{\,\mathrm{{diag}}\,}}(X) = 0\}\). A matrix \(D\in {{\mathcal {S}}^n}\) is a *Euclidean distance matrix* (**EDM**) *if there exists an integer* *r* and points \(x^1,\cdots ,x^n \in \mathbb {R}^r\) such that \(D_{ij} = \Vert x^i - x^j\Vert ^2\) for all *i*, *j*. The set of **EDM**s, denoted \({{{\mathcal {E}}}^n} \), forms a closed, convex cone with \({{{\mathcal {E}}}^n} \subset {{\mathcal {S}}}_H\).

**EDM**s are characterized by a face of the positive semidefinite cone. We state the result in terms of the Lindenstrauss mapping, \({{\,\mathrm{{{\mathcal {K}}} }\,}}: {{\mathcal {S}}^n}\rightarrow {{\mathcal {S}}^n}\), \({{\,\mathrm{{{\mathcal {K}}} }\,}}(X) := {{\,\mathrm{{diag}}\,}}(X)e^T + e{{\,\mathrm{{diag}}\,}}(X)^T - 2X\), with *adjoint* \({{\,\mathrm{{{\mathcal {K}}} }\,}}^*(D) = 2({{\,\mathrm{{Diag}}\,}}(De) - D)\) and *Moore-Penrose pseudoinverse* \({{\,\mathrm{{{\mathcal {K}}} }\,}}^{\dag }(D) = -\frac{1}{2}J{{\,\mathrm{offDiag}\,}}(D)J\), where \(J := I - \frac{1}{n}ee^T\) and \({{\,\mathrm{offDiag}\,}}(D)\) zeroes the diagonal of *D*, i.e. it is the orthogonal projection onto \({{\mathcal {S}}}_H\). The range of \({{\,\mathrm{{{\mathcal {K}}} }\,}}\) is exactly \({{\mathcal {S}}}_H\) and the range of \({{\,\mathrm{{{\mathcal {K}}} }\,}}^{\dag }\) is the subspace \({{\mathcal {S}}}_C \). Moreover, \({{\,\mathrm{{{\mathcal {K}}} }\,}}({{\mathcal {S}}^n_+\,}) = {{{\mathcal {E}}}^n} \) and \({{\,\mathrm{{{\mathcal {K}}} }\,}}\) is an isomorphism between \({{\mathcal {S}}}_C \) and \({{\mathcal {S}}}_H\).

If \(D \in {{{\mathcal {E}}}^n} \) and \({{\,\mathrm{{{\mathcal {K}}} }\,}}^{\dag }(D)\) has rank *r* with full column rank factorization \(PP^T\), then the rows of *P* correspond to the points in \(\mathbb {R}^r\) with pairwise distances corresponding to the elements of *D*. For more details, see e.g. [2, 11, 12, 21, 22].

## 2 **SDP** Formulation

We begin this section by formulating the **SLS** problem using the model and notation of [3]. We let *n* denote the number of sensors, \(p^1,\cdots ,p^n \in \mathbb {R}^r\) denote their locations, and *r* the *embedding dimension*.

### Assumption 2.1

- 1.
\(n \ge r+1\);

- 2.
\(\mathrm{int\,}{{\,\mathrm{{conv}}\,}}(p^1,\cdots , p^n) \ne \emptyset \);

- 3.
\(\sum _{i=1}^n p^i = 0\).

The first two items in Assumption 2.1 ensure that a signal can be uniquely recovered if we have accurate distance measurements. If the towers are positioned in a proper affine subspace of \(\mathbb {R}^r\), and the signal is not contained within this affine subspace, then there are multiple possible locations for the signal with the given distance measurements. We assume that such poor designs are avoided in our applications. The third assumption is made so that the sensors are *centered* about the origin. This property leads to a cleaner exposition in the **NEDM** relaxations of Sect. 3.

Let \(d_i\) denote the (noisy) distance measurement between the source \(x\in \mathbb {R}^r\) and the *i*th sensor. The **SLS** problem is then
$$\begin{aligned} (\mathbf{SLS }) \qquad \min _{x\in \mathbb {R}^r} \sum _{i=1}^n \left( \Vert x - p^i\Vert ^2 - d_i^2\right) ^2. \end{aligned}$$(2.2)

### 2.1 **GTRS**

**GTRS**is an optimization problem where the objective is a quadratic and there is a single two-sided quadratic constraint [29, 33]. Note that this class of problems also includes equality constraints. If we expand the squared norm term in

**SLS**and substitute using \(\Vert x\Vert ^2 = \alpha \) as in [3], we get the equivalent problem

**GTRS**.
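The reduction can be made concrete with a minimal numerical sketch of the **GTRS** approach in the easy case. It builds *A*, *b*, \({\tilde{I}}\), \({\tilde{b}}\) from the expansion \(\Vert x-p^i\Vert ^2 - d_i^2 = \alpha - 2(p^i)^Tx - (d_i^2 - \Vert p^i\Vert ^2)\) and bisects on the multiplier \(\lambda \); this is our own illustrative implementation, not the algorithm of [3] or [29], and it ignores the hard case:

```python
import numpy as np

def solve_gtrs(P, d):
    """Solve SLS via the GTRS reformulation (easy case only): rows of A
    are [-2 p_i^T, 1], b_i = d_i^2 - ||p_i||^2, and the constraint
    y^T I~ y + 2 b~^T y = 0 encodes alpha = ||x||^2."""
    n, r = P.shape
    A = np.hstack([-2.0 * P, np.ones((n, 1))])
    b = d**2 - np.sum(P**2, axis=1)
    I_t = np.zeros((r + 1, r + 1)); I_t[:r, :r] = np.eye(r)
    b_t = np.zeros(r + 1); b_t[r] = -0.5
    AtA, Atb = A.T @ A, A.T @ b

    def y_of(lam):      # stationarity: (A^T A + lam I~) y = A^T b - lam b~
        return np.linalg.solve(AtA + lam * I_t, Atb - lam * b_t)

    def phi(lam):       # constraint value, strictly decreasing in lam
        y = y_of(lam)
        return y @ I_t @ y + 2.0 * b_t @ y

    def is_pd(lam):
        try:
            np.linalg.cholesky(AtA + lam * I_t); return True
        except np.linalg.LinAlgError:
            return False

    lo, hi = 0.0, 1.0
    if phi(0.0) >= 0.0:
        while phi(hi) > 0.0: hi *= 2.0          # bracket on the right
    else:                                        # root at a negative multiplier
        hi = 0.0
        t_good = 1.0
        while not is_pd(-t_good): t_good /= 2.0
        t_bad = 2.0 * t_good
        while is_pd(-t_bad): t_bad *= 2.0
        while phi(-t_good) < 0.0:               # push toward the PD boundary
            t_mid = 0.5 * (t_good + t_bad)
            if is_pd(-t_mid): t_good = t_mid
            else: t_bad = t_mid
        lo = -t_good
    for _ in range(200):                         # bisection on phi
        mid = 0.5 * (lo + hi)
        if phi(mid) > 0.0: lo = mid
        else: hi = mid
    return y_of(0.5 * (lo + hi))[:r]            # estimated source location

# Centered sensors in R^2 with exact distances: recovery is exact.
P = np.array([[1., 0.], [0., 1.], [-1., 0.], [0., -1.], [0., 0.]])
x0 = np.array([0.3, -0.2])
d = np.linalg.norm(P - x0, axis=1)
x_est = solve_gtrs(P, d)
```

With noisy distances the same routine returns the **SLS** minimizer as long as the optimal \(A^TA + \lambda ^*{\tilde{I}}\) is nonsingular, i.e. outside the hard case discussed below.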

### Theorem 2.2

Consider **SLS** in (2.2) and the equivalent form given in (2.3). Then:

- 1.
The problem \(\mathbf{SLS }\,\) is equivalent to
$$\begin{aligned} (\mathbf{GTRS }) \qquad p_{\mathbf{SLS }}^*=\min \{ ||Ay - b ||^2 : y^T{\tilde{I}}y + 2{\tilde{b}}^Ty = 0, \ y\in \mathbb {R}^{r+1} \}. \end{aligned}$$(2.5)

- 2.
The rank of *A* is \(r+1\) and the optimal value of **GTRS** is finite and attained.

- 3.
Strong duality holds for **GTRS**, i.e. **GTRS** and its Lagrangian dual have a zero duality gap and the dual value is attained:
$$\begin{aligned} p_{\mathbf{SLS }}^* = d_{\mathbf{SLS }}^*:= \max _\lambda \min _y \{ ||Ay - b ||^2 +\lambda ( y^T{\tilde{I}}y + 2{\tilde{b}}^Ty) \}. \end{aligned}$$(2.6)

### Proof

The problem **SLS** can be rewritten in the form **GTRS**; this follows immediately using the substitution \(y = (x^T,\alpha )^T\). For the second claim, note that by Assumption 2.1, Item 2, \({{\,\mathrm{{rank}}\,}}(P_T)=r\). Therefore, \((P_T)^Te=0\) implies that *A* has full column rank. Since *A* has full column rank, we conclude that \(A^TA\) is positive definite, and therefore the objective of **GTRS** is strictly convex and coercive. Moreover, the constraint set is closed and thus the optimal value of **GTRS** is finite and attained, as desired.

Strong duality for **GTRS** follows from [29], since this is a *generalized trust region subproblem*. We now prove this for our special case. Note that \(d_{\mathbf{SLS }}^*\) may be rewritten through the chain of equalities (2.7)–(2.10), where the inner minimization is over *y*.

We have confirmed the third equality. Now (2.8) is a convex quadratic optimization problem where the Slater constraint qualification holds. This implies that strong duality holds, i.e. we get (2.9) with attainment for some \(\lambda \ge 0\). Now if \(\lambda < 0\) in (2.9) then the Hessian of the objective is indefinite (by construction of \(\gamma \)) and the optimal value of the inner minimization problem is \(-\infty \). Thus since (2.9) is maximized with respect to \(\lambda \) in the outer optimization problem, we may remove the non-negativity constraint and obtain (2.10). The remaining lines are due to the definition of the Lagrangian dual and weak duality. Strong duality follows immediately. \(\square \)

The above Theorem 2.2 shows that even though **SLS** is a non-convex problem, it can be formulated as an instance of **GTRS** and satisfies strong duality. Therefore it can be solved efficiently using, for instance, the algorithm of [29]. Moreover, in the subsequent results we show that **SLS** is equivalent to its *semidefinite programming* (**SDP**) relaxation in (2.15), a convex optimization problem.

We contrast our **SDP** approach with the approach used by Beck et al. [3]. In their approach they have to solve the following system obtained from the optimality conditions of **GTRS**: \((A^TA + \lambda {\tilde{I}})y = A^Tb - \lambda {\tilde{b}}\), with \(\lambda \) chosen so that the constraint in (2.5) holds. The so-called *hard case* results in \(A^T A + \lambda ^* {\tilde{I}}\) being singular for the optimal \(\lambda ^*\) and this can cause numerical difficulties. We note that in our **SDP** relaxation, we need not differentiate between the *'hard case'* and *'easy case'*.

### 2.2 The semidefinite relaxation, **SDR**

In this section we derive the semidefinite relaxation of **SLS**. We analyze the dual and the **SDP** relaxation of **GTRS**. By homogenizing the quadratic objective and constraint and using the fact that strong duality holds for the standard trust region subproblem [34], we obtain an equivalent formulation of the Lagrangian dual of **GTRS** as an **SDP**. We first define \(\bar{A} := \begin{bmatrix} A&-b \end{bmatrix} ^T \begin{bmatrix} A&-b \end{bmatrix} \in {{\mathcal {S}}} ^{r+2}\), so that the Lagrangian dual of **GTRS** may be obtained as an **SDP**; taking the dual of this dual **SDP** yields the corresponding primal SDP problem, e.g. [39], which we refer to as **SDR**. We define the map \(\rho : \mathbb {R}^{r+1} \rightarrow {{\mathcal {S}}} ^{r+2}\) as \(\rho (y) := \begin{pmatrix} y\\ 1 \end{pmatrix} \begin{pmatrix} y\\ 1 \end{pmatrix} ^T\).

### Lemma 2.3

The map \(\rho \) is an isomorphism between the feasible sets of **GTRS** and **SDR**. Moreover, the objective value is preserved under \(\rho \), i.e. \(||Ay-b||^2 = \langle \bar{A}, \rho (y) \rangle \).

### Theorem 2.4

- 1.
The optimal values of **GTRS**, **SDR**, and (2.14) are all equal, finite, and attained.

- 2.
The matrix \(X^*\) is an extreme point of \(\Omega \) if, and only if, \(X^* = \rho (y^*)\) for some minimizer, \(y^*\), of **GTRS**.

### Proof

Since **SDR** is a relaxation of **GTRS**, we get \(p_{\mathbf{SDR }\,\,}^*\le p_{\mathbf{SLS }}^*\), and by Theorem 2.2 the optimal values of **GTRS** and (2.14) are attained. To see that the optimal value of **SDR** is attained, it suffices to show that (2.14) has a Slater point. Indeed, the feasible set of (2.14) consists of all \(\mu , s \in \mathbb {R}\) satisfying the associated linear matrix inequality, and a strictly feasible point is obtained by choosing *s* so that \(b^Tb - s\) is sufficiently large.

Now suppose *Z* is optimal for **SDR** and *X* and *Y* are feasible for **SDR** with \(Z = \frac{1}{2}(X+Y)\); then *X* and *Y* are also optimal, so the optimal set \(\Omega \) is a face of the feasible set \(\mathcal {F}\).

Since \(\Omega \) is a compact convex set it has an extreme point, say \(X^*\). Now \(X^*\) is also an extreme point of \(\mathcal {F}\), as the relation *face of* is transitive, i.e. *a face of a face is a face*. Moreover, since there are exactly two equality constraints in **SDR**, by Theorem 2.1 of [28], we have \({{\,\mathrm{{rank}}\,}}(X^*)(1 + {{\,\mathrm{{rank}}\,}}(X^*))/2 \le 2\). This inequality is satisfied if, and only if, \({{\,\mathrm{{rank}}\,}}(X^*) = 1\). Equivalently, \(X^* = \rho (y^*)\) for some \(y^* \in \mathbb {R}^{r+1}\). Now, by Lemma 2.3 and the first part of this proof we have that \(y^*\) is a minimizer of **GTRS**.

For the converse, let \(y^*\) be a minimizer of **GTRS**. Then by Lemma 2.3, \(X^*:= \rho (y^*)\) is optimal for **SDR**. To see that \(X^*\) is an extreme point of \(\Omega \), let \(Y,Z \in \Omega \) be such that \(X^* = \frac{1}{2}(Y+Z)\). Since \(X^*\) has rank one and \(Y,Z \succeq 0\), the ranges of *Y* and *Z* are contained in the range of \(X^*\), and hence *Y* and *Z* are non-negative multiples of \(X^*\). But by feasibility, \(X^*_{r+2,r+2} = Y_{r+2,r+2} = Z_{r+2,r+2}\) and thus \(Y=Z=X^*\). So, by definition, \(X^*\) is an extreme point of \(\Omega \), as desired. \(\square \)

We have shown that the optimal value of **SLS** may be obtained by solving the *nice* convex problem **SDR**. Moreover, every extreme point of the optimal face of **SDR** can easily be transformed into an optimal solution of **SLS**. However, **SDR** is usually solved using an interior point method that is guaranteed to converge to a relative interior solution of \(\Omega \). In general, such a solution may not have rank 1. In the following corollary of Theorem 2.4 we address those instances for which the solution of **SDR** is readily transformed into a solution of **SLS**. For other instances, we present an algorithmic approach in Sect. 2.2.1.

### Corollary 2.5

- 1.
If **GTRS** has a unique minimizer, say \(y^*\), then the optimal set of **SDR** is the singleton \(\rho (y^*)\).

- 2.
If the optimal set of **SDR** is a singleton, say \(X^*\), then \({{\,\mathrm{{rank}}\,}}(X^*) = 1\) and \(\rho ^{-1}(X^*)\) is the unique minimizer of **GTRS**.

### Proof

Let \(y^*\) be the unique minimizer of **GTRS** . By Theorem 2.4 we know that \(\rho (y^*)\) is an extreme point of \(\Omega \). Now suppose, for the sake of contradiction, that there exists \(X \ne \rho (y^*)\) in \(\Omega \). Since \(\Omega \) is a compact convex set it is the convex hull of its extreme points. Thus there exists an extreme point of \(\Omega \), say *Y*, that is distinct from \(\rho (y^*)\). By Theorem 2.4, we know that \(\rho ^{-1}(Y)\) is a minimizer of **GTRS** and by Lemma 2.3, \(\rho ^{-1}(Y) \ne y^*\), contradicting the uniqueness of \(y^*\).

For the converse, let \(X^*\) be the unique minimizer of **SDR**. Then \(X^*\) is the only extreme point of \(\Omega \) and consequently \(\rho ^{-1}(X^*)\) is the unique minimizer of **GTRS**, as desired. \(\square \)

#### 2.2.1 A purification algorithm

Suppose the optimal solution of (2.15) is \(\bar{X}\) with optimal value \(p_{\mathbf{SDR }\,\,}^*= \langle \bar{A},\bar{X}\rangle \) and \({{\,\mathrm{{rank}}\,}}(\bar{X}) = \bar{r}\) where \(\bar{r}>1\). Note that we can not obtain an optimal solution of **GTRS** from \(\bar{X}\) since the rank is too large. However, in this section we construct an algorithm that returns an extreme point of \(\Omega \) which, by Theorem 2.4, is easily transformed into an optimal solution of **GTRS**. We note that this does *not* require the extreme point to be an *exposed* extreme point.

We now state the purification procedure, Algorithm 2.1, for **SDR**.

### Lemma 2.6

### Proof

### Theorem 2.7

Let \(\bar{X} \in {{\mathcal {S}}} ^{r+2}_+\) be an optimal solution to **SDR** . If \(\bar{X}\) is an input to Algorithm 2.1, then the algorithm terminates with at most \({{\,\mathrm{{rank}}\,}}(\bar{X}) - 1\le r+1\) calls to the while loop and the output, \(X^*\), is a rank 1 optimal solution of **SDR**.

### Proof

We remark that in many of our numerical tests the rank of \(\bar{X}\) was 2 or 3. Consequently, the purification process did not require many iterations.
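A generic rank-reduction (purification) step can be sketched as follows; this is an illustrative stand-in in the spirit of the discussion above, not the paper's Algorithm 2.1. Given an optimal \(\bar{X} = QQ^T\), it searches for a symmetric direction \(\Delta \) whose inner products with all constraint and objective matrices vanish on the face, and steps to the boundary of the semidefinite cone, reducing the rank; the toy constraint matrices `mats` are hypothetical:

```python
import numpy as np

def purify(X, mats, tol=1e-9):
    """Reduce the rank of a PSD matrix X while preserving <M, X> for every
    M in `mats` (constraints and objective); stops at an extreme point."""
    while True:
        w, U = np.linalg.eigh(X)
        keep = w > tol
        if keep.sum() <= 1:
            return X
        Q = U[:, keep] * np.sqrt(w[keep])          # X = Q Q^T
        k = Q.shape[1]
        iu = np.triu_indices(k)
        rows = []
        for M in mats:                              # <Q^T M Q, Delta> = 0
            Mr = Q.T @ M @ Q
            Mr = (Mr + Mr.T) / 2
            row = Mr[iu].copy()
            row[iu[0] != iu[1]] *= 2.0              # count both triangles
            rows.append(row)
        Amat = np.array(rows)
        _, s, Vt = np.linalg.svd(Amat)
        if Vt.shape[0] - int(np.sum(s > 1e-10)) == 0:
            return X                                # already an extreme point
        v = Vt[-1]                                  # a nullspace direction
        Delta = np.zeros((k, k))
        Delta[iu] = v
        Delta = Delta + Delta.T - np.diag(np.diag(Delta))
        lam = np.linalg.eigvalsh(Delta)[-1]
        if lam <= 1e-12:                            # ensure a positive max eig
            Delta, lam = -Delta, np.linalg.eigvalsh(-Delta)[-1]
        X = Q @ (np.eye(k) - Delta / lam) @ Q.T     # step to the boundary

# Toy instance: trace constraint <I, X> = 2 and objective <diag(0,0,1), X> = 0;
# X0 is optimal but has rank 2, so it is not an extreme point.
mats = [np.eye(3), np.diag([0.0, 0.0, 1.0])]
X0 = np.diag([1.0, 1.0, 0.0])
Xp = purify(X0, mats)
```

Each pass drops the rank by at least one, matching the \({{\,\mathrm{{rank}}\,}}(\bar{X}) - 1\) iteration bound of Theorem 2.7.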

## 3 **EDM** Formulation

We now formulate **SLS** as an **EDM** completion problem. Recall that the exact locations of the sensors (towers) are known, and that the tower-source distances are noisy. The corresponding **EDM** restricted to the towers is denoted \(D_T\) and is defined by \((D_T)_{ij} = \Vert p^i - p^j\Vert ^2\). The partial **EDM** for the sensors and the source, containing the noisy measured distances, is denoted \(D_{T_c}\). An estimate of the source location can then be recovered from a completion via a *Procrustes* problem. The nearest **EDM** problem with fixed sensors is given in (3.1); the **NEDMP** problem in (3.1) is indeed equivalent to \(\mathbf{SLS }\,\).

Alternatively, we can work with the Gram matrix and fix the block of *X* corresponding to the sensors. Taking this approach, we obtain the *nearest Euclidean distance matrix with fixed sensors* (**NEDMF**) problem, (3.2). Here the objective is that of **SLS** (acting on the matrix variable) and the affine constraint restricts *X* to those Gram matrices for which the block corresponding to the sensors has exactly the same distances as \(P_TP_T^T\). That is, the sensor block of a feasible *X* yields the same inter-sensor distances as in **SLS**. Thus every feasible solution of (3.2) corresponds to a feasible solution of **SLS**. The converse is trivially true and we conclude that (3.2) is equivalent to **SLS**, due to the rank constraint. We show in the subsequent sections that the relaxation where the rank and the linear constraints are dropped may be used to solve the problem accurately in a large number of instances.

### 3.1 The relaxed **NEDM** problem

#### 3.1.1 Nearest Euclidean distance matrix formulation

We relax (3.2) by dropping the constraints that fix the sensor block, and instead minimize over all **EDM**s with embedding dimension at most *r*. The motivation behind this relaxation is the assumption that the distance measurements corresponding to the sensors are very accurate. Therefore, any minimizer of **NEDM** will likely have the first *n* points very near the sensors. As we show in the subsequent sections, by introducing weights we can obtain a solution arbitrarily close to that of (3.2).

The challenging constraint in **NEDM** is the rank constraint. A simpler problem is to first solve the unconstrained least squares problem and then to project the solution onto the set of positive semidefinite matrices with rank at most *r*. This is equivalent to solving the inverse nearest **EDM** problem, **NEDMinv**, where the projection is onto the positive semidefinite matrices of rank at most *r*. By the Eckart–Young theorem, this projection is a rank *r* matrix obtained by setting the \(n-r\) smallest eigenvalues (in magnitude) of \({{\,\mathrm{{{\mathcal {K}}} }\,}}^{\dag }(D_{T_c})\) to zero. In the following lemma we show that for sufficiently small noise, the negative eigenvalue is of small magnitude and hence the Eckart–Young rank *r* projection is positive semidefinite. We denote by \(\overline{D} \in {{\mathcal {S}}} ^{n+1}\) the true **EDM** of the sensors and the source.

### Theorem 3.1

The rank of \({{\,\mathrm{{{\mathcal {K}}} }\,}}^{\dagger }(D_{T_c})\) is at most \(r + 2\). Moreover, \({{\,\mathrm{{{\mathcal {K}}} }\,}}^{\dagger }(D_{T_c})\) has at most 1 negative eigenvalue with magnitude bounded above by \(\frac{\sqrt{2}}{2} ||J_{n+1} ||^2 ||\varepsilon ||\).

### Proof

Write \({{\,\mathrm{{{\mathcal {K}}} }\,}}^{\dagger }(D_{T_c})\) as a sum of two terms: the first term is positive semidefinite with at most \(r + 1\) positive eigenvalues (\(-J_{n+1}\overline{D}J_{n+1}\) is a positive semidefinite matrix with rank *r* and \(-J_{n+1}QJ_{n+1}\) is positive semidefinite with rank at most 1); and the second term is negative semidefinite with at most one negative eigenvalue. Using the Cauchy–Schwarz inequality, the magnitude of this negative eigenvalue can be bounded; since *P* is a projection of \(e_{n+1}\xi ^T + \xi e_{n+1}^T\) onto \(-{{\mathcal {S}}^{n+1}_+\,}\), we obtain the stated bound \(\frac{\sqrt{2}}{2} ||J_{n+1} ||^2 ||\varepsilon ||\). \(\square \)

The following corollary follows immediately.

### Corollary 3.2

If \(||\varepsilon ||\) is sufficiently small, the optimal solution of **NEDMinv** is the rank *r* Eckart–Young projection of \({{\,\mathrm{{{\mathcal {K}}} }\,}}^{\dag }(D_{T_c})\).
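Corollary 3.2 can be illustrated numerically; in the noiseless extreme the truncation is exact. This sketch assumes the pseudoinverse formula \({{\,\mathrm{{{\mathcal {K}}} }\,}}^{\dag }(D) = -\frac{1}{2}J{{\,\mathrm{offDiag}\,}}(D)J\):

```python
import numpy as np

def K(X):
    d = np.diag(X)
    return d[:, None] + d[None, :] - 2.0 * X

def K_dagger(D):
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    return -0.5 * J @ (D - np.diag(np.diag(D))) @ J

def nedm_inv(D, r):
    """Eckart-Young rank-r truncation of the Gram matrix K^dagger(D):
    zero all but the r largest-magnitude eigenvalues."""
    B = K_dagger(D)
    w, U = np.linalg.eigh(B)
    idx = np.argsort(np.abs(w))[::-1][:r]
    return (U[:, idx] * w[idx]) @ U[:, idx].T

# Four centered sensors plus one source in R^2; with exact distances the
# truncation returns the true rank-2 Gram matrix, hence the true EDM.
P = np.array([[1., 0.], [0., 1.], [-1., 0.], [0., -1.], [0.3, -0.2]])
P -= P.mean(axis=0)
D = K(P @ P.T)
B_r = nedm_inv(D, 2)
```

With small noise added to the source row of `D`, the same truncation stays positive semidefinite, in line with the eigenvalue bound of Theorem 3.1.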

#### 3.1.2 Weighted, facially reduced **NEDM**

While we have discarded the information pertaining to the locations of the sensors in relaxing the problem (3.2) to the problem **NEDM**, we still make use of the distances between the sensors. Thus, to some extent the locations of the sensors have an implicit effect on the optimal solution of **NEDM** and the approximation **NEDMinv** from the previous section. In this section we take greater advantage of the known distances between the sensors by restricting **NEDM** to a face of \({{\mathcal {S}}^{n+1}_+\,}\) by *facial reduction*.

The semidefinite constraint in **NEDM** may actually be refined to \(X \in {{\,\mathrm{face}\,}}(F_T,{{\mathcal {S}}} ^{n+1}_+)\), which is characterized as follows.

### Lemma 3.3

- 1.
\(\overline{W}_T\overline{W}_T^T\) exposes \({{\,\mathrm{face}\,}}(F_T,{{\mathcal {S}}} _{c,+}^{n+1})\),

- 2.
*W*exposes \({{\,\mathrm{face}\,}}(F_T,{{\mathcal {S}}} ^{n+1}_+)\).

### Proof

This statement is a special case of Theorem 4.13 of [16]. \(\square \)

Through *W* we have a 'nullspace' characterization of \({{\,\mathrm{face}\,}}(F_T,{{\mathcal {S}}} ^{n+1}_+)\). However, the 'range space' characterization is more useful in the context of semidefinite optimization as it leads to dimension reduction, numerical stability, and strong duality. To this end, we consider any \((n+1)\times (r+1)\) matrix *V* such that its columns form a basis for \({{\,\mathrm{null}\,}}(W)\); one such choice is given in (3.10). To verify that the columns of *V* indeed form a basis for \({{\,\mathrm{null}\,}}(W)\), we first observe that \({{\,\mathrm{{rank}}\,}}(V) = r + 1\), and secondly that \(WV = 0\).

We now substitute the matrix variable *X* in **NEDMP** by \(VRV^T\) for \(R\in {{\mathcal {S}}} ^{r+1}_+\). To simplify the notation, we define the composite map \({{\,\mathrm{{{\mathcal {K}}} }\,}}_V := {{\,\mathrm{{{\mathcal {K}}} }\,}}(V\cdot V^T)\). Moreover, we introduce a weight matrix to the objective and obtain the *weighted facially reduced problem*, **FNEDM**. For \(\alpha = 1\) (uniform weights), **FNEDM** reduces to **NEDMP**. On the other hand, when \(\alpha \) is very large, the solution has to satisfy the distance constraints for the sensors more accurately, and in this case **FNEDM** approximates (3.2). In fact, in Theorem 3.9 we prove that the solution to **FNEDM** approaches that of (3.2) as \(\alpha \) increases.

We begin our analysis by proving that \(V_{\alpha }\) is attained.

### Lemma 3.4

Let \(\alpha > 0\). Then:

- 1.
\({{\,\mathrm{null}\,}}(H_{\alpha } \circ {{\,\mathrm{{{\mathcal {K}}} }\,}}_V) = \{0\}\);

- 2.
\(f(R,\alpha )\) is strictly convex and coercive;

- 3.
the problem **FNEDM** admits a minimizer.

### Proof

For Item 1, under the assumption that \(\alpha >0\), we have \(H_{\alpha } \circ {{\,\mathrm{{{\mathcal {K}}} }\,}}_V(R) = 0\) if, and only if, \({{\,\mathrm{{{\mathcal {K}}} }\,}}_V(R) = 0\). Recall that \({{\,\mathrm{{{\mathcal {K}}} }\,}}\) is one-to-one between the centered and hollow subspaces and \({{\,\mathrm{{{\mathcal {K}}} }\,}}(0) = 0\). By construction, \({{\,\mathrm{range}\,}}(V\cdot V^T)\) is a subset of the centered matrices. Hence \(H_{\alpha } \circ {{\,\mathrm{{{\mathcal {K}}} }\,}}_V(R)=0\) if, and only if, \(VRV^T = 0\). Since *V* is full column rank, \(VRV^T = 0\) if, and only if, \(R=0\), as desired.

Now we turn to Item 2. The function \(f(R,\alpha )\) is quadratic with a positive semidefinite second derivative. Moreover, by Item 1, the second derivative is positive definite. Therefore \(f(R,\alpha )\) is strictly convex and coercive.

Finally, the feasible set of **FNEDM** is closed. Combining this observation with coercivity of the objective, from Item 2, we obtain Item 3. \(\square \)

We conclude this subsection by deriving the optimality conditions for the convex relaxation of **FNEDM** , which is obtained by dropping the rank constraint.

### Lemma 3.5

A matrix *R* with \({{\,\mathrm{{rank}}\,}}R \le r\) is optimal for (3.12) if, and only if, \(\langle \nabla f(R), X - R\rangle \ge 0\) for all \(X \in {{{\mathcal {S}}} ^{r+1}_+\,}\).

### Proof

By convexity, *R* is optimal if, and only if, \(\nabla f(R) \in ({{{\mathcal {S}}} ^{r+1}_+\,}- R)^+\), the nonnegative polar cone. This condition holds if, and only if, for all \(X\in {{{\mathcal {S}}} ^{r+1}_+\,}\) and \(\alpha >0\), we have \(\langle \nabla f(R), X - R\rangle \ge 0\). \(\square \)

#### 3.1.3 Analysis of **FNEDM**

In this section we show that the optimal value of **FNEDM** is a lower bound for the optimal value of **SLS**. Moreover, this lower bound becomes exact as \(\alpha \) is increased to \(+\infty \).

In the \(\mathbf{SLS }\) model, the distances between the towers are fixed, while in the \(\mathbf{NEDM }\,\) model (3.4), the distances between towers are free. The facial reduction model allows the distances between the towers to change but the towers can still be transformed back to their original positions by a square matrix \(Q \in \mathbb {R}^{r \times r}\). Note that *Q* does not have to be orthonormal, so it is possible that \(QQ^T \ne I\).

### Theorem 3.6

Let *V* be as in (3.10), and let *P* be a centered matrix whose first *n* rows, denoted *T*, satisfy \(J_nT = P_TQ\) for some \(Q \in \mathbb {R}^{r\times r}\). Then \(PP^T \in V{{\mathcal {S}}} ^{r+1}_+V^T\), and conversely.

### Proof

Since *P* is centered, \(J_{n+1}P = P\). It suffices to show that if *T* denotes the first *n* rows of *M*, then \(P_TQ = J_nT\). To this end, let \(\bar{J} = [J_n \quad 0]\) and observe that \(\bar{J} P = J_n T\). Moreover, since \(\bar{J}\) is centered, \(\bar{J} J_{n+1} = \bar{J}\), and the claim follows.

Theorem 3.6 indicates that when using the facial reduction model **FNEDM** we can use a least square approach to exactly get back the original positions of the sensors. This approach will be discussed in Sect. 3.2 along with the Procrustes approach.

In the following, we show that the optimal value of the problem in (3.12) is not greater than the optimal value of the \(\mathbf{SLS }\,\) formulations (2.2) and (3.2). We also prove that the solution to **FNEDM** approaches that of (3.2) as \(\alpha \) increases.

### Lemma 3.7

The optimal value \(V_T\) of (3.14) is finite and \(V_S = V_T\).

### Proof

That \(V_T\) is finite, follows from arguments analogous to those used in Lemma 3.4.

For the equality claim, it is clear that \(V_S \le V_T\). To show that \(V_S \ge V_T\), consider *X* that is feasible for (3.2). First we show that *X* may be assumed to be centered. To see this, consider \(\hat{X} = J_nXJ_n\). Note that \(\hat{X}\) is the orthogonal projection of *X* onto \({{\mathcal {S}}} _c\) and it can be verified that \(\hat{X} = {{\,\mathrm{{{\mathcal {K}}} }\,}}^{\dagger } {{\,\mathrm{{{\mathcal {K}}} }\,}}(X)\). Now it is clear that \(\hat{X} \succeq 0\) and that \({{\,\mathrm{{{\mathcal {K}}} }\,}}(\hat{X}) = {{\,\mathrm{{{\mathcal {K}}} }\,}}(X)\). Moreover, since \(J_n\) is singular we have, \({{\,\mathrm{{rank}}\,}}(\hat{X}) \le {{\,\mathrm{{rank}}\,}}(X)\). Therefore, \(\hat{X}\) is also feasible for (3.2) and provides the same objective value as *X*.

Hence there exists *Q* such that \(J_nT = P_TQ\). By Theorem 3.6 we have \(X \in V {{\mathcal {S}}} _+^{r+1} V^T\) and it follows that \(V_S \ge V_T\). \(\square \)

### Lemma 3.8

The optimal value \(V_T\) is positive if \(D_{T_c}\) is not an **EDM** with embedding dimension *r*.

### Proof

The objective of **FNEDM** is positive for all feasible *R* if \(D_{T_c}\) is not an **EDM** with embedding dimension *r*. \(\square \)

### Theorem 3.9

For any \(\alpha > 0\), let \(R_{\alpha }\) denote the minimizer of **FNEDM** . Let \(\{\alpha _{\ell }\}_{\ell \in \mathbb {N}} \subset \mathbb {R}_{++}\) be a sequence of increasing numbers such that \(R_{\alpha _{\ell }} \rightarrow \bar{R}\) for some \(\bar{R} \in {{\mathcal {S}}} ^{r+1}\). Then \(V_{\alpha } \uparrow V_T\) and \(\bar{R}\) is a minimizer of (3.14).

### Proof

Since the limit in (3.16) exists, the result follows. \(\square \)

#### 3.1.4 Solving **FNEDM**

In this section we present an approach to solving **FNEDM** based on the unconstrained least squares minimizer \(R_{LS}\) of *f* and the eigenvalues of \(R_{LS}\). In general the Moore–Penrose inverse may be difficult to obtain; however, the following result implies that \(R_{LS}\) may be derived efficiently and it is the *unique* minimizer of *f*.

### Lemma 3.10

The matrix \(R_{LS}\) is the unique minimizer of *f* and \(R_{LS} = (H_{\alpha } \circ {{\,\mathrm{{{\mathcal {K}}} }\,}}_V)^{\dagger }(H_{\alpha } \circ D_{T_c})\).

### Proof

That \(R_{LS}\) is the unique minimizer of *f* follows from strict convexity as in Item 2 of Lemma 3.4. Moreover, by Item 1 of Lemma 3.4, we have \({{\,\mathrm{null}\,}}(H_{\alpha } \circ {{\,\mathrm{{{\mathcal {K}}} }\,}}_V) = \{0\}\) which implies that \((H_{\alpha } \circ {{\,\mathrm{{{\mathcal {K}}} }\,}}_V)^{\dagger }\) is the left inverse. The desired expression for \(R_{LS}\) is obtained by substituting the left inverse into (3.19). \(\square \)

Note that \((H_{\alpha } \circ {{\,\mathrm{{{\mathcal {K}}} }\,}}_V)^*(H_{\alpha } \circ {{\,\mathrm{{{\mathcal {K}}} }\,}}_V)\) admits an \(r\times r\) matrix representation. Thus if *r* is small, as in many applications, the inverse of \((H_{\alpha } \circ {{\,\mathrm{{{\mathcal {K}}} }\,}}_V)^*(H_{\alpha } \circ {{\,\mathrm{{{\mathcal {K}}} }\,}}_V)\), and consequently \(R_{LS}\), may be obtained efficiently.
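As a sketch of this remark: any linear map into a matrix space admits a small matrix representation over the domain, after which \(R_{LS}\) comes from the normal equations. Below, a congruence map \(R \mapsto VRV^T\) is a hypothetical stand-in for \(H_{\alpha } \circ {{\,\mathrm{{{\mathcal {K}}} }\,}}_V\), and the dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n, r = 6, 2
V = rng.standard_normal((n + 1, r + 1))      # full column rank (generic)

# Matrix representation of the injective linear map L(R) = V R V^T:
# vec(V R V^T) = (V kron V) vec(R).
L = np.kron(V, V)

R0 = rng.standard_normal((r + 1, r + 1)); R0 = (R0 + R0.T) / 2
B = V @ R0 @ V.T                             # a target in the range of L

# Least squares / normal equations in the small (r+1)^2-dimensional space.
vecR, *_ = np.linalg.lstsq(L, B.reshape(-1), rcond=None)
R_ls = vecR.reshape(r + 1, r + 1)
R_ls = (R_ls + R_ls.T) / 2                   # symmetrize
```

Because the normal-equation system lives in the \((r+1)\)-dimensional matrix space rather than the \((n+1)\)-dimensional one, the cost is governed by *r*, which is the point of the remark.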

The eigenvalues of \(R_{LS}\) determine three cases for solving **FNEDM**:

- **Case I**: \(R_{LS} \succeq 0 \) and \({{\,\mathrm{{rank}}\,}}(R_{LS}) \le r\).
- **Case II**: \(R_{LS} \notin {{\mathcal {S}}} ^{r+1}_+\).
- **Case III**: \(R_{LS} \succ 0\).

In **Case I**, we have that \(R_{LS}\) is the unique minimizer of **FNEDM**; in this case **FNEDM** reduces to an unconstrained convex optimization problem, and we have a closed form solution for the minimizer, \(R_{LS}\). In **Case II**, the minimizer of **FNEDM** may also be obtained through a convex relaxation, as indicated by the following result.

### Theorem 3.11

Let \(R^{\star }\) denote the minimizer of the relaxation of **FNEDM** where the rank constraint is removed. If \(R_{LS} \notin {{\mathcal {S}}} ^{r+1}_+\), then \(R^{\star }\) is a minimizer of **FNEDM** .

### Proof

Let \(R^{\star }\) denote the optimal solution of \(\mathbf{FNEDM }\,\) without the rank constraint. Note that \(R^{\star }\) exists by arguments analogous to those in Lemma 3.4. If \({{\,\mathrm{{rank}}\,}}(R^{\star }) \le r\), then clearly \(R^{\star }\) is a minimizer of **FNEDM** . Thus we may assume that \(R^{\star } \succ 0\).

Since \(R_{LS}\) is the unique minimizer of *f*, we have \(f(R_{LS}) < f(R^{\star })\). Moreover, by strict convexity of *f*, every matrix *R* in the relative interior of the line segment \([R_{LS},R^{\star }]\) satisfies \(f(R) < f(R^{\star })\). Now since \(R^{\star } \succ 0\) there exists \(\bar{R} \in {{\,\mathrm{{relint}}\,}}[R_{LS},R^{\star }] \cap {{\mathcal {S}}} ^{r+1}_+\). Then, \(\bar{R}\) is feasible for the relaxation of **FNEDM** where the rank constraint is removed. However, \(f(\bar{R}) < f(R^{\star })\), contradicting the optimality of \(R^{\star }\). \(\square \)

For **Case III** we are motivated by the primal-dual approach of [30, 31] and the penalty approach of [19, 30, 31]. Let \(h = [1,\cdots , \alpha ]^T\) and note that \(H_{\alpha }\circ Y = h h^T \circ Y = {{\,\mathrm{{Diag}}\,}}(h) Y {{\,\mathrm{{Diag}}\,}}(h)\) if \({{\,\mathrm{{diag}}\,}}(Y) = 0\). Letting \(T = {{\,\mathrm{{Diag}}\,}}(h)\), it is easy to see that (3.12) is equivalent to the problem:

### Lemma 3.12

### Proof

Let \(\Pi (X)\) denote the projection of *X* onto \(\mathcal {K}_T^{n+1}(r)\). Since \(\mathcal {K}_T^{n+1}(r)\) is a cone, the ray \(\{\theta \Pi (X) : \theta \ge 0\}\) is contained in the set \(\mathcal {K}_T^{n+1}(r)\). Moreover this ray is convex and \(\Pi (X)\) is the nearest point to *X* from this ray. Now we can use orthogonality: \(\Pi (X) - X\) is orthogonal to \(\Pi (X)-0\), and the triangle inequality follows. \(\square \)

In [30, 31] it is shown that the Lagrangian dual has compact level sets and therefore the optimal value is finite and attained. The dual problem (3.28) can be solved by the semi-smooth Newton approach proposed in [30].

In [30, 31], the authors propose a rank majorization approach in which strong duality is guaranteed if the penalty function goes to zero. The approach can be readily modified to replace the diagonal constraint by the linear constraint \({{\mathcal {B}}\,}\) and to include the diagonal weight matrix \(T\). The strong duality result and global optimality condition also carry over to our problem (3.21). The drawback of this approach is slow convergence when \(n\) is large. Therefore, in our facial reduction model we prefer to stay in \({{\mathcal {S}}} ^{r+1}\) rather than \({{\mathcal {S}}} ^{n+1}\), since the dimension is lower. Hence we develop a rank majorization approach in \({{\mathcal {S}}} ^{r+1}\) in the following:

The function \(p\) is non-negative over the positive semidefinite matrices and vanishes precisely when \({{\,\mathrm{{rank}}\,}}(R) \le r\); thus \(p\) is an appropriate penalty function for the rank constraint of **FNEDM**. Now we consider the penalized version of **FNEDM**, denoted **PNEDM**.

The *majorization* approach guarantees convergence to a matrix satisfying the first-order necessary conditions for **PNEDM**, i.e. a stationary point; see for instance [35, 36].

Since \(p\) is concave, it is majorized by its linear approximation. In the algorithm, \(\partial p(R)\) denotes the subdifferential of \(p\) at \(R\). Thus at every iterate, the convex subproblem (3.31) is solved to obtain the next iterate.
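A minimal numerical sketch of such a rank-majorization scheme follows. It assumes a common concrete choice of penalty, \(p(R) = {{\,\mathrm{{trace}}\,}}(R) - \sum _{i=1}^r \lambda _i(R)\), and, unlike the paper's subproblem (3.31), uses an unconstrained subproblem with a closed-form positive semidefinite projection; it is an illustration of the majorization idea, not the paper's algorithm:

```python
import numpy as np

def psd_project(M):
    """Project a symmetric matrix onto the positive semidefinite cone."""
    w, V = np.linalg.eigh(M)
    return (V * np.clip(w, 0.0, None)) @ V.T

def mm_rank_penalty(M, r, gamma=1000.0, iters=50):
    """Majorization-minimization for  min ||R - M||_F^2 + gamma * p(R)  over R >= 0,
    with the concave penalty p(R) = trace(R) - (sum of r largest eigenvalues).
    At each step p is majorized by its linearization <I - V1 V1^T, R> + const,
    and the resulting convex subproblem is a shifted PSD projection."""
    R = psd_project(M)
    for _ in range(iters):
        w, V = np.linalg.eigh(R)        # eigenvalues in ascending order
        V1 = V[:, -r:]                  # eigenvectors of the r largest eigenvalues
        G = np.eye(len(M)) - V1 @ V1.T  # a subgradient of p at R
        R = psd_project(M - 0.5 * gamma * G)
    return R

# Toy instance: the penalty drives the iterates toward rank at most r.
rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))
M = A + A.T
R = mm_rank_penalty(M, r=2)
print(np.linalg.matrix_rank(R, tol=1e-6))  # at most 2
```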

### Theorem 3.13

Suppose Algorithm 3.1 converges to a stationary point \(\bar{R}\), and that \({{\,\mathrm{{rank}}\,}}(\bar{R}) = r\). Then \(\bar{R}\) is a global minimizer of **FNEDM** restricted to \({{\,\mathrm{face}\,}}(\bar{R})\).

### Proof

*Z*and \(V^T V = I\). Let \(V = [V_1, V_2]\) with the columns of \(V_1\) being the eigenvectors corresponding to \(\lambda _1,\ldots ,\lambda _r\). We have

#### 3.1.5 Identifying outliers using \(l_1\) minimization and facial reduction

In this section, we address the issue of unequal noise, where a few distance measurements are outliers, i.e. much more inaccurate than the others. We use \(l_1\) norm minimization to try to identify the outliers and remove them, obtaining a more stable problem. We assume that many more towers are available than necessary, so that the removal of a few outliers leaves us with towers that still satisfy Assumption 2.1.

**SDP** cone. Let \(z := {{\,\mathrm{{svec}}\,}}(R)\) for \(R \in {{\mathcal {S}}} ^{r+1}\). Abusing our previous notation, let \(b := {{\,\mathrm{{svec}}\,}}(H_{\alpha } \circ D_{T_c})\) and let \(A\) denote the matrix representation of \(H_{\alpha }\circ {{\,\mathrm{{{\mathcal {K}}} }\,}}_V\). Then \(z \in \mathbb {R}^{(r+1)(r+2)/2}\) and \(b \in \mathbb {R}^{n(n+1)/2}\). In practice, \(n\) is much larger than \(r + 1\), so \(A\) has more rows than columns; in other words, the system is overdetermined. Under this new notation, problem (3.12) is equivalent to,

Let \(N\) be a matrix such that \({{\,\mathrm{range}\,}}(A) = {{\,\mathrm{null}\,}}(N)\). Then \(\delta +b = Az\) if, and only if, \(\delta + b \in {{\,\mathrm{null}\,}}(N)\). Therefore the constraint \(Az - b = \delta \) is equivalent to \(N \delta = -Nb\), which is exactly the compressed sensing constraint.
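One concrete way to build such an \(N\) is via the full singular value decomposition, sketched below under the assumption that \(A\) has full column rank:

```python
import numpy as np

rng = np.random.default_rng(3)
m, k = 12, 5                       # overdetermined: m rows, k columns
A = rng.standard_normal((m, k))    # full column rank almost surely

# Build N with range(A) = null(N): its rows span the orthogonal
# complement of range(A), taken from the trailing left singular vectors.
U, s, Vt = np.linalg.svd(A)        # full SVD: U is m x m
N = U[:, k:].T                     # shape (m - k) x m

assert np.allclose(N @ A, 0.0)     # N annihilates range(A)

# A vector is of the form A z  iff  it lies in null(N):
z = rng.standard_normal(k)
assert np.allclose(N @ (A @ z), 0.0)
```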

The problem (3.34) differs from the classical compressed sensing model in the positive semidefinite constraint. However, in our numerical tests, we found that adding the positive semidefinite constraint greatly increases the success rate in identifying outliers. In compressed sensing, if the matrix \(N\) satisfies the so-called *restricted isometry property*, then the sparse signal can be recovered exactly [7, Theorem 1.1]. However, no practical algorithm is currently available to check whether a given matrix satisfies the restricted isometry property. If \(\delta _0\) is the solution to (3.34) and most of its elements are 0, then the non-zero elements indicate the outlier measurements.

The recovery guarantees of [6, 7] are stated for the setting where most elements of \(b\) are exact and a few have large error. Now let us revert to the original assumption of this section: most elements of \(b\) are slightly inaccurate and a few elements are very inaccurate. If the positive semidefinite constraint is ignored, then the identification of outliers is guaranteed to be accurate, assuming that \(N\) satisfies the restricted isometry property. Specifically, if \(\delta ^{\#}\) denotes the optimal solution of (3.34) without the positive semidefinite constraint, then \(||\delta ^{\#} - \delta _0||_{l_2} \le C_S \cdot \epsilon \), where \(C_S\) and \(\epsilon \) are small constants [6, 7]. The specifics of our outlier-detection algorithm are stated in Algorithm 3.2.
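Without the positive semidefinite constraint, the \(\ell _1\) step is a linear program. The following sketch works under that simplification; the function and parameter names are hypothetical and this is not the paper's Algorithm 3.2:

```python
import numpy as np
from scipy.optimize import linprog

def l1_outlier_flags(A, b, threshold=1e-6):
    """Sketch: solve  min ||delta||_1  s.t.  N delta = -N b,
    where range(A) = null(N), via the standard split delta = p - q, p, q >= 0."""
    m, k = A.shape
    N = np.linalg.svd(A)[0][:, k:].T           # rows span range(A)^perp
    c = np.ones(2 * m)                         # objective: sum(p) + sum(q)
    res = linprog(c, A_eq=np.hstack([N, -N]), b_eq=-N @ b, bounds=(0, None))
    delta = res.x[:m] - res.x[m:]
    return np.abs(delta) > threshold           # True marks a suspected outlier

# Toy test: exact data A z plus one gross error in b should be flagged.
rng = np.random.default_rng(4)
A = rng.standard_normal((15, 4))
b = A @ rng.standard_normal(4)
b[7] += 5.0                                    # one planted outlier
print(np.flatnonzero(l1_outlier_flags(A, b)))
```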

### 3.2 Recovering source position from gram matrix

Having obtained an **EDM** from our data, we need to rotate the sensors back to their original positions in order to recover the position of the source. This is done by solving a Procrustes problem. That is, suppose that the, appropriately partitioned, final **EDM**, corresponding Gram matrix and points are,
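The alignment itself can be sketched with the standard SVD-based solution of the orthogonal Procrustes problem; the function name and toy data below are illustrative:

```python
import numpy as np

def procrustes_align(P_moved, P_known):
    """Find rotation Q and translation t minimizing ||P_moved Q + t - P_known||_F,
    via the SVD of the centered cross-covariance (orthogonal Procrustes)."""
    mu_m = P_moved.mean(axis=0)
    mu_k = P_known.mean(axis=0)
    U, _, Vt = np.linalg.svd((P_moved - mu_m).T @ (P_known - mu_k))
    Q = U @ Vt
    t = mu_k - mu_m @ Q
    return Q, t

# Toy check: recover a known rigid motion of 6 points in R^3.
rng = np.random.default_rng(5)
P = rng.standard_normal((6, 3))
theta = 0.7
Q_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
moved = P @ Q_true + np.array([1.0, -2.0, 0.5])
Q, t = procrustes_align(moved, P)
assert np.allclose(moved @ Q + t, P)
```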

## 4 Numerical results

where \(D\) is the generated **EDM** and \(\varepsilon \in U(-\eta ,\eta )\). The outliers are obtained by multiplying (4.1) by an additional factor \(\theta \) for a small subset of the indices.
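A sketch of this test-data generation, assuming the common multiplicative model \(d\,(1+\varepsilon )\) (the display (4.1) is not reproduced above, so the exact form is an assumption, as are the names and the factor value):

```python
import numpy as np

def noisy_distances(dists, eta, outlier_idx=(), theta=3.0, rng=None):
    """Illustrative reading of (4.1): scale each distance by (1 + eps),
    eps ~ U(-eta, eta); a chosen subset is further multiplied by theta."""
    rng = np.random.default_rng() if rng is None else rng
    eps = rng.uniform(-eta, eta, size=len(dists))
    noisy = dists * (1.0 + eps)
    noisy[list(outlier_idx)] *= theta
    return noisy

d = np.array([1.0, 2.0, 3.0, 4.0])
out = noisy_distances(d, eta=0.02, outlier_idx=[2], rng=np.random.default_rng(0))
print(out)
```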

The relative error between the true location of the source, \(c\), and the location obtained using method \(M\), denoted \(c_M\), is reported as \(c^M_{re}\) in the tables below. We vary the error factor \(\eta \) and the number of sensors \(n\). For each pair \((n,\eta )\), one hundred instances are solved.
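The display defining \(c^M_{re}\) was not reproduced in this extraction; a plausible reading, consistent with the tabulated values, is the standard relative error, sketched here:

```python
import numpy as np

def relative_error(c_true, c_est):
    """Hypothetical reading of c^M_re: distance between estimated and
    true source locations, relative to the norm of the true location."""
    return np.linalg.norm(c_est - c_true) / np.linalg.norm(c_true)

print(relative_error(np.array([3.0, 4.0]), np.array([3.0, 4.5])))  # 0.1
```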

The methods in the tables are labelled according to the models, with some additional prefixes. Specifically, the *L* and *P* prefixes indicate how the position of the source, \(c\), is obtained: *L* denotes the least squares approach of (3.37) and *P* the Procrustes approach of (3.36). We choose \(\alpha = 1\) in **FNEDM**, and the constant \(\gamma \) for **PNEDM** in (3.30) is chosen to be 1000.

**Table 1** The mean relative error \(c^M_{re}\) of 100 simulations for varying numbers of sensors and error factors, with no outliers, for dimension \(r = 3\)

| Error factor \(\eta \) | \(\eta = 0.002\) | | | \(\eta = 0.02\) | | | \(\eta = 0.2\) | | |
|---|---|---|---|---|---|---|---|---|---|
| \(\#\) Sensors | 5 | 10 | 15 | 5 | 10 | 15 | 5 | 10 | 15 |
| L-NEDM | 0.0045 | 0.0014 | 0.0010 | 0.0408 | 0.0140 | 0.0120 | 0.3550 | 0.1466 | 0.1153 |
| P-NEDM | 0.0025 | 0.0013 | 0.0010 | 0.0231 | 0.0133 | 0.0117 | 0.2813 | 0.1385 | 0.1171 |
| SDR | 0.0024 | 0.0014 | 0.0010 | 0.0223 | 0.0137 | 0.0119 | 0.2739 | 0.1373 | 0.1164 |
| L-FNEDM | 0.0042 | 0.0013 | 0.0010 | 0.0356 | 0.0141 | 0.0119 | 0.2910 | 0.1395 | 0.1061 |
| P-FNEDM | 0.0024 | 0.0013 | 0.0010 | 0.0237 | 0.0134 | 0.0118 | 0.2623 | 0.1360 | 0.1088 |

**Table 2** The mean relative error \(c^M_{re}\) of 100 simulations for varying numbers of sensors and error factors, with no outliers, for dimension \(r = 3\)

| Error factor \(\eta \) | \(\eta = 0.005\) | | | \(\eta = 0.05\) | | | \(\eta = 0.15\) | | |
|---|---|---|---|---|---|---|---|---|---|
| \(\#\) Sensors | 5 | 10 | 15 | 5 | 10 | 15 | 5 | 10 | 15 |
| L-NEDM | 0.0101 | 0.0033 | 0.0027 | 0.0970 | 0.0328 | 0.0262 | 0.2473 | 0.1037 | 0.0786 |
| P-NEDM | 0.0070 | 0.0031 | 0.0027 | 0.0610 | 0.0320 | 0.0262 | 0.1925 | 0.1041 | 0.0760 |
| SDR | 0.0071 | 0.0031 | 0.0027 | 0.0576 | 0.0322 | 0.0261 | 0.1933 | 0.1030 | 0.0779 |
| L-FNEDM | 0.0090 | 0.0032 | 0.0026 | 0.0800 | 0.0311 | 0.0255 | 0.2151 | 0.1001 | 0.0769 |
| P-FNEDM | 0.0069 | 0.0031 | 0.0027 | 0.0536 | 0.0310 | 0.0258 | 0.1914 | 0.1000 | 0.0772 |

From Tables 1 and 2 we see that P-FNEDM generally has the smallest error, with L-FNEDM occasionally better. We also see that, as the number of towers \(n\) increases, the relative error \(c^M_{re}\) decreases; this is expected, since with more sensors the estimated location of the source should be more accurate.

To compare the overall performance of all the methods, we use the well known *performance profiles* [14]. The approach is outlined below.

For each pair \((n,\eta )\), the error of each method \(M\) is compared with the smallest error attained by any method. We denote this quotient the *performance ratio*, \(r_{n,\eta ,M}\). A value \(r_{n,\eta ,M} = 1\) means that method \(M\) is best for the pair \((n,\eta )\). In general, smaller values of \(r_{n,\eta ,M}\) indicate better performance. The function \(\psi _M(\tau )\) measures for how many pairs \((n,\eta )\) method \(M\) attains a performance ratio of \(\tau \) or better; it is monotonically non-decreasing, and larger values are better.
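A sketch of the performance-profile computation in the sense of Dolan and Moré [14]; the array layout and names below are illustrative:

```python
import numpy as np

def performance_profile(errors):
    """errors: (num_problems, num_methods) array of positive scores, lower is better.
    Returns the ratio matrix and psi(tau, m) = fraction of problems on which
    method m attains performance ratio r <= tau."""
    ratios = errors / errors.min(axis=1, keepdims=True)
    def psi(tau, m):
        return np.mean(ratios[:, m] <= tau)
    return ratios, psi

# Toy data: 3 problems, 2 methods.
errors = np.array([[1.0, 2.0],
                   [3.0, 1.5],
                   [2.0, 2.0]])
ratios, psi = performance_profile(errors)
print(psi(1.0, 0))  # method 0 is best on 2 of the 3 problems
```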

The performance profiles can be seen in Fig. 1a and b; the P-FNEDM approach has the best performance among all five methods. Using the Procrustes approach (3.36) is better than using the least squares approach (3.37). For recovering the location of the source, allowing the sensors to move, as in the **FNEDM** model, is better than fixing the sensors, as in **SDR**, or leaving them completely free, as in **NEDM**.

For the **FNEDM** model, the outliers are detected and removed using the \(\ell _1\) norm approach described in Sect. 3.1.5. We report the results with outliers added in the following Tables 3 and 4.

**Table 3** The mean relative error \(c^M_{re}\) of 100 simulations for varying numbers of sensors and error factors, with 1 outlier, for dimension \(r = 3\)

| Error factor \(\eta \) | \(\eta = 0.001\) | | | \(\eta = 0.01\) | | | \(\eta = 0.1\) | | |
|---|---|---|---|---|---|---|---|---|---|
| \(\#\) Sensors | 7 | 12 | 16 | 7 | 12 | 16 | 7 | 12 | 16 |
| L-RNEDM | 0.8076 | 0.6189 | 0.4579 | 0.8695 | 0.6376 | 0.4738 | 0.8006 | 0.5935 | 0.4068 |
| P-RNEDM | 1.0319 | 0.6789 | 0.4755 | 1.0819 | 0.6869 | 0.4677 | 0.9939 | 0.6374 | 0.4312 |
| SDR | 1.0618 | 0.7150 | 0.5398 | 1.0825 | 0.6981 | 0.5343 | 0.9968 | 0.6732 | 0.4983 |
| L-FNEDM | 0.1358 | 0.0546 | 0.0388 | 0.1556 | 0.0525 | 0.0402 | 0.2308 | 0.0799 | 0.0710 |
| P-FNEDM | 0.1364 | 0.0546 | 0.0388 | 0.1588 | 0.0527 | 0.0401 | 0.2150 | 0.0799 | 0.0708 |

**Table 4** The mean relative error \(c^M_{re}\) of 100 simulations for varying numbers of sensors and error factors, with 2 outliers, for dimension \(r = 3\)

| Error factor \(\eta \) | \(\eta = 0.001\) | | | \(\eta = 0.01\) | | | \(\eta = 0.1\) | | |
|---|---|---|---|---|---|---|---|---|---|
| \(\#\) Sensors | 7 | 12 | 16 | 7 | 12 | 16 | 7 | 12 | 16 |
| L-RNEDM | 0.7035 | 0.5299 | 0.3909 | 0.7686 | 0.5186 | 0.3905 | 0.7219 | 0.5296 | 0.4271 |
| P-RNEDM | 0.9533 | 0.5838 | 0.4488 | 0.9160 | 0.5817 | 0.4371 | 0.9324 | 0.6183 | 0.4739 |
| SDR | 0.9337 | 0.5386 | 0.4623 | 0.8905 | 0.5600 | 0.4390 | 0.8927 | 0.5917 | 0.4663 |
| L-FNEDM | 0.5777 | 0.1032 | 0.0571 | 0.5637 | 0.0961 | 0.0560 | 0.5860 | 0.1409 | 0.0878 |
| P-FNEDM | 0.5740 | 0.1033 | 0.0561 | 0.5388 | 0.0925 | 0.0544 | 0.5619 | 0.1380 | 0.0864 |

From Tables 3 and 4 we see clearly that, when outliers are added, \(\mathbf{FNEDM }\,\) outperforms both **SDR** and **NEDM** by a wide margin, since the outliers can be removed. This is also consistent with our earlier conclusion that the Procrustes approach (3.36) is better than the least squares approach (3.37).

## 5 Conclusion

We showed that the **SLS** formulation of the single source localization problem is inherently convex, by considering the semidefinite relaxation, **SDR**, of the **GTRS** formulation. The extreme points of the optimal set of **SDR** correspond exactly to the optimal solutions of the **SLS** formulation and these extreme points can be obtained by solving no more than \(r+1\) convex optimization problems.

We also analyzed several **EDM**-based relaxations of the **SLS** formulation and introduced the weighted facial reduction model **FNEDM**. The optimal value of **FNEDM** was shown to converge to the optimal value of **SLS** as \(\alpha \) increases. In our numerical tests, our newly proposed model **FNEDM** performed best at recovering the location of the source. Without any outliers present, the performance of each method improves as the number of towers increases; this is expected, since more information is available. All the methods tend to perform similarly as the number of towers increases, but the facial reduction model, **FNEDM**, using the Procrustes approach performs the best.

Finally, we used the \(\ell _1\) norm approach in Algorithm 3.2, to remove outlier measurements. In Tables 3 and 4 we demonstrate the effectiveness of this approach.

## Notes

### Acknowledgements

Open access funding provided by Royal Institute of Technology.

## References

- 1. Alfakih, A.Y.: Euclidean Distance Matrices and Their Applications in Rigidity Theory. Springer, Cham (2018)
- 2. Alfakih, A.Y., Khandani, A., Wolkowicz, H.: Solving Euclidean distance matrix completion problems via semidefinite programming. Comput. Optim. Appl. **12**(1–3), 13–30 (1999). A tribute to Olvi Mangasarian
- 3. Beck, A., Stoica, P., Li, J.: Exact and approximate solutions of source localization problems. IEEE Trans. Signal Process. **56**(5), 1770–1778 (2008)
- 4. Beck, A., Teboulle, M., Chikishev, Z.: Iterative minimization schemes for solving the single source localization problem. SIAM J. Optim. **19**(3), 1397–1416 (2008)
- 5. Borg, I., Groenen, P.: Modern multidimensional scaling: theory and applications. J. Educ. Meas. **40**(3), 277–280 (2003)
- 6. Candes, E., Rudelson, M., Tao, T., Vershynin, R.: Error correction via linear programming. In: Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science (FOCS'05), pp. 1–14. IEEE, New York (2005)
- 7. Candès, E.J., Romberg, J.K., Tao, T.: Stable signal recovery from incomplete and inaccurate measurements. Commun. Pure Appl. Math. **59**(8), 1207–1223 (2006)
- 8. Cheung, K.W., So, H.-C., Ma, W.-K., Chan, Y.-T.: Least squares algorithms for time-of-arrival-based mobile location. IEEE Trans. Signal Process. **52**(4), 1121–1130 (2004)
- 9. Cox, T.F., Cox, M.A.: Multidimensional Scaling. Chapman and Hall/CRC, Boca Raton (2000)
- 10. Crippen, G.M., Havel, T.F.: Distance Geometry and Molecular Conformation, vol. 74. Research Studies Press, Taunton (1988)
- 11. Critchley, F.: Dimensionality theorems in multidimensional scaling and hierarchical cluster analysis. In: Data Analysis and Informatics (Versailles, 1985), pp. 45–70. North-Holland, Amsterdam (1986)
- 12. Dattorro, J.: Convex Optimization & Euclidean Distance Geometry. Lulu.com (2010)
- 13. Ding, Y., Krislock, N., Qian, J., Wolkowicz, H.: Sensor network localization, Euclidean distance matrix completions, and graph realization. Optim. Eng. **11**(1), 45–66 (2010)
- 14. Dolan, E.D., Moré, J.J.: Benchmarking optimization software with performance profiles. Math. Program. **91**(2, Ser. A), 201–213 (2002)
- 15. Drusvyatskiy, D., Krislock, N., Cheung Voronin, Y.-L., Wolkowicz, H.: Noisy Euclidean distance realization: robust facial reduction and the Pareto frontier. SIAM J. Optim. **27**(4), 2301–2331 (2017)
- 16. Drusvyatskiy, D., Pataki, G., Wolkowicz, H.: Coordinate shadows of semidefinite and Euclidean distance matrices. SIAM J. Optim. **25**(2), 1160–1178 (2015)
- 17. Drusvyatskiy, D., Wolkowicz, H.: The many faces of degeneracy in conic optimization. Found. Trends Optim. **3**(2), 77–170 (2017)
- 18. Fang, H., O'Leary, D.P.: Euclidean distance matrix completion problems. Optim. Methods Softw. **27**(4–5), 695–717 (2012)
- 19. Gao, Y., Sun, D.: A majorized penalty approach for calibrating rank constrained correlation matrix problems. Technical Report (2010)
- 20. Golub, G.H., Van Loan, C.F.: Matrix Computations, 3rd edn. Johns Hopkins University Press, Baltimore (1996)
- 21. Gower, J.C.: Properties of Euclidean and non-Euclidean distance matrices. Linear Algebra Appl. **67**, 81–97 (1985)
- 22. Hayden, T.L., Wells, J., Liu, W.M., Tarazaga, P.: The cone of distance matrices. Linear Algebra Appl. **144**, 153–169 (1991)
- 23. Hiriart-Urruty, J.-B., Lemaréchal, C.: Fundamentals of Convex Analysis. Grundlehren Text Editions. Springer, Berlin (2001). Abridged version of Convex Analysis and Minimization Algorithms I and II (Springer, Berlin, 1993)
- 24. Koshima, H., Hoshen, J.: Personal locator services emerge. IEEE Spectr. **37**(2), 41–48 (2000)
- 25. Krislock, N., Wolkowicz, H.: Euclidean distance matrices and applications. In: Handbook on Semidefinite, Conic and Polynomial Optimization, International Series in Operations Research & Management Science, pp. 879–914. Springer, Berlin (2011)
- 26. Kundu, T.: Acoustic source localization. Ultrasonics **54**(1), 25–38 (2014)
- 27. Liberti, L., Lavor, C., Maculan, N., Mucherino, A.: Euclidean distance geometry and applications. SIAM Rev. **56**(1), 3–69 (2014)
- 28. Pataki, G.: On the rank of extreme matrices in semidefinite programs and the multiplicity of optimal eigenvalues. Math. Oper. Res. **23**(2), 339–358 (1998)
- 29. Pong, T.K., Wolkowicz, H.: The generalized trust region subproblem. Comput. Optim. Appl. **58**(2), 273–322 (2014)
- 30. Qi, H.-D.: A semismooth Newton method for the nearest Euclidean distance matrix problem. SIAM J. Matrix Anal. Appl. **34**(1), 67–93 (2013)
- 31. Qi, H.-D., Yuan, X.: Computing the nearest Euclidean distance matrix with low embedding dimensions. Math. Program. **147**(1), 351–389 (2014)
- 32. Schoenberg, I.J.: Metric spaces and positive definite functions. Trans. Am. Math. Soc. **44**(3), 522–536 (1938)
- 33. Stern, R., Wolkowicz, H.: Trust region problems and nonsymmetric eigenvalue perturbations. SIAM J. Matrix Anal. Appl. **15**(3), 755–778 (1994)
- 34. Stern, R., Wolkowicz, H.: Indefinite trust region subproblems and nonsymmetric eigenvalue perturbations. SIAM J. Optim. **5**(2), 286–313 (1995)
- 35. Tao, P.D., An, L.T.H.: Convex analysis approach to d.c. programming: theory, algorithms and applications. Acta Math. Vietnam. **22**(1), 289–355 (1997)
- 36. Tao, P.D., An, L.T.H.: The DC (difference of convex functions) programming and DCA revisited with DC models of real world nonconvex optimization problems. Ann. Oper. Res. **133**(1–4), 23–46 (2005)
- 37. Tunçel, L.: Polyhedral and Semidefinite Programming Methods in Combinatorial Optimization. Fields Institute Monographs, vol. 27. American Mathematical Society, Providence, RI (2010)
- 38. Warrior, J., McHenry, E., McGee, K.: They know where you are [location detection]. IEEE Spectr. **40**(7), 20–25 (2003)
- 39. Wolkowicz, H., Saigal, R., Vandenberghe, L. (eds.): Handbook of Semidefinite Programming. International Series in Operations Research & Management Science, vol. 27. Kluwer Academic Publishers, Boston, MA (2000)

## Copyright information

**Open Access**This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.