GDOP Analysis for Positioning Design

Wireless Positioning: Principles and Practice

Part of the book series: Navigation: Science and Technology (NASTECH)

Abstract

A practical indoor system used for tracking people is typically based on both fixed anchor nodes and mobile nodes attached to people or other objects. In this mode of operation, mobile nodes whose position coordinates have been previously determined can contribute to the position determination of other mobile nodes whose position is sought. While a design can specify the location of the anchor nodes for adequate position performance, the mobile nodes’ positions can only be specified in a statistical sense.


Notes

  1. This assumes the mobile node position is not near the edge of the coverage area.

References

  • Bulusu N, Heidemann J, Estrin D (2001) Adaptive beacon placement. In: Proceedings of international conference on distributed computing systems, Phoenix, Arizona, USA, April 2001, pp 489–498

  • Gentile C, Kik A (2006) An evaluation of ultra wideband technology for indoor ranging. In: Proceedings of IEEE global telecommunications conference (GLOBECOM), San Francisco, California, USA, Nov 2006, pp 1–6

  • Hedley M, Humphrey D, Ho P (2008) System and algorithms for accurate indoor tracking using low-cost hardware. In: Proceedings of IEEE/ION position, location and navigation symposium, Monterey, California, USA, May 2008, pp 633–640

  • Humphrey D, Hedley M (2008) Super-resolution time of arrival for indoor localization. In: Proceedings of international conference on communications (ICC), Beijing, China, May 2008, pp 3286–3290

  • Lee HB (1975) A novel procedure for assessing the accuracy of hyperbolic multilateration systems. IEEE Trans Aerosp Electron Syst 11(1):2–15

  • Meyer C (2000) Matrix analysis and applied linear algebra. Society for Industrial and Applied Mathematics

  • Miao H, Yu K, Juntti M (2007) Positioning for NLOS propagation: algorithm derivations and Cramer-Rao bounds. IEEE Trans Veh Technol 56(5):2568–2580

  • Sathyan T, Humphrey D, Hedley M (2011) A system and algorithms for accurate radio localization using low-cost hardware. IEEE Trans Syst Man Cybern—Part C 41(2):211–222

  • Sharp I, Yu K, Guo Y (2009) GDOP analysis for positioning system design. IEEE Trans Veh Technol 58(7):3371–3382

  • Schmidt R (1986) Multiple emitter location and signal parameter estimation. IEEE Trans Antennas Propag 34(3):276–280

  • Spirito M (2001) On the accuracy of cellular mobile station location estimation. IEEE Trans Veh Technol 50(3):674–685

  • Teunissen P (2002) Adjustment theory—an introduction. Series on mathematical geodesy and positioning. VSSD

  • Torrieri D (1984) Statistical theory of passive location systems. IEEE Trans Aerosp Electron Syst 20(2):183–198

  • Yick J, Bharathidasan A, Pasternack G, Mukherjee B, Ghosal D (2004) Optimizing placement of beacons and data loggers in a sensor network—a case study. In: Proceedings of IEEE wireless communications and networking conference (WCNC), Atlanta, Georgia, USA, March 2004, pp 2486–2491

  • Yu K (2007) 3-D localization error analysis in wireless networks. IEEE Trans Wirel Commun 6(10):3473–3481

  • Yu K, Guo Y (2008) Improved positioning algorithms for nonline-of-sight environments. IEEE Trans Veh Technol 57(4):2342–2353

  • Yu K, Sharp I, Guo Y (2009) Ground-based wireless positioning. Wiley-IEEE Press, Chippenham

  • Zhu J (1992) Calculation of geometric dilution of precision. IEEE Trans Aerosp Electron Syst 28(3):893–895


Author information


Correspondence to Ian Sharp.

Appendices

Annex A: Expectation Calculation for the Inverse of the \( {\varvec{\Phi}} \) Matrix

In this Annex the details of deriving the inverse of the \( {\varvec{\Phi}} \) matrix in (14.9) are provided. The \( {\varvec{\Phi}} \) matrix is the sum of two matrices: a mean component plus a random component with zero mean. In general there is no simple expression for the inverse of the sum of two matrices, so calculating the inverse requires extensive analysis. Ideally the inverse should be expressed in the same form as (14.9), namely a mean component matrix plus a zero-mean random component. The following analysis provides an approximation to such a solution.

Because the mean components of the \( {\varvec{\Phi}} \) matrix are much larger than the random components, as seen from (14.15a) and (14.15b), one possible method is to apply the Neumann series (Meyer 2000) for the matrix inverse, which converges provided the spectral radius of \( {\mathbf{A}} \) is less than unity, namely

$$ \;({\mathbf{I}} - {\mathbf{A}})^{ - 1} = {\mathbf{I}} + \sum\limits_{n = 1}^{\infty } {{\mathbf{A}}^{n} } $$
(14.39)

Applying (14.39) to the inverse of the \( \,{\varvec{\Phi}}\, \) matrix yields

$$ \begin{aligned} \;({\varvec{\Phi}}_{0} + {\varvec{\Phi}}_{\varepsilon } )^{ - 1} = &\,[{\varvec{\Phi}}_{0} ({\mathbf{I}} - ( - {\varvec{\Phi}}_{0}^{ - 1} {\varvec{\Phi}}_{\varepsilon } ))]^{ - 1} \\ = &\,{\varvec{\Phi}}_{0}^{ - 1} + \left[ {\sum\limits_{n = 1}^{\infty } {( - 1)^{n} ({\varvec{\Phi}}_{0}^{ - 1} {\varvec{\Phi}}_{\varepsilon } )^{n} } } \right]{\varvec{\Phi}}_{0}^{ - 1} \\ = &\,({\varvec{\Phi}}_{0}^{ - 1} + {\varvec{\Delta}}) - {\varvec{\Phi}}_{0}^{ - 1} {\varvec{\Phi}}_{\varepsilon } {\varvec{\Phi}}_{0}^{ - 1} \\ \end{aligned} $$
(14.40a)

where

$$ \;{\varvec{\Delta}} = \left[ {\sum\limits_{n = 2}^{\infty } {( - 1)^{n} ({\varvec{\Phi}}_{0}^{ - 1} {\varvec{\Phi}}_{\varepsilon } )^{n} } } \right]{\varvec{\Phi}}_{0}^{ - 1} $$
(14.40b)

The first-order solution ignores the (small) \( {\varvec{\Delta}} \) term, so that

$$ \;({\varvec{\Phi}}_{0} + {\varvec{\Phi}}_{\varepsilon } )^{ - 1} \approx {\varvec{\Phi}}_{0}^{ - 1} - {\varvec{\Phi}}_{0}^{ - 1} {\varvec{\Phi}}_{\varepsilon } {\varvec{\Phi}}_{0}^{ - 1} $$
(14.41)
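
As a quick numerical illustration of (14.41) (not part of the original chapter), the following Python sketch compares the exact inverse with the first-order approximation for a small random perturbation of a fixed matrix; the particular \( {\varvec{\Phi}}_{0} \) and the perturbation scale are illustrative assumptions only.

```python
# Minimal sketch: first-order (Neumann) approximation of (Phi0 + PhiEps)^-1.
# The matrices below are illustrative assumptions, not values from the text.
import numpy as np

rng = np.random.default_rng(0)

phi0 = np.diag([4.0, 4.0, 8.0])              # stand-in mean matrix
phi_eps = 0.1 * rng.standard_normal((3, 3))  # small zero-mean perturbation

phi0_inv = np.linalg.inv(phi0)
exact = np.linalg.inv(phi0 + phi_eps)
approx = phi0_inv - phi0_inv @ phi_eps @ phi0_inv   # Eq. (14.41)

# The residual is second order in phi_eps, so it should be far smaller
# than the first-order correction term itself.
print(np.max(np.abs(exact - approx)))                  # residual, O(eps^2)
print(np.max(np.abs(phi0_inv @ phi_eps @ phi0_inv)))   # correction, O(eps)
```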

Now consider the calculation of the expectation (or mean) of the inverse. Because the expectation of the \( {\varvec{\Phi}}_{\varepsilon } \) matrix is a null matrix, it is clear that

$$ \;E[({\varvec{\Phi}}_{0} + {\varvec{\Phi}}_{\varepsilon } )^{ - 1} ] = {\varvec{\Phi}}_{0}^{ - 1} + E[{\varvec{\Delta}}] $$
(14.42)

However, rather than calculating \( E[{\varvec{\Delta}}] \) directly from (14.40b), which involves an infinite series, an alternative, more direct approach is taken. In particular, from (14.7) the inverse of the 3 × 3 \( {\varvec{\Phi}} \) matrix can be determined directly. Consider one element of this inverse, namely

$$ [{\varvec{\Phi}}^{ - 1} ]_{1,1} = \frac{{N_{R} \sum\limits_{i} {\beta_{i}^{2} } - \left( {\sum\limits_{i} {\beta_{i} } } \right)^{2} }}{D} $$
(14.43a)

where the denominator is given by

$$ \begin{aligned} D = N_{R} \sum\limits_{i} {\alpha_{i}^{2} } \sum\limits_{i} {\beta_{i}^{2} } - N_{R} \left( {\sum\limits_{i} {\alpha_{i} \beta_{i} } } \right)^{2} - \sum\limits_{i} {\alpha_{i}^{2} } \left( {\sum\limits_{i} {\beta_{i} } } \right)^{2} \hfill \\ \, - \sum\limits_{i} {\beta_{i}^{2} } \left( {\sum\limits_{i} {\alpha_{i} } } \right)^{2} + 2\left( {\sum\limits_{i} {\alpha_{i} \beta_{i} } } \right)\left( {\sum\limits_{i} {\alpha_{i} } } \right)\left( {\sum\limits_{i} {\beta_{i} } } \right) \hfill \\ \end{aligned} $$
(14.43b)

The calculation of the expectation of (14.43a) cannot be performed by calculating the expectation of the numerator and denominator and dividing. However, the problem can be solved by expanding the inverse of the denominator as a Taylor series, so that the expectation can be determined as a summation of the expectation of each of the Taylor series terms. The expectation of (14.43b) can be calculated as

$$ E[D] = D_{0} = \frac{{N_{R} (N_{R} - 1)(N_{R} - 2)}}{4} $$
(14.44a)

so that

$$ D = D_{0} + d = D_{0} \left( {1 + \frac{d}{{D_{0} }}} \right) $$
(14.44b)

where

$$ d = D - D_{0} $$
(14.44c)

is the zero-mean random component. Applying (14.44b) to (14.43a) and expanding \( D^{ - 1} \) as a Taylor series yields

$$ \,[{\varvec{\Phi}}^{ - 1} ]_{1,1} = D_{0}^{ - 1} \left( {N_{R} \sum\limits_{i} {\beta_{i}^{2} } - \left( {\sum\limits_{i} {\beta_{i} } } \right)^{2} } \right)\left( {1 - \frac{d}{{D_{0} }} + \left( {\frac{d}{{D_{0} }}} \right)^{2} + \cdots } \right) $$
(14.45)

Finally the expected value of (14.43a) can be approximated by the summation of the expectation of the individual terms in (14.45), resulting in

$$ \begin{aligned} E[[{\varvec{\Phi}}^{ - 1} ]_{1,1} ] = &\,D_{0}^{ - 1} \left( {N_{R} E\left[ {\sum\limits_{i} {\beta_{i}^{2} } } \right] - E\left[ {\left( {\sum\limits_{i} {\beta_{i} } } \right)^{2} } \right]} \right)\left( {1 + E\left[ {\left( {\frac{d}{{D_{0} }}} \right)^{2} } \right]} \right) \\ = &\,\left( {\frac{4}{{N_{R} (N_{R} - 1)(N_{R} - 2)}}} \right)\,\left( {\frac{{N_{R}^{2} }}{2} - \frac{{N_{R} }}{2}} \right)\,\left( {1 + \left( {\frac{{\sigma_{d} }}{{D_{0} }}} \right)^{2} } \right) \\ \end{aligned} $$
(14.46)

where it has been assumed that \( d \) is statistically independent of the other random terms in (14.46), \( \,\sigma_{d} \) is the standard deviation of \( d \), and the Taylor series is limited to three terms. While it is possible to calculate \( \,\sigma_{d} \) analytically, the number of terms makes this calculation rather cumbersome. As the term involving \( \,\sigma_{d} \) in (14.46) is rather small, one approach is to simply ignore the small correction associated with it. In this case the expectation becomes

$$ E[[{\varvec{\Phi}}^{ - 1} ]_{1,1} ] \approx \frac{2}{{N_{R} - 2}}\, = \frac{2}{{N_{R} }} + \frac{4}{{N_{R}^{2} }} + \frac{8}{{N_{R}^{3} }} + \cdots \, $$
(14.47)

The first term in the series in (14.47) can be recognized as the corresponding component of \( {\varvec{\Phi}}_{0}^{ - 1} \), which is the first-order inverse component from (14.41). The other terms in (14.47) are small corrections which approach zero as the number of nodes in range becomes large. Note also that including \( \sigma_{d} \) in the calculation only affects the last (cubic) term in (14.47). The other components of the inverse can be calculated similarly. In particular, the other two diagonal components are

$$ \begin{aligned} E[[{\varvec{\Phi}}^{ - 1} ]_{2,2} ] \approx \frac{2}{{N_{R} }} + \frac{4}{{N_{R}^{2} }} + \frac{8}{{N_{R}^{3} }} + \cdots \, \hfill \\ E[[{\varvec{\Phi}}^{ - 1} ]_{3,3} ] \approx \frac{1}{{N_{R} }} + \frac{2}{{N_{R}^{2} }} + \frac{4}{{N_{R}^{3} }} + \cdots \, \hfill \\ \end{aligned} $$
(14.48)

The remaining components of the expectation can be similarly calculated to be all zero. Thus ignoring the \( \,\sigma_{d} \) effect and limiting the series to three terms, the expected value of the inverse of the \( \,{\varvec{\Phi}}\, \) matrix is then as given by (14.17a, 14.17b).
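
Because the analysis leading to (14.47) and (14.48) involves several approximations, a Monte Carlo cross-check is useful. The sketch below assumes \( \alpha_{i} = \cos \theta_{i} \), \( \beta_{i} = \sin \theta_{i} \) with \( \theta_{i} \) uniformly distributed, and the symmetric 3 × 3 structure of \( {\varvec{\Phi}} \) implied by (14.43a) and (14.43b); these assumptions follow the surrounding text, but the code itself is not from the chapter.

```python
# Monte Carlo check of (14.47): E[[Phi^-1]_{1,1}] ~ 2 / (N_R - 2),
# assuming alpha_i = cos(theta_i), beta_i = sin(theta_i), theta_i uniform.
import numpy as np

rng = np.random.default_rng(1)
NR, trials = 10, 100_000

acc = 0.0
for _ in range(trials):
    theta = rng.uniform(0.0, 2.0 * np.pi, NR)
    a, b = np.cos(theta), np.sin(theta)
    # 3x3 Phi structure consistent with (14.43a) and (14.43b)
    phi = np.array([[a @ a, a @ b, a.sum()],
                    [a @ b, b @ b, b.sum()],
                    [a.sum(), b.sum(), float(NR)]])
    acc += np.linalg.inv(phi)[0, 0]

# Rare near-singular geometries make the estimate somewhat noisy, and the
# sigma_d correction is ignored in (14.47), so only rough agreement is expected.
print(acc / trials, 2.0 / (NR - 2))
```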

Now consider the random component of the inverse. From (14.41), the first order solution to the random component of the inverse is

$$ {\tilde{\mathbf{\varPhi }}}_{\varepsilon } = - {\varvec{\Phi}}_{0}^{ - 1} {\varvec{\Phi}}_{\varepsilon } {\varvec{\Phi}}_{0}^{ - 1} $$
(14.49)

To calculate higher-order solutions, one needs to consider the random components of \( {\varvec{\Delta}} \). However, such analysis results in very complex analytical expressions, and thus only the first-order solution (14.49) is used in the analysis.

Annex B: Calculation of Expectations

14.B.1 Expectation Calculation Based on Surface Integrals

This section briefly presents the derivation of the first expectation in Table 14.1, based on the surface integral technique. The required expectation is

$$ \;E[m_{x}^{2} ] = \frac{1}{{N_{R}^{2} }}E\left[ {\left( {\sum\limits_{i} {x_{i} } } \right)^{2} } \right] = \frac{1}{{N_{R}^{2} }}E\left[ {\sum\limits_{i} {x_{i}^{2} } } \right] $$
(14.50)

where the last expression in (14.50) is obtained because the mean of the x-coordinates is zero and the samples are statistically independent, as it is assumed that the distribution of anchor nodes is statistically uniform throughout a circular coverage area whose center is the origin of the coordinate system. The mean of the summation of the squared x-coordinate of the anchor node positions can be determined by use of an appropriate surface integral. That is, integrating over one quadrant of the circle produces

$$ \begin{aligned} E\left[ {\sum\limits_{i} {x_{i}^{2} } } \right] = & \sum\limits_{i} E [x_{i}^{2} ] = N_{R} E[x_{i}^{2} ] \\ = & \frac{{N_{R} }}{{\pi R_{\hbox{max} }^{2} /4}}\int\limits_{0}^{{R{}_{\hbox{max} }}} {\int\limits_{0}^{{\sqrt {R_{\hbox{max} }^{2} - y^{2} } }} {x^{2} dx\,dy} } = \frac{{N_{R} R_{\hbox{max} }^{2} }}{4} \\ \end{aligned} $$
(14.51)
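
For completeness, the quadrant integral in (14.51), a step the text leaves implicit, evaluates as

$$ \int\limits_{0}^{{R_{\hbox{max} } }} {\int\limits_{0}^{{\sqrt {R_{\hbox{max} }^{2} - y^{2} } }} {x^{2} \,dx\,dy} } = \frac{1}{3}\int\limits_{0}^{{R_{\hbox{max} } }} {\left( {R_{\hbox{max} }^{2} - y^{2} } \right)^{3/2} dy} = \frac{1}{3} \cdot \frac{{3\pi R_{\hbox{max} }^{4} }}{16} = \frac{{\pi R_{\hbox{max} }^{4} }}{16} $$

so that multiplying by \( N_{R} /(\pi R_{\hbox{max} }^{2} /4) \) gives \( N_{R} R_{\hbox{max} }^{2} /4 \) as stated.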

Substituting the results in (14.51) into (14.50) produces

$$ E[m_{x}^{2} ] = \frac{1}{{N_{R}^{2} }}E\left[ {\sum\limits_{i} {x_{i}^{2} } } \right] = \frac{{R_{\hbox{max} }^{2} }}{{4N_{R} }} $$
(14.52)
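
This result is easily verified numerically. The short sketch below (an illustration, not part of the chapter) draws anchor positions uniformly over a disk of radius \( R_{\hbox{max} } \) and compares the sample mean of \( m_{x}^{2} \) against (14.52).

```python
# Monte Carlo check of (14.52): E[m_x^2] = Rmax^2 / (4 * N_R),
# for anchors distributed uniformly over a disk of radius Rmax.
import numpy as np

rng = np.random.default_rng(2)
Rmax, NR, trials = 100.0, 8, 200_000

# Uniform sampling in a disk: r = Rmax * sqrt(U), theta uniform on [0, 2*pi)
r = Rmax * np.sqrt(rng.uniform(size=(trials, NR)))
theta = rng.uniform(0.0, 2.0 * np.pi, size=(trials, NR))
m_x = (r * np.cos(theta)).mean(axis=1)   # m_x for each trial

print((m_x ** 2).mean(), Rmax ** 2 / (4 * NR))   # should roughly agree
```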

14.B.2 Calculation of Expectation of Products of Means of Random Variables

This subsection gives an illustration of the method of calculating the expectation of the product of means of random variables, as described in Sect. 14.3.2, paragraph 2. The method can be applied generally, but will be illustrated with a particular example, namely the calculation of \( E\left[ {m_{x} m_{\alpha } m_{r} } \right] \)—see Table 14.1, item 6. The method is based on splitting the summations associated with the mean operations into two groups, one where all the indices are the same, and one where they are different. From the definition of a mean \( \left( m \right) \), the above expectation of the product of three means can be written as

$$ E\left[ {m_{x} m_{\alpha } m_{r} } \right] = \frac{1}{{N_{R}^{3} }}E\left[ {\sum\limits_{i} {x_{i} } \sum\limits_{j} {\alpha_{j} } \sum\limits_{k} {r_{k} } } \right] = \frac{1}{{N_{R}^{3} }}E\left[ {\sum\limits_{i} {\alpha_{i} r_{i} } \sum\limits_{j} {\alpha_{j} } \sum\limits_{k} {r_{k} } } \right] $$
(14.53)

where all the summations are over all nodes in range \( \left( {N_{R} } \right) \), and \( \,\alpha \), \( r \) are statistically independent random variables. First consider the group with all the indices the same. In this case (14.53) becomes

$$ \begin{aligned} \frac{1}{{N_{R}^{3} }}E\left[ {\sum\limits_{i} {\alpha_{i} r_{i} } \sum\limits_{j} {\alpha_{j} } \sum\limits_{k} {r_{k} } } \right] = &\,\frac{1}{{N_{R}^{3} }}E\left[ {\sum\limits_{i} {\alpha_{i}^{2} r_{i}^{2} } } \right] = \frac{1}{{N_{R}^{2} }}E\left[ {\alpha^{2} } \right]\,E\left[ {r^{2} } \right] \\ = &\,\frac{1}{{N_{R}^{2} }}\left( {\frac{1}{2}} \right)\left( {\frac{{R_{\hbox{max} }^{2} }}{2}} \right) = \frac{1}{4}\left( {\frac{{R_{\hbox{max} } }}{{N_{R} }}} \right)^{2} \\ \end{aligned} $$
(14.54)

where the expectation of \( \,r^{2} \) can be calculated by the surface integral method described in 14.B.1. Now consider the group where all the indices are different \( (i \ne j \ne k) \). In this case (14.53) becomes

$$ \begin{aligned} \frac{1}{{N_{R}^{3} }}E\left[ {\sum\limits_{i} {\alpha_{i} r_{i} } \sum\limits_{j} {\alpha_{j} } \sum\limits_{k} {r_{k} } } \right] = &\,\frac{1}{{N_{R}^{3} }}E\left[ {\sum\limits_{i} {\sum\limits_{j} {\sum\limits_{k} {\alpha_{i} \alpha_{j} r_{i} r_{k} } } } } \right] \\ = &\,\frac{1}{{N_{R}^{3} }}\sum\limits_{i} {\sum\limits_{j} {\sum\limits_{k} {E\left[ {\alpha_{i} } \right]} } } E\left[ {\alpha_{j} } \right]E\left[ {r_{i} } \right]E\left[ {r_{k} } \right] = 0 \\ \end{aligned} $$
(14.55)

as \( E\left[ \alpha \right] = 0 \) and, with the indices all different, the random variables are statistically independent. Next consider the subgroup where \( x \) and \( \alpha \) have the same index but \( r \) has a different index. In this case (14.53) becomes

$$ \begin{aligned} \frac{1}{{N_{R}^{3} }}E\left[ {\sum\limits_{i} {\alpha_{i} r_{i} } \sum\limits_{i} {\alpha_{i} } \sum\limits_{j} {r_{j} } } \right] = & \frac{1}{{N_{R}^{3} }}E\left[ {\sum\limits_{i} {\sum\limits_{j} {\alpha_{i}^{2} r_{i} r_{j} } } } \right] = \frac{1}{{N_{R}^{3} }}\sum\limits_{i} {\sum\limits_{j} {E\left[ {\alpha_{i}^{2} } \right]} } \,E\left[ {r_{i} r_{j} } \right] \\ = & \frac{{N_{R}^{2} - N_{R} }}{{N_{R}^{3} }}\left( {\frac{1}{2}} \right)E\left[ r \right]^{2} \quad \\ \end{aligned} $$
(14.56)

where the number of elements in the group is \( N_{R}^{2} - N_{R} \). Using the surface integral method described in 14.B.1, it can be shown that \( E\left[ r \right] = \frac{2}{3}R_{\hbox{max} } \), so that

$$ \frac{1}{{N_{R}^{3} }}E\left[ {\sum\limits_{i} {\alpha_{i} r_{i} } \sum\limits_{i} {\alpha_{i} } \sum\limits_{j} {r_{j} } } \right] = \frac{2}{9}\frac{{R_{\hbox{max} }^{2} }}{{N_{R} }} - \frac{2}{9}\left( {\frac{{R_{\hbox{max} } }}{{N_{R} }}} \right)^{2} $$
(14.57)

Similar types of calculations show that the expectation of all other subgroups is zero as \( E\left[ \alpha \right] = 0 \). Finally, combining (14.54) and (14.57), the required expectation in (14.53) (see also Table 14.1, item 6) is

$$ E\left[ {m_{x} m_{\alpha } m_{r} } \right] = \frac{2}{9}\frac{{R_{\hbox{max} }^{2} }}{{N_{R} }} + \frac{1}{36}\left( {\frac{{R_{\hbox{max} } }}{{N_{R} }}} \right)^{2} $$
(14.58)

The above example illustrates how, by splitting the summations associated with the mean operations into subgroups, the expectation can be calculated, with most subgroups having zero expectation. A numerical cross-check of (14.58) is sketched below.
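
The check assumes, as in 14.B.1, that anchors are uniformly distributed over a disk of radius \( R_{\hbox{max} } \), so that \( \alpha \) and \( r \) are independent with \( E[\alpha ] = 0 \) and \( E[r] = 2R_{\hbox{max} } /3 \); the code is illustrative only.

```python
# Monte Carlo check of (14.58): E[m_x m_alpha m_r] for anchors uniform in a
# disk of radius Rmax, with x_i = r_i * cos(theta_i) and alpha_i = cos(theta_i).
import numpy as np

rng = np.random.default_rng(3)
Rmax, NR, trials = 1.0, 8, 500_000

r = Rmax * np.sqrt(rng.uniform(size=(trials, NR)))
alpha = np.cos(rng.uniform(0.0, 2.0 * np.pi, size=(trials, NR)))

m_x = (alpha * r).mean(axis=1)   # mean x-coordinate per trial
m_a = alpha.mean(axis=1)         # mean direction cosine per trial
m_r = r.mean(axis=1)             # mean range per trial

estimate = (m_x * m_a * m_r).mean()
theory = (2 / 9) * Rmax**2 / NR + (1 / 36) * (Rmax / NR) ** 2
print(estimate, theory)          # should roughly agree
```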


Copyright information

© 2019 Springer Nature Singapore Pte Ltd.

About this chapter


Cite this chapter

Sharp, I., Yu, K. (2019). GDOP Analysis for Positioning Design. In: Wireless Positioning: Principles and Practice. Navigation: Science and Technology. Springer, Singapore. https://doi.org/10.1007/978-981-10-8791-2_14

  • DOI: https://doi.org/10.1007/978-981-10-8791-2_14


  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-10-8790-5

  • Online ISBN: 978-981-10-8791-2

  • eBook Packages: Engineering, Engineering (R0)
