
DOA estimation with double L-shaped array based on Hadamard product and joint diagonalization in the presence of sensor gain-phase errors


Abstract

A direction-of-arrival (DOA) estimation method based on a double L-shaped array is presented for the case of unknown sensor gain-phase errors. The double L-shaped geometry is chosen because its sub-arrays share the largest number of elements and the rotational invariance property can be exploited. The proposed method proceeds as follows. (1) If there is a single signal, the gain errors are first estimated and removed using the diagonal of the covariance matrix of the array output. The array is then rotated by an unknown angle, and the DOA is estimated from the relationship between the signal subspace and the signal steering vector. (2) If there is more than one signal, the gain errors are eliminated in the same way as in the single-signal case, and the phase errors are then removed by taking the Hadamard product of the (cross-)covariance matrix with its conjugate. After the errors are eliminated, the DOAs are estimated using the rotational invariance property and orthogonal joint diagonalization applied to the Hadamard product. The method requires neither calibration sources nor a multidimensional parameter search, and its performance is independent of the phase errors. Simulation results demonstrate the effectiveness of the proposed method.
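As an illustrative aside (not part of the original paper), the following minimal NumPy sketch shows why the Hadamard product of a covariance matrix with its conjugate is insensitive to sensor phase errors, which is the mechanism exploited in step (2). The array geometry, source powers and DOAs below are arbitrary assumptions, and gain errors and noise are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
N, lam, d = 8, 1.0, 0.25                      # sensors, wavelength, spacing (assumed)
doas = np.deg2rad([-20.0, 35.0])              # two uncorrelated far-field sources (assumed)
A = np.exp(-2j * np.pi * d / lam *
           np.outer(np.arange(N), np.sin(doas)))      # ideal steering matrix
R_ideal = A @ np.diag([1.0, 2.0]) @ A.conj().T        # error-free, noise-free covariance

phases = np.exp(1j * rng.uniform(-np.pi, np.pi, N))   # unknown sensor phase errors
Psi = np.diag(phases)
R_err = Psi @ R_ideal @ Psi.conj().T                  # covariance observed with phase errors

# Element-wise, (R_err o conj(R_err))_{mn} = |R_ideal_{mn}|^2: the phase terms cancel.
print(np.allclose(R_err * R_err.conj(), R_ideal * R_ideal.conj()))   # True
```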


References

  • Blunt, S. D., Chan, T., & Gerlach, K. (2011). Robust DOA estimation: The reiterative superresolution (RISR) algorithm. IEEE Transactions on Aerospace and Electronic Systems, 47(1), 332–346.

  • Cao, S., & Ye, Z. (2013). A Hadamard product based method for DOA estimation and gain-phase error calibration. IEEE Transactions on Aerospace and Electronic Systems, 49(2), 1224–1233.

  • Capon, J. (1969). High-resolution frequency-wavenumber spectrum analysis. Proceedings of the IEEE, 57, 1408–1418.

  • Cheng, Q. (2000). Asymptotic performance of optimal gain- and-phase estimators of sensor arrays. IEEE Transactions on Signal Processing, 48(12), 3587–3590.

  • Ferréol, A., Larzabal, P., & Viberg, M. (2010). Statistical analysis of the MUSIC algorithm in the presence of modeling errors, taking into account the resolution probability. IEEE Transactions on Signal Processing, 58(8), 4156–4166.

  • Friedlander, B., & Weiss, A. J. (1993). Performance of direction-finding systems with sensor gain and phase uncertainties. Circuits, Systems and Signal Processing, 12(1), 3–33.

  • Godara, L. C. (1997). Application of antenna arrays to mobile communications, Part II: Beam-forming and direction-of-arrival considerations. Proceedings of the IEEE, 85(8), 1195–1245.

  • Krim, H., & Viberg, M. (1996). Two decades of array signal processing research: The parametric approach. IEEE Signal Processing Magazine, 13(3), 67–94.

  • Li, J., Stoica, P., & Wang, Z. (2003). On robust Capon beamforming and diagonal loading. IEEE Transactions on Signal Processing, 51(7), 1702–1715.

  • Li, Y., et al. (2006). Theoretical analyses of gain and phase uncertainty calibration with optimal implementation for linear equispaced array. IEEE Transactions on Signal Processing, 54(2), 712–723.

  • Liu, A., et al. (2011). An eigenstructure method for estimating DOA and sensor gain-phase errors. IEEE Transactions on Signal Processing, 59(12), 5944–5956.

  • Ng, B. P., et al. (2009). A practical simple geometry and gain/phase calibration technique for antenna array processing. IEEE Transactions on Antennas and Propagation, 57(7), 1963–1972.

  • Paulraj, A., & Kailath, T. (1985). Direction of arrival estimation by Eigen-structure methods with unknown sensor gain and phase. In Proceedings of IEEE (ICASSP’85) (pp. 640–643).

  • Roy, R., & Kailath, T. (1989). ESPRIT-estimation of signal parameters via rotational invariance techniques. IEEE Transactions on Acoustics, Speech, and Signal Processing, 37(7), 984–995.

  • Schmidt, R. O. (1986). Multiple emitter location and signal parameter estimation. IEEE Transactions on Antennas and Propagation, 34(3), 276–280.

  • Stoica, P., & Sharman, K. C. (1990). Maximum likelihood methods for direction of arrival estimation. IEEE Transactions on Acoustics, Speech, and Signal Processing, 38(7), 1132–1143.

  • Stoica, P., Wang, Z., & Li, J. (2005). Extended derivation of MUSIC in presence of steering vector errors. IEEE Transactions on Signal Processing, 53(3), 1209–1211.

  • Marcos, S., Marsal, A., & Benidir, M. (1995). The propagator method for source bearing estimation. Signal Processing, 42(2), 121–138.

  • Wang, B. H., Wang, Y. L., & Chen, H. (2003). Array calibration of angularly dependent gain and phase uncertainties with instrumental sensors. In IEEE international symposium on phased array systems and technology (pp. 182–186).

  • Wang, B. H., Wang, Y. L., Chen, H., et al. (2004). Array calibration of angularly dependent gain and phase uncertainties with carry-on instrumental sensors. Science in China Series F-Information Sciences, 47(6), 777–792.

  • Weiss, A. J., & Friedlander, B. (1990). Eigenstructure methods for direction finding with sensor gain and phase uncertainties. Circuits, Systems and Signal Processing, 9(3), 271–300.

  • Xie, W., Wang, C., Wen, F., et al. (2017a). DOA and gain-phase errors estimation for noncircular sources with central symmetric array. IEEE Sensors Journal, 17(10), 3068–3078.

  • Xie, W., Wen, F., Liu, J., & Wan, Q. (2017b). Source association, DOA, and fading coefficients estimation for multipath signals. IEEE Transactions on Signal Processing, 65(11), 2773–2786.


Acknowledgements

The authors would like to thank the anonymous reviewers for their many insightful comments and suggestions, which helped improve the quality and readability of this paper.

Author information

Corresponding author

Correspondence to Weiwei Hu.

Additional information

This work was supported by NUPTSF (NY215012), the Key Research and Development Program of Jiangsu Province (BE2015701), the Natural Science Foundation of Jiangsu Province of China (BK20141426), the Qing Lan Project of Jiangsu Province of China (QL00516014) and the Jiangsu Overseas Research and Training Program for University Prominent Young and Middle-aged Teachers and Presidents.

Appendices

Appendix A: Proof of the Property

Assume, to the contrary, that there exist distinct DOA pairs \( (\theta_{k} ,\theta_{l} ) \ne (\theta_{i} ,\theta_{j} ) \) (with \( \theta_{i} \ne \theta_{j} \) and \( \theta_{k} \ne \theta_{l} \)) such that \( {\varvec{\upgamma}}(\theta_{i} ,\theta_{j} ) = {\varvec{\upgamma}}(\theta_{k} ,\theta_{l} ) \), i.e., each element of \( {\varvec{\upgamma}}(\theta_{i} ,\theta_{j} ) \) equals the corresponding element of \( {\varvec{\upgamma}}(\theta_{k} ,\theta_{l} ) \). Under this assumption, we have

$$ {\varvec{\upgamma}}^{(N - 2)} (\theta_{i} ,\theta_{j} ) = {\varvec{\upgamma}}^{(N - 2)} (\theta_{k} ,\theta_{l} ) $$
(52)
$$ {\varvec{\upgamma}}^{(N)} (\theta_{i} ,\theta_{j} ) = {\varvec{\upgamma}}^{(N)} (\theta_{k} ,\theta_{l} ) $$
(53)

Expanding (52) and (53), we obtain

$$ e^{{ - j\frac{{2{\uppi }d_{x} (\sin \theta_{i} - \sin \theta_{j} )}}{\lambda }}} = e^{{ - j\frac{{2{\uppi }d_{x} (\sin \theta_{k} - \sin \theta_{l} )}}{\lambda }}} $$
(54)
$$ e^{{ - j\frac{{2{\uppi }d_{y} (\cos \theta_{i} - \cos \theta_{j} )}}{\lambda }}} = e^{{ - j\frac{{2{\uppi }d_{y} (\cos \theta_{k} - \cos \theta_{l} )}}{\lambda }}} $$
(55)

It follows from (54) and (55) that

$$ \frac{{d_{x} (\sin \theta_{i} - \sin \theta_{j} )}}{\lambda } = \frac{{d_{x} (\sin \theta_{k} - \sin \theta_{l} )}}{\lambda } + n $$
(56)
$$ \frac{{d_{y} (\cos \theta_{i} - \cos \theta_{j} )}}{\lambda } = \frac{{d_{y} (\cos \theta_{k} - \cos \theta_{l} )}}{\lambda } + m $$
(57)
$$ \left( {m,n = 0, \pm 1, \pm 2, \ldots } \right) $$

As \( \theta_{i} \), \( \theta_{j} \), \( \theta_{k} \) and \( \theta_{l} \) all lie in the interval \( ( - \pi /2,\;\pi /2) \), and under the condition that \( d_{x} \) is less than a quarter of the wavelength \( \lambda \) and \( d_{y} \) is less than half of it, each fraction in (56) and (57) has magnitude less than \( 1/2 \) (note that \( \left| {\sin \theta_{i} - \sin \theta_{j} } \right| < 2 \) and \( \left| {\cos \theta_{i} - \cos \theta_{j} } \right| < 1 \) on this interval). It can therefore be seen that

$$ \frac{{d_{x} (\sin \theta_{i} - \sin \theta_{j} )}}{\lambda } - \frac{{d_{x} (\sin \theta_{k} - \sin \theta_{l} )}}{\lambda } = n \in ( - 1,1) $$
(58)
$$ \frac{{d_{y} (\cos \theta_{i} - \cos \theta_{j} )}}{\lambda } - \frac{{d_{y} (\cos \theta_{k} - \cos \theta_{l} )}}{\lambda } = m \in ( - 1,1) $$
(59)

From (58) and (59) we obtain

$$ n = m = 0 $$
(60)

Based on (60), (56) and (57) can be rewritten as

$$ \sin \theta_{i} - \sin \theta_{j} = \sin \theta_{k} - \sin \theta_{l} = p $$
(61)
$$ \cos \theta_{i} - \cos \theta_{j} = \cos \theta_{k} - \cos \theta_{l} = q $$
(62)

Since the sine function is strictly increasing on \( ( - \pi /2,\;\pi /2) \) and \( \theta_{i} \ne \theta_{j} \), \( \sin \theta_{i} - \sin \theta_{j} \) cannot be zero. By contrast, \( \cos \theta_{i} - \cos \theta_{j} \) may be zero, because the cosine function is not monotonic on this interval. Two cases therefore arise for (62).

Case 1

$$ \cos \theta_{i} - \cos \theta_{j} = \cos \theta_{k} - \cos \theta_{l} = q = 0 $$
(63)

Combining (61) with (63), we have

$$ \theta_{i} = \theta_{k} = - \theta_{j} = - \theta_{l} = \arcsin \frac{p}{2} $$
(64)

which contradicts the assumption \( (\theta_{k} ,\theta_{l} ) \ne (\theta_{i} ,\theta_{j} ) \).

Case 2

$$ \cos \theta_{i} - \cos \theta_{j} = \cos \theta_{k} - \cos \theta_{l} = q \ne 0 $$
(65)

From (61) and (62) we have

$$ \tan \frac{{\theta_{i} - \theta_{j} }}{2} = \tan \frac{{\theta_{k} - \theta_{l} }}{2} = - \frac{q}{p} $$
(66)

So

$$ \cos^{2} \frac{{\theta_{i} + \theta_{j} }}{2} = \cos^{2} \frac{{\theta_{k} + \theta_{l} }}{2} = \frac{{q^{2} }}{{p^{2} + q^{2} }} $$
(67)

As \( \theta_{i} \), \( \theta_{j} \), \( \theta_{k} \) and \( \theta_{l} \) all lie in \( ( - \pi /2,\;\pi /2) \), both \( \frac{{\theta_{i} + \theta_{j} }}{2} \) and \( \frac{{\theta_{k} + \theta_{l} }}{2} \) also lie in \( ( - \pi /2,\;\pi /2) \), over which the cosine function is positive. It follows from (67) that

$$ \cos \frac{{\theta_{i} + \theta_{j} }}{2} = \cos \frac{{\theta_{k} + \theta_{l} }}{2} = \frac{\left| q \right|}{{\sqrt {p^{2} + q^{2} } }} $$
(68)

Substituting (68) into (61), we obtain

$$ \sin \frac{{\theta_{i} - \theta_{j} }}{2} = \sin \frac{{\theta_{k} - \theta_{l} }}{2} = \frac{{p\sqrt {p^{2} + q^{2} } }}{2\left| q \right|} $$
(69)

Combining (68) with (69), we have

$$ \theta_{i} = \theta_{k} = \arccos \frac{\left| q \right|}{{\sqrt {p^{2} + q^{2} } }} + \arcsin \frac{{p\sqrt {p^{2} + q^{2} } }}{2\left| q \right|} $$
(70)
$$ \theta_{j} = \theta_{l} = \arccos \frac{\left| q \right|}{{\sqrt {p^{2} + q^{2} } }} - \arcsin \frac{{p\sqrt {p^{2} + q^{2} } }}{2\left| q \right|} $$
(71)

which also contradicts the assumption \( (\theta_{k} ,\theta_{l} ) \ne (\theta_{i} ,\theta_{j} ) \).

From Cases 1 and 2, it can be seen that the assumption \( (\theta_{k} ,\theta_{l} ) \ne (\theta_{i} ,\theta_{j} ) \) (with \( \theta_{i} \ne \theta_{j} \), \( \theta_{k} \ne \theta_{l} \)) cannot hold. Hence there exist no DOA pairs \( (\theta_{k} ,\theta_{l} ) \ne (\theta_{i} ,\theta_{j} ) \) (\( \theta_{i} \ne \theta_{j} \), \( \theta_{k} \ne \theta_{l} \)) such that \( {\varvec{\upgamma}}(\theta_{i} ,\theta_{j} ) = {\varvec{\upgamma}}(\theta_{k} ,\theta_{l} ) \).

In consequence, the proof of the property is complete. The property can also be regarded as a special case of Theorem 2 in Xie et al. (2017), which proves the result through a geometrical interpretation of the vectors associated with \( \theta_{p} \) and \( \theta_{q} \).
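As a numerical cross-check of the property (an added illustration, not part of the original proof), the sketch below uses the standard sum-to-product identities \( \sin \theta_{i} - \sin \theta_{j} = 2\cos \frac{{\theta_{i} + \theta_{j} }}{2}\sin \frac{{\theta_{i} - \theta_{j} }}{2} \) and \( \cos \theta_{i} - \cos \theta_{j} = - 2\sin \frac{{\theta_{i} + \theta_{j} }}{2}\sin \frac{{\theta_{i} - \theta_{j} }}{2} \) to recover an ordered pair \( (\theta_{i} ,\theta_{j} ) \) from \( (p,q) \) alone; the sample size and the degeneracy guard are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(10_000):
    ti, tj = rng.uniform(-np.pi / 2, np.pi / 2, 2)
    if abs(ti - tj) < 1e-6:                  # skip numerically degenerate draws
        continue
    p = np.sin(ti) - np.sin(tj)
    q = np.cos(ti) - np.cos(tj)
    # Sum-to-product: p = 2 cos((ti+tj)/2) sin((ti-tj)/2),
    #                 q = -2 sin((ti+tj)/2) sin((ti-tj)/2).
    half_sum = np.arctan(-q / p)                              # (ti + tj) / 2
    half_diff = np.arcsin(np.sign(p) * np.hypot(p, q) / 2)    # (ti - tj) / 2
    assert np.allclose([ti, tj], [half_sum + half_diff, half_sum - half_diff])
print("(p, q) determines the ordered DOA pair uniquely on all sampled draws")
```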

Appendix B: Proof of the phase-error independence of DOA estimation when K = 1

Combining (15) with (16), we have

$$ {\bar{\mathbf{R}}}_{i} (\theta ) = \sigma^{2} {\varvec{\Psi}}_{i} {\varvec{\upalpha}}(\theta ){\varvec{\upalpha}}^{H} (\theta ){\varvec{\Psi}}_{i}^{H} = \bar{\gamma }_{i} (\theta ){\bar{\mathbf{u}}}_{i} (\theta ){\bar{\mathbf{u}}}_{i}^{H} (\theta ) $$
(72)

Now define a new covariance matrix \( {\bar{\mathbf{R}^{\prime}}}_{i} (\theta ) \) as in (73):

$$ {\bar{\mathbf{R}^{\prime}}}_{i} (\theta ) = {\varvec{\Psi}}_{i}^{H} {\bar{\mathbf{R}}}_{i} (\theta ){\varvec{\Psi}}_{i} $$
(73)

So (73) can be written as

$$ {\bar{\mathbf{R}^{\prime}}}_{i} (\theta ) = {\varvec{\Psi}}_{i}^{H} {\bar{\mathbf{R}}}_{i} (\theta ){\varvec{\Psi}}_{i} = \sigma^{2} {\varvec{\upalpha}}(\theta ){\varvec{\upalpha}}^{H} (\theta ) = \bar{\gamma }_{i} (\theta ){\varvec{\Psi}}_{i}^{H} {\bar{\mathbf{u}}}_{i} (\theta )({\varvec{\Psi}}_{i}^{H} {\bar{\mathbf{u}}}_{i} (\theta ))^{H} $$
(74)

Since \( {\varvec{\Psi}}_{i} \) is a diagonal matrix whose diagonal elements all have unit modulus, it is noted that

$$ {\bar{\mathbf{u}}}_{i}^{H} (\theta ){\bar{\mathbf{u}}}_{i} (\theta ) = {\bar{\mathbf{u}}}_{i}^{H} (\theta ){\varvec{\Psi}}_{i} {\varvec{\Psi}}_{i}^{H} {\bar{\mathbf{u}}}_{i} (\theta ) = 1 $$
(75)

Using (75) and right-multiplying (74) by \( {\varvec{\Psi}}_{i}^{H} {\bar{\mathbf{u}}}_{i} (\theta ) \), we obtain

$$ {\bar{\mathbf{R}^{\prime}}}_{i} (\theta ){\varvec{\Psi}}_{i}^{H} {\bar{\mathbf{u}}}_{i} (\theta ) = \bar{\gamma }_{i} (\theta ){\varvec{\Psi}}_{i}^{H} {\bar{\mathbf{u}}}_{i} (\theta ) $$
(76)

which indicates that \( {\varvec{\Psi}}_{i}^{H} {\bar{\mathbf{u}}}_{i} (\theta ) \) is the eigenvector of \( {\bar{\mathbf{R}^{\prime}}}_{i} (\theta ) \) corresponding to its only non-zero eigenvalue.

As \( {\bar{\mathbf{R}^{\prime}}}_{i} (\theta ) = \sigma^{2} {\varvec{\upalpha}}(\theta ){\varvec{\upalpha}}^{H} (\theta ) \) is independent of the phase errors, its eigenvector \( {\varvec{\Psi}}_{i}^{H} {\bar{\mathbf{u}}}_{i} (\theta ) \) must be independent of the phase errors.

Similarly, \( {\varvec{\Psi}}_{i}^{H} {\bar{\mathbf{u}}}_{i} (\theta + \Delta \theta ) \) must be independent of the phase errors.

So (22) can be rewritten in terms of \( {\varvec{\Psi}}_{i}^{H} {\bar{\mathbf{u}}}_{i} (\theta ) \) as

$$ \begin{aligned} \sin (\theta + \Delta \theta ) - \sin \theta & = \frac{{\lambda \cdot \angle \left\{ {\left( {\frac{{{\bar{\mathbf{u}}}_{i}^{(N - 2)} (\theta )}}{{\xi_{i} (\theta )}}} \right)\Big/\left( {\frac{{{\bar{\mathbf{u}}}_{i}^{(N - 2)} (\theta + \Delta \theta )}}{{\xi_{i} (\theta + \Delta \theta )}}} \right)} \right\}}}{{2{\uppi }d_{x} }} \\ & = \frac{{\lambda \cdot \angle \left\{ {\left( {\frac{{{\bar{\mathbf{u}}}_{i}^{(N - 2)} (\theta )}}{{{\bar{\mathbf{u}}}_{i}^{(N - 1)} (\theta )}}} \right)\Big/\left( {\frac{{{\bar{\mathbf{u}}}_{i}^{(N - 2)} (\theta + \Delta \theta )}}{{{\bar{\mathbf{u}}}_{i}^{(N - 1)} (\theta + \Delta \theta )}}} \right)} \right\}}}{{2{\uppi }d_{x} }} \\ & = \frac{{\lambda \cdot \angle \left\{ {\left( {\frac{{{\varvec{\Psi}}_{i}^{H} {\bar{\mathbf{u}}}_{i}^{(N - 2)} (\theta )}}{{{\varvec{\Psi}}_{i}^{H} {\bar{\mathbf{u}}}_{i}^{(N - 1)} (\theta )}}} \right)\Big/\left( {\frac{{{\varvec{\Psi}}_{i}^{H} {\bar{\mathbf{u}}}_{i}^{(N - 2)} (\theta + \Delta \theta )}}{{{\varvec{\Psi}}_{i}^{H} {\bar{\mathbf{u}}}_{i}^{(N - 1)} (\theta + \Delta \theta )}}} \right)} \right\}}}{{2{\uppi }d_{x} }} \\ \end{aligned} $$
(77)

As \( {\varvec{\Psi}}_{i}^{H} {\bar{\mathbf{u}}}_{i} (\theta ) \) and \( {\varvec{\Psi}}_{i}^{H} {\bar{\mathbf{u}}}_{i} (\theta + \Delta \theta ) \) are both independent of the phase errors, their elements \( {\varvec{\Psi}}_{i}^{H} {\bar{\mathbf{u}}}_{i}^{(N - 1)} (\theta ) \), \( {\varvec{\Psi}}_{i}^{H} {\bar{\mathbf{u}}}_{i}^{(N - 1)} (\theta + \Delta \theta ) \), \( {\varvec{\Psi}}_{i}^{H} {\bar{\mathbf{u}}}_{i}^{(N - 2)} (\theta ) \) and \( {\varvec{\Psi}}_{i}^{H} {\bar{\mathbf{u}}}_{i}^{(N - 2)} (\theta + \Delta \theta ) \) in (77) are all independent of the phase errors.

The independence of (23) can be proved in the same way.

Consequently, (22) and (23) are both independent of the phase errors, so the DOAs estimated from them must also be independent of the phase errors.
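A quick numerical confirmation of this argument is sketched below (an added illustration; the uniform-linear sub-array response used for \( {\varvec{\upalpha}}(\theta ) \), the element spacing and the incidence angle are assumptions). The corrected vector \( {\varvec{\Psi}}_{i}^{H} {\bar{\mathbf{u}}}_{i} (\theta ) \) computed from two independent random phase-error realizations agrees up to a unit-modulus scalar, i.e. it is proportional to \( {\varvec{\upalpha}}(\theta ) \) and hence independent of the phase errors.

```python
import numpy as np

N, lam, d, theta, sigma2 = 6, 1.0, 0.25, np.deg2rad(12.0), 1.5
alpha = np.exp(-2j * np.pi * d / lam * np.arange(N) * np.sin(theta))   # assumed sub-array response

def corrected_eigvec(seed):
    rng = np.random.default_rng(seed)
    Psi = np.diag(np.exp(1j * rng.uniform(-np.pi, np.pi, N)))          # random phase errors
    R_bar = sigma2 * Psi @ np.outer(alpha, alpha.conj()) @ Psi.conj().T   # cf. (72)
    _, vecs = np.linalg.eigh(R_bar)
    u_bar = vecs[:, -1]                       # eigenvector of the only non-zero eigenvalue
    return Psi.conj().T @ u_bar               # Psi^H u_bar, cf. (74)-(76)

v1, v2 = corrected_eigvec(1), corrected_eigvec(2)
ratio = v1 / v2                               # should be a constant unit-modulus scalar
print(np.allclose(ratio, ratio[0]), np.isclose(abs(ratio[0]), 1.0))    # True True
```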

Appendix C: Realization of joint diagonalization

Just as the Jacobi technique can be used to compute the eigendecomposition of a single matrix, it can be extended to the joint diagonalization of a set of normal matrices. As in the single-matrix case, the joint diagonalization is carried out by successive Givens rotations, each of which maximizes criterion (47).

Considering a rotation indexed by (\( \varepsilon ,\eta \)), the Givens rotation matrix \( {\mathbf{g}}(\varepsilon ,\eta ,\vartheta ) \) can be expressed as

$$ {\mathbf{g}}(\varepsilon ,\eta ,\vartheta ) = \left[ {\begin{array}{*{20}c} 1 & \cdots & 0 & \cdots & 0 & \cdots & 0 \\ \vdots & \ddots & \vdots & {} & \vdots & {} & \vdots \\ 0 & \cdots & {c^{ * } } & \cdots & {s^{ * } } & \cdots & 0 \\ \vdots & {} & \vdots & \ddots & \vdots & {} & \vdots \\ 0 & \cdots & { - s} & \cdots & c & \cdots & 0 \\ \vdots & {} & \vdots & {} & \vdots & \ddots & \vdots \\ 0 & \cdots & 0 & \cdots & 0 & \cdots & 1 \\ \end{array} } \right] $$
(78)

where the entries \( c^{ * } \), \( s^{ * } \), \( - s \) and \( c \) occupy positions \( (\varepsilon ,\varepsilon ) \), \( (\varepsilon ,\eta ) \), \( (\eta ,\varepsilon ) \) and \( (\eta ,\eta ) \), respectively, with \( \varepsilon = 1,2, \ldots ,K(K - 1) - 1 \) and \( \eta = \varepsilon + 1,\varepsilon + 2, \ldots ,K(K - 1) \).

As \( {\tilde{\mathbf{R}^{\prime}}}_{YX} ({\varvec{\uptheta}}) \) and \( {\tilde{\mathbf{R}^{\prime}}}_{ZX} ({\varvec{\uptheta}}) \) are real symmetric matrices, their eigenvector matrices must be real-valued, so the real-valued \( c \) and \( s \) can be parameterized by a single angle \( \vartheta \) as

$$ c = \cos \vartheta \quad s = \sin \vartheta $$
(79)

Using the structure of the Givens rotation matrix, problem (48) can be transformed, at each rotation, into

$$ \mathop {\hbox{max} }\limits_{{\mathbf{g}}} \;\sum\limits_{i = Y,Z} {\left\| {{\text{diag}}\left( {{\tilde{\boldsymbol{\Gamma }}}_{iX} ({\varvec{\uptheta}})} \right)} \right\|_{2}^{2} } ,\quad {\text{where}}\;{\tilde{\boldsymbol{\Gamma }}}_{iX} ({\varvec{\uptheta}}) = {\mathbf{g}}(\varepsilon ,\eta ,\vartheta ){\tilde{\mathbf{R}^{\prime}}}_{iX} ({\varvec{\uptheta}}){\mathbf{g}}^{H} (\varepsilon ,\eta ,\vartheta ) $$
(80)

Expanding (80), we can obtain

$$ \mathop {\hbox{max} }\limits_{\vartheta } \;\sum\limits_{i = Y,Z} {[({\tilde{\boldsymbol{\Gamma }}}_{iX}^{(\varepsilon ,\varepsilon )} ({\varvec{\uptheta}}))^{2} + ({\tilde{\boldsymbol{\Gamma }}}_{iX}^{(\eta ,\eta )} ({\varvec{\uptheta}}))^{2} ]} $$
(81)

Noticing that

$$ \begin{aligned} & 2\left[ {\left( {{\tilde{\boldsymbol{\Gamma }}}_{iX}^{(\varepsilon ,\varepsilon )} ({\varvec{\uptheta}})} \right)^{2} + \left( {{\tilde{\boldsymbol{\Gamma }}}_{iX}^{(\eta ,\eta )} ({\varvec{\uptheta}})} \right)^{2} } \right] \\ & \quad = \left( {{\tilde{\boldsymbol{\Gamma }}}_{iX}^{(\varepsilon ,\varepsilon )} ({\varvec{\uptheta}}) + {\tilde{\boldsymbol{\Gamma }}}_{iX}^{(\eta ,\eta )} ({\varvec{\uptheta}})} \right)^{2} + \left( {{\tilde{\boldsymbol{\Gamma }}}_{iX}^{(\varepsilon ,\varepsilon )} ({\varvec{\uptheta}}) - {\tilde{\boldsymbol{\Gamma }}}_{iX}^{(\eta ,\eta )} ({\varvec{\uptheta}})} \right)^{2} \\ \end{aligned} $$
(82)

and that the trace \( {\tilde{\boldsymbol{\Gamma }}}_{iX}^{(\varepsilon ,\varepsilon )} ({\varvec{\uptheta}}) + {\tilde{\boldsymbol{\Gamma }}}_{iX}^{(\eta ,\eta )} ({\varvec{\uptheta}}) \) is invariant under a unitary transformation, maximizing criterion (81) at each Givens step is equivalent to

$$ \mathop {{\text{max} }}\limits_{\vartheta } \,\sum\limits_{i = Y,Z} {\left({\tilde{\boldsymbol{\Gamma }}}_{iX}^{(\varepsilon ,\varepsilon )} ({\varvec{\uptheta}}) - {\tilde{\boldsymbol{\Gamma }}}_{iX}^{(\eta ,\eta )} ({\varvec{\uptheta}})\right)^{2} } $$
(83)

It can be verified that

$$ \begin{aligned} & {\tilde{\boldsymbol{\Gamma }}}_{iX}^{(\varepsilon ,\varepsilon )} ({\varvec{\uptheta}}) - {\tilde{\boldsymbol{\Gamma }}}_{iX}^{(\eta ,\eta )} ({\varvec{\uptheta}}) \\ & \quad = ({\tilde{\mathbf{R}^{\prime}}}_{iX}^{(\varepsilon ,\varepsilon )} ({\varvec{\uptheta}}) - {\tilde{\mathbf{R}^{\prime}}}_{iX}^{(\eta ,\eta )} ({\varvec{\uptheta}}))(c^{2} - s^{2} ) - ({\tilde{\mathbf{R}^{\prime}}}_{iX}^{(\varepsilon ,\eta )} ({\varvec{\uptheta}}) + {\tilde{\mathbf{R}^{\prime}}}_{iX}^{(\eta ,\varepsilon )} ({\varvec{\uptheta}}))2cs \\ & \quad = ({\tilde{\mathbf{R}^{\prime}}}_{iX}^{(\varepsilon ,\varepsilon )} ({\varvec{\uptheta}}) - {\tilde{\mathbf{R}^{\prime}}}_{iX}^{(\eta ,\eta )} ({\varvec{\uptheta}}))\cos 2\vartheta - ({\tilde{\mathbf{R}^{\prime}}}_{iX}^{(\varepsilon ,\eta )} ({\varvec{\uptheta}}) + {\tilde{\mathbf{R}^{\prime}}}_{iX}^{(\eta ,\varepsilon )} ({\varvec{\uptheta}}))\sin 2\vartheta \\ & \quad = \left[ {\begin{array}{*{20}c} { - ({\tilde{\mathbf{R}^{\prime}}}_{iX}^{(\varepsilon ,\eta )} ({\varvec{\uptheta}}) + {\tilde{\mathbf{R}^{\prime}}}_{iX}^{(\eta ,\varepsilon )} ({\varvec{\uptheta}}))} & {({\tilde{\mathbf{R}^{\prime}}}_{iX}^{(\varepsilon ,\varepsilon )} ({\varvec{\uptheta}}) - {\tilde{\mathbf{R}^{\prime}}}_{iX}^{(\eta ,\eta )} ({\varvec{\uptheta}}))} \\ \end{array} } \right]\,\left[ {\begin{array}{*{20}c} {\sin 2\vartheta } \\ {\cos 2\vartheta } \\ \end{array} } \right] \\ & \quad = {\varvec{\upalpha}}_{iX}^{T} (\varepsilon ,\eta ){\varvec{\upbeta}}(\vartheta ) \\ \end{aligned} $$
(84)

where

$$ \begin{aligned} {\varvec{\upalpha}}_{iX} (\varepsilon ,\eta ) & = \left[ {\begin{array}{*{20}c} { - ({\tilde{\mathbf{R}^{\prime}}}_{iX}^{(\varepsilon ,\eta )} ({\varvec{\uptheta}}) + {\tilde{\mathbf{R}^{\prime}}}_{iX}^{(\eta ,\varepsilon )} ({\varvec{\uptheta}}))} & {({\tilde{\mathbf{R}^{\prime}}}_{iX}^{(\varepsilon ,\varepsilon )} ({\varvec{\uptheta}}) - {\tilde{\mathbf{R}^{\prime}}}_{iX}^{(\eta ,\eta )} ({\varvec{\uptheta}}))} \\ \end{array} } \right]^{T} \\ {\varvec{\upbeta}}(\vartheta ) & = \left[ {\begin{array}{*{20}c} {\sin 2\vartheta } \\ {\cos 2\vartheta } \\ \end{array} } \right] \\ \end{aligned} $$

So (83) can be rewritten as

$$ \begin{aligned} & \mathop {\hbox{max} }\limits_{\vartheta } \;\sum\limits_{i = Y,Z} {{\varvec{\upbeta}}^{T} (\vartheta ){\varvec{\upalpha}}_{iX} (\varepsilon ,\eta ){\varvec{\upalpha}}_{iX}^{T} (\varepsilon ,\eta ){\varvec{\upbeta}}(\vartheta )} \\ & \quad = \mathop {\hbox{max} }\limits_{\vartheta } {\varvec{\upbeta}}^{T} (\vartheta ){\mathbf{G}}(\varepsilon ,\eta ){\mathbf{G}}^{T} (\varepsilon ,\eta ){\varvec{\upbeta}}(\vartheta ) \\ & \quad = \mathop {\hbox{max} }\limits_{\vartheta } \frac{{{\varvec{\upbeta}}^{T} (\vartheta ){\mathbf{G}}(\varepsilon ,\eta ){\mathbf{G}}^{T} (\varepsilon ,\eta ){\varvec{\upbeta}}(\vartheta )}}{{{\varvec{\upbeta}}^{T} (\vartheta ){\varvec{\upbeta}}(\vartheta )}} \\ \end{aligned} $$
(85)

where \( {\mathbf{G}}(\varepsilon ,\eta ){\mathbf{G}}^{T} (\varepsilon ,\eta ) = \sum\nolimits_{i = Y,Z} {{\varvec{\upalpha}}_{iX} (\varepsilon ,\eta ){\varvec{\upalpha}}_{iX}^{T} (\varepsilon ,\eta )} \).

Maximizing a quadratic form under a unit-norm constraint on its argument is classically achieved by taking \( {\varvec{\upbeta}}(\vartheta ) \) to be the eigenvector of \( {\mathbf{G}}(\varepsilon ,\eta ){\mathbf{G}}^{T} (\varepsilon ,\eta ) \) associated with the largest eigenvalue. Once \( {\varvec{\upbeta}}(\vartheta ) \) is obtained, the Givens rotation matrix \( {\mathbf{g}}(\varepsilon ,\eta ,\vartheta ) \) follows from (78) and (79).

The joint diagonalization of a set of normal matrices is thus achieved by successive Givens rotations; in other words, it is carried out by the product of the successive Givens rotation matrices. This product can be regarded as the joint diagonalizer, and once the iterations have converged, \( {\tilde{\boldsymbol{\Gamma }}}_{YX} ({\varvec{\uptheta}}) \) and \( {\tilde{\boldsymbol{\Gamma }}}_{ZX} ({\varvec{\uptheta}}) \) can be taken as \( {\tilde{\boldsymbol{\Phi}}^{\prime}}({\varvec{\uptheta}}) \) and \( {\tilde{\boldsymbol{\Omega}}^{\prime}}({\varvec{\uptheta}}) \), respectively.
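To make the procedure concrete, a minimal NumPy sketch of this Jacobi-style joint diagonalization is given below; the function name, sweep limit, tolerance and the closing test case are illustrative assumptions rather than part of the paper. At each index pair \( (\varepsilon ,\eta ) \) the rotation angle is obtained from the principal eigenvector of \( {\mathbf{G}}(\varepsilon ,\eta ){\mathbf{G}}^{T} (\varepsilon ,\eta ) \), following (84)–(85); the 2 × 2 Givens block is taken as \( [[c, - s],[s,c]] \) so that its sign convention matches (84).

```python
import numpy as np

def joint_diagonalize(matrices, sweeps=100, tol=1e-10):
    """Jacobi-style orthogonal joint diagonalization of real symmetric matrices."""
    R = [np.array(M, dtype=float) for M in matrices]
    K = R[0].shape[0]
    V = np.eye(K)                              # accumulated joint diagonalizer
    for _ in range(sweeps):
        converged = True
        for eps in range(K - 1):
            for eta in range(eps + 1, K):
                # alpha_i(eps, eta) for every matrix, as in (84)
                A = np.array([[-(M[eps, eta] + M[eta, eps]),
                               M[eps, eps] - M[eta, eta]] for M in R])
                # beta = [sin 2v, cos 2v]: principal eigenvector of sum_i alpha_i alpha_i^T
                _, U = np.linalg.eigh(A.T @ A)
                beta = U[:, -1]
                if beta[1] < 0:                # keep cos 2v >= 0 so the rotation stays small
                    beta = -beta
                c = np.sqrt((1.0 + beta[1]) / 2.0)
                s = beta[0] / (2.0 * c)
                if abs(s) > tol:
                    converged = False
                    g = np.eye(K)              # Givens rotation, sign convention of (84)
                    g[eps, eps] = c; g[eps, eta] = -s
                    g[eta, eps] = s; g[eta, eta] = c
                    R = [g @ M @ g.T for M in R]
                    V = g @ V
        if converged:
            break
    return V, R

# Quick check on two matrices that share an (unknown) orthogonal eigenbasis.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
R1 = Q @ np.diag(rng.standard_normal(4)) @ Q.T
R2 = Q @ np.diag(rng.standard_normal(4)) @ Q.T
V, (D1, D2) = joint_diagonalize([R1, R2])
print(np.round(D1, 8))                         # close to diagonal
print(np.round(D2, 8))
```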

About this article

Cite this article

Hu, W., Xu, G. DOA estimation with double L-shaped array based on Hadamard product and joint diagonalization in the presence of sensor gain-phase errors. Multidim Syst Sign Process 30, 465–491 (2019). https://doi.org/10.1007/s11045-018-0565-5
