
Random Amplitude Sinusoidal and Chirp Model


Abstract

In this monograph, we have considered the sinusoidal frequency model and many of its variants in one and higher dimensions. In all the models considered so far, the amplitudes are assumed to be unknown constants. In this chapter, we allow the amplitudes to be random or some deterministic function of the index. Such random amplitude sinusoidal and chirp models generalize most of the models considered previously, and a different set of assumptions is required. We discuss estimation procedures for the unknown parameters of the different models and discuss their theoretical properties. Several open problems are also indicated.


References

  1. Besson, O. (1995). Improved detection of a random amplitude sinusoid by constrained least-squares technique. Signal Processing, 45(3), 347–356.

  2. Besson, O., & Castanie, F. (1993). On estimating the frequency of a sinusoid in autoregressive multiplicative noise. Signal Processing, 30(1), 65–83.

  3. Besson, O., Ghogho, M., & Swami, A. (1999). Parameter estimation for random amplitude chirp signals. IEEE Transactions on Signal Processing, 47(12), 3208–3219.

  4. Besson, O., & Stoica, P. (1995). Sinusoidal signals with random amplitude: Least-squares estimation and their statistical analysis. IEEE Transactions on Signal Processing, 43, 2733–2744.

  5. Besson, O., & Stoica, P. (1999). Nonlinear least-squares approach to frequency estimation and detection for sinusoidal signals with arbitrary envelope. Digital Signal Processing, 9(1), 45–56.

  6. Ciblat, P., Ghogho, M., Forster, P., & Larzabal, P. (2005). Harmonic retrieval in the presence of non-circular Gaussian multiplicative noise: Performance bounds. Signal Processing, 85, 737–749.

  7. Fourt, O., & Benidir, M. (2009). Parameter estimation for polynomial phase signals with a fast and robust algorithm. In 17th European Signal Processing Conference (EUSIPCO 2009) (pp. 1027–1031).

  8. Francos, J. M., & Friedlander, B. (1995). Bounds for estimation of multicomponent signals with random amplitudes and deterministic phase. IEEE Transactions on Signal Processing, 43, 1161–1172.

  9. Gabor, D. (1946). Theory of communication. Part 1: The analysis of information. Journal of the Institution of Electrical Engineers - Part III: Radio and Communication Engineering, 93, 429–441.

  10. Ghogho, M., Nandi, A. K., & Swami, A. (1999). Cramér-Rao bounds and maximum likelihood estimation for random amplitude phase-modulated signals. IEEE Transactions on Signal Processing, 47(11), 2905–2916.

  11. Grover, R., Kundu, D., & Mitra, A. (2018). On approximate least squares estimators of parameters of one-dimensional chirp signal. Statistics, 52, 1060–1085.

  12. Kundu, D., & Nandi, S. (2012). Statistical Signal Processing: Frequency Estimation. New Delhi: Springer.

  13. Lahiri, A., Kundu, D., & Mitra, A. (2015). Estimating the parameters of multiple chirp signals. Journal of Multivariate Analysis, 139, 189–205.

  14. Morelande, M. R., & Zoubir, A. M. (2002). Model selection of random amplitude polynomial phase signals. IEEE Transactions on Signal Processing, 50(3), 578–589.

  15. Nandi, S., & Kundu, D. (2004). Asymptotic properties of the least squares estimators of the parameters of the chirp signals. Annals of the Institute of Statistical Mathematics, 56, 529–544.

  16. Nandi, S., & Kundu, D. (2020). Estimation of parameters in random amplitude chirp signal. Signal Processing, 168, Art. 107328.

  17. Stoica, P., & Moses, R. (2005). Spectral analysis of signals. Upper Saddle River: Prentice Hall.

  18. Zhou, G., & Giannakis, G. B. (1994). On estimating random amplitude-modulated harmonics using higher order spectra. IEEE Journal of Oceanic Engineering, 19(4), 529–539.


Appendices

Appendix A

In this appendix, we prove the consistency of \(\widehat{{\varvec{\theta}}}\), defined in Sect. 10.2. Write \(z(t) = y^2(t) = z_R(t) + i z_I(t)\); then

$$\begin{aligned} z_R(t) &= \alpha^2(t)\cos(2(\theta_1^0 t + \theta_2^0 t^2)) + (e_R^2(t) - e_I^2(t)) \\ &\quad + 2\alpha(t) e_R(t)\cos(\theta_1^0 t + \theta_2^0 t^2) - 2\alpha(t) e_I(t)\sin(\theta_1^0 t + \theta_2^0 t^2), \end{aligned}$$
(10.14)
$$\begin{aligned} z_I(t) &= \alpha^2(t)\sin(2(\theta_1^0 t + \theta_2^0 t^2)) + 2\alpha(t) e_I(t)\cos(\theta_1^0 t + \theta_2^0 t^2) \\ &\quad + 2\alpha(t) e_R(t)\sin(\theta_1^0 t + \theta_2^0 t^2) + 2 e_R(t) e_I(t). \end{aligned}$$
(10.15)
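The decomposition (10.14)–(10.15) can be verified numerically. The following Python sketch simulates one plausible instance of the random amplitude chirp model \(y(t) = \alpha(t) e^{i(\theta_1^0 t + \theta_2^0 t^2)} + e(t)\); the Gaussian choices for \(\alpha(t)\) and \(e(t)\) are illustrative assumptions, since model (10.6) and its exact moment conditions are stated in Sect. 10.2, not reproduced here.

```python
import numpy as np

# Illustrative simulation of y(t) = alpha(t) exp(i(theta1 t + theta2 t^2)) + e(t)
# with i.i.d. amplitudes alpha(t) and complex noise e(t) = e_R(t) + i e_I(t),
# where e_R, e_I are i.i.d. with mean 0 and variance sigma^2/2 each.
rng = np.random.default_rng(0)
n = 500
theta1, theta2 = 1.2, 0.3              # true (theta_1^0, theta_2^0) in (0, pi)^2
mu_a, sig_a, sigma = 2.0, 0.5, 1.0     # illustrative amplitude/noise parameters

t = np.arange(1, n + 1)
phi = theta1 * t + theta2 * t**2
alpha = mu_a + sig_a * rng.standard_normal(n)
e_R = np.sqrt(sigma**2 / 2) * rng.standard_normal(n)
e_I = np.sqrt(sigma**2 / 2) * rng.standard_normal(n)
y = alpha * np.exp(1j * phi) + (e_R + 1j * e_I)

# z(t) = y(t)^2 versus the right-hand sides of (10.14) and (10.15)
z = y**2
z_R = (alpha**2 * np.cos(2 * phi) + (e_R**2 - e_I**2)
       + 2 * alpha * e_R * np.cos(phi) - 2 * alpha * e_I * np.sin(phi))
z_I = (alpha**2 * np.sin(2 * phi) + 2 * alpha * e_I * np.cos(phi)
       + 2 * alpha * e_R * np.sin(phi) + 2 * e_R * e_I)

print(np.max(np.abs(z.real - z_R)), np.max(np.abs(z.imag - z_I)))  # both ~ 0
```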

The following lemmas will be required to prove the result.

Lemma 10.1

(Lahiri, Kundu and Mitra [13]) If \((\theta _1, \theta _2) \in (0,\pi )\times (0,\pi )\), then except for a countable number of points, the following results are true.

$$\begin{aligned} &\lim_{n\rightarrow\infty} \frac{1}{n} \sum_{t=1}^n \cos(\theta_1 t + \theta_2 t^2) = \lim_{n\rightarrow\infty} \frac{1}{n} \sum_{t=1}^n \sin(\theta_1 t + \theta_2 t^2) = 0, \\ &\lim_{n\rightarrow\infty} \frac{1}{n^{k+1}} \sum_{t=1}^n t^k \cos^2(\theta_1 t + \theta_2 t^2) = \lim_{n\rightarrow\infty} \frac{1}{n^{k+1}} \sum_{t=1}^n t^k \sin^2(\theta_1 t + \theta_2 t^2) = \frac{1}{2(k+1)}, \\ &\lim_{n\rightarrow\infty} \frac{1}{n^{k+1}} \sum_{t=1}^n t^k \cos(\theta_1 t + \theta_2 t^2)\sin(\theta_1 t + \theta_2 t^2) = 0, \qquad k=0,1,2. \end{aligned}$$
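These limits are easy to check numerically; a minimal sketch (the point \((\theta_1, \theta_2) = (1.2, 0.3)\) is an arbitrary choice in \((0,\pi)^2\)):

```python
import numpy as np

# Numerical check of Lemma 10.1: weighted averages of cos^2 and sin^2 of the
# chirp phase tend to 1/(2(k+1)); the plain and cross terms average out to 0.
n = 50_000
theta1, theta2 = 1.2, 0.3
t = np.arange(1, n + 1, dtype=float)
phi = theta1 * t + theta2 * t**2

print(np.sum(np.cos(phi)) / n, np.sum(np.sin(phi)) / n)   # both ~ 0
for k in (0, 1, 2):
    w = t**k / n**(k + 1)
    print(k,
          np.sum(w * np.cos(phi)**2),             # ~ 1/(2(k+1))
          np.sum(w * np.cos(phi) * np.sin(phi)))  # ~ 0
```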

Lemma 10.2

Let \(\widehat{{\varvec{\theta}}} = (\widehat{\theta}_1, \widehat{\theta}_2)\) be an estimate of \({{\varvec{\theta}}}^0 = (\theta_1^0, \theta_2^0)\) that maximizes \(Q({{\varvec{\theta}}})\) defined in (10.8), and for any \(\varepsilon > 0\), let \(S_\varepsilon = \bigl\lbrace {{\varvec{\theta}}} : |{{\varvec{\theta}}} - {{\varvec{\theta}}}^0| > \varepsilon \bigr\rbrace\) for some fixed \({{\varvec{\theta}}}^0 \in (0,\pi)\times(0,\pi)\). If, for any \(\varepsilon > 0\),

$$\begin{aligned} \overline{\lim }_{n\rightarrow \infty } \sup _{S_\varepsilon } \frac{1}{n} \bigl [ Q({{\varvec{\theta }}}) - Q({{\varvec{\theta }}}^0) \bigr ] < 0, \;\; \text{ a.s. } \end{aligned}$$
(10.16)

then as \(n\rightarrow \infty \), \(\widehat{{\varvec{\theta }}} \rightarrow {{\varvec{\theta }}}^0\) a.s., that is, \(\widehat{\theta }_1 \rightarrow \theta _1^0\) and \(\widehat{\theta }_2 \rightarrow \theta _2^0\) a.s.

Proof of Lemma 10.2 We write \(\widehat{{\varvec{\theta}}}\) as \(\widehat{{\varvec{\theta}}}_n\) and \(Q({{\varvec{\theta}}})\) as \(Q_n({{\varvec{\theta}}})\) to emphasize that these quantities depend on n. Suppose (10.16) is true but \(\widehat{{\varvec{\theta}}}_n\) does not converge to \({{\varvec{\theta}}}^0\) as \(n\rightarrow\infty\). Then there exist an \(\varepsilon > 0\) and a subsequence \(\{n_k\}\) of \(\{n\}\) such that \(|\widehat{{\varvec{\theta}}}_{n_k} - {{\varvec{\theta}}}^0| > \varepsilon\) for \(k=1,2,\ldots\), that is, \(\widehat{{\varvec{\theta}}}_{n_k} \in S_\varepsilon\) for all k. By definition, \(\widehat{{\varvec{\theta}}}_{n_k}\) maximizes \(Q_{n_k}({{\varvec{\theta}}})\) when \(n = n_k\). This implies that

$$\begin{aligned} Q_{n_k}(\widehat{{\varvec{\theta}}}_{n_k}) \ge Q_{n_k}({{\varvec{\theta}}}^0) \;\Rightarrow\; \frac{1}{n_k} \Bigl[ Q_{n_k}(\widehat{{\varvec{\theta}}}_{n_k}) - Q_{n_k}({{\varvec{\theta}}}^0) \Bigr] \ge 0. \end{aligned}$$

Therefore, \(\displaystyle \overline{\lim}_{k\rightarrow\infty} \sup_{S_\varepsilon} \frac{1}{n_k} \bigl[ Q_{n_k}(\widehat{{\varvec{\theta}}}_{n_k}) - Q_{n_k}({{\varvec{\theta}}}^0) \bigr] \ge 0\), which contradicts inequality (10.16). Hence, the result follows. \(\blacksquare\)

Lemma 10.3

(Nandi and Kundu [15]) Let \(\{e(t)\}\) be a sequence of i.i.d. real-valued random variables with mean zero and finite variance \(\sigma^2 > 0\). Then, as \(n \rightarrow \infty\),

$$ \sup _{a,b} \left| \frac{1}{n} \sum _{t=1}^n e(t) \cos (a t) \cos (b t^2) \right| {\mathop {\longrightarrow }\limits ^{a.s.}} 0. $$

The result is true for all combinations of sine and cosine functions.
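A rough numerical illustration of Lemma 10.3 (a finite grid over \((a, b)\) standing in for the supremum):

```python
import numpy as np

# The grid maximum of |n^{-1} sum_t e(t) cos(a t) cos(b t^2)| shrinks as n grows.
rng = np.random.default_rng(1)
a = np.linspace(0.01, np.pi - 0.01, 60)
b = np.linspace(0.01, np.pi - 0.01, 60)

for n in (200, 2000, 20000):
    t = np.arange(1, n + 1, dtype=float)
    e = rng.standard_normal(n)       # i.i.d., mean 0, variance 1
    C1 = np.cos(np.outer(a, t))      # rows: cos(a_i t)
    C2 = np.cos(np.outer(b, t * t))  # rows: cos(b_j t^2)
    S = (C1 * e) @ C2.T / n          # S[i, j] = (1/n) sum_t e(t) cos(a_i t) cos(b_j t^2)
    print(n, np.max(np.abs(S)))      # decreases roughly like n^{-1/2}
```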

Lemma 10.4

(Grover, Kundu, and Mitra [11]) If \((\beta _1, \beta _2) \in (0, \pi ) \times (0, \pi )\), then except for a countable number of points, the following results hold.

$$\begin{aligned} \lim _{n\rightarrow \infty } \frac{1}{n^{\frac{2m+1}{2}}} \sum _{t=1}^n t^m \cos (\beta _1 t + \beta _2 t^2) = \lim _{n\rightarrow \infty } \frac{1}{n^{\frac{2m+1}{2}}} \sum _{t=1}^n t^m \sin (\beta _1 t + \beta _2 t^2) = 0, \;\; m=0,1,2. \end{aligned}$$

Lemma 10.5

Under the same set-up as in Sect. 10.2, the following results are true for model (10.6).

$$\begin{aligned} \frac{1}{n^{m+1}} \sum_{t=1}^n t^m z_R(t) \cos(2\theta_1^0 t + 2\theta_2^0 t^2) \mathop{\longrightarrow}\limits^{a.s.} \frac{1}{2(m+1)} (\sigma_\alpha^2 + \mu_\alpha^2), \end{aligned}$$
(10.17)
$$\begin{aligned} \frac{1}{n^{m+1}} \sum_{t=1}^n t^m z_I(t) \cos(2\theta_1^0 t + 2\theta_2^0 t^2) \mathop{\longrightarrow}\limits^{a.s.} 0, \end{aligned}$$
(10.18)
$$\begin{aligned} \frac{1}{n^{m+1}} \sum_{t=1}^n t^m z_R(t) \sin(2\theta_1^0 t + 2\theta_2^0 t^2) \mathop{\longrightarrow}\limits^{a.s.} 0, \end{aligned}$$
(10.19)
$$\begin{aligned} \frac{1}{n^{m+1}} \sum_{t=1}^n t^m z_I(t) \sin(2\theta_1^0 t + 2\theta_2^0 t^2) \mathop{\longrightarrow}\limits^{a.s.} \frac{1}{2(m+1)} (\sigma_\alpha^2 + \mu_\alpha^2), \end{aligned}$$
(10.20)

for \(m=0,1,\ldots ,4\).

Proof of Lemma 10.5: Note that \(E[e_R(t) e_I(t)] = 0\) and \(\hbox{Var}[e_R(t) e_I(t)] = \frac{\sigma^4}{4}\) because \(e_R(t)\) and \(e_I(t)\) are independent, each with mean 0, variance \(\frac{\sigma^2}{2}\), and fourth moment \(\gamma\). Therefore, \(\{e_R(t) e_I(t)\} \mathop{\sim}\limits^{i.i.d.} (0, \frac{\sigma^4}{4})\). Similarly, we can show that

$$\begin{aligned} &\{e_R^2(t) - e_I^2(t)\} \mathop{\sim}\limits^{i.i.d.} \Bigl(0,\; 2\gamma - \frac{\sigma^4}{2}\Bigr), \qquad \{\alpha(t) e_R(t)\} \mathop{\sim}\limits^{i.i.d.} \Bigl(0,\; (\sigma_\alpha^2 + \mu_\alpha^2)\frac{\sigma^2}{2}\Bigr), \\ &\{\alpha(t) e_I(t)\} \mathop{\sim}\limits^{i.i.d.} \Bigl(0,\; (\sigma_\alpha^2 + \mu_\alpha^2)\frac{\sigma^2}{2}\Bigr). \end{aligned}$$
(10.21)

These results follow from the assumptions that \(\{\alpha(t)\}\) is i.i.d. with mean \(\mu_\alpha\) and variance \(\sigma_\alpha^2\), and that \(\alpha(t)\) and \(e(t)\) are independently distributed. Now, consider

$$\begin{aligned} &\frac{1}{n^{m+1}} \sum_{t=1}^n t^m z_R(t) \cos(2\theta_1^0 t + 2\theta_2^0 t^2) \\ &= \frac{1}{n^{m+1}} \sum_{t=1}^n t^m \alpha^2(t)\cos^2(2\theta_1^0 t + 2\theta_2^0 t^2) + \frac{1}{n^{m+1}} \sum_{t=1}^n t^m (e_R^2(t) - e_I^2(t)) \cos(2\theta_1^0 t + 2\theta_2^0 t^2) \\ &\quad + \frac{2}{n^{m+1}} \sum_{t=1}^n t^m \alpha(t) e_R(t) \cos(\theta_1^0 t + \theta_2^0 t^2)\cos(2\theta_1^0 t + 2\theta_2^0 t^2) \\ &\quad - \frac{2}{n^{m+1}} \sum_{t=1}^n t^m \alpha(t) e_I(t) \sin(\theta_1^0 t + \theta_2^0 t^2)\cos(2\theta_1^0 t + 2\theta_2^0 t^2). \end{aligned}$$

The second term converges to zero a.s. as \(n\rightarrow\infty\) by Lemma 10.3, since \(\{e_R^2(t) - e_I^2(t)\}\) is an i.i.d. sequence with mean zero and finite variance. Similarly, the third and fourth terms converge to zero a.s. as \(n\rightarrow\infty\) using (10.21). The first term can be written as

$$\begin{aligned} \frac{1}{n^{m+1}} \sum_{t=1}^n t^m \alpha^2(t) \cos^2(2\theta_1^0 t + 2\theta_2^0 t^2) &= \frac{1}{n^{m+1}} \sum_{t=1}^n t^m \{\alpha^2(t) - E[\alpha^2(t)]\} \cos^2(2\theta_1^0 t + 2\theta_2^0 t^2) \\ &\quad + \frac{1}{n^{m+1}} \sum_{t=1}^n t^m E[\alpha^2(t)] \cos^2(2\theta_1^0 t + 2\theta_2^0 t^2) \\ &\mathop{\longrightarrow}\limits^{a.s.} 0 + \frac{1}{2(m+1)} E[\alpha^2(t)] = \frac{1}{2(m+1)} (\sigma_\alpha^2 + \mu_\alpha^2), \end{aligned}$$

using Lemmas 10.1 and 10.3; here we have used the fact that the fourth moment of \(\alpha(t)\) exists, so that \(\{\alpha^2(t) - E[\alpha^2(t)]\}\) has finite variance. In a similar way, (10.18), (10.19), and (10.20) can be proved. \(\blacksquare\)
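The limits in Lemma 10.5 can also be checked by simulation. A Monte Carlo sketch of (10.17) for \(m=0\), again with illustrative Gaussian choices for \(\alpha(t)\) and \(e(t)\) (the lemma itself only requires the moment assumptions of Sect. 10.2):

```python
import numpy as np

# Check: (1/n) sum_t z_R(t) cos(2 theta1 t + 2 theta2 t^2) -> (sig_a^2 + mu_a^2)/2.
rng = np.random.default_rng(2)
n = 100_000
theta1, theta2 = 1.2, 0.3
mu_a, sig_a, sigma = 2.0, 0.5, 1.0

t = np.arange(1, n + 1, dtype=float)
phi = theta1 * t + theta2 * t**2
alpha = mu_a + sig_a * rng.standard_normal(n)
e = np.sqrt(sigma**2 / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
z = (alpha * np.exp(1j * phi) + e)**2

print(np.mean(z.real * np.cos(2 * phi)),   # ~ 2.125 for these parameters
      (sig_a**2 + mu_a**2) / 2)
```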

10.1.1 Proof of Consistency of \(\widehat{{\varvec{\theta}}}\)

Expanding \(Q({{\varvec{\theta}}})\) and using \(y^2(t) = z(t) = z_R(t) + i z_I(t)\), we obtain

$$\begin{aligned} \frac{1}{n} \bigl[ Q({{\varvec{\theta}}}) - Q({{\varvec{\theta}}}^0) \bigr] &= \Bigl[ \frac{1}{n}\sum_{t=1}^n \Bigl\lbrace z_R(t) \cos(2\theta_1 t + 2\theta_2 t^2) + z_I(t) \sin(2\theta_1 t + 2\theta_2 t^2) \Bigr\rbrace \Bigr]^2 \\ &\quad + \Bigl[ \frac{1}{n} \sum_{t=1}^n \Bigl\lbrace -z_R(t) \sin(2\theta_1 t + 2\theta_2 t^2) + z_I(t) \cos(2\theta_1 t + 2\theta_2 t^2) \Bigr\rbrace \Bigr]^2 \\ &\quad - \Bigl[ \frac{1}{n}\sum_{t=1}^n \Bigl\lbrace z_R(t) \cos(2\theta_1^0 t + 2\theta_2^0 t^2) + z_I(t) \sin(2\theta_1^0 t + 2\theta_2^0 t^2) \Bigr\rbrace \Bigr]^2 \\ &\quad - \Bigl[ \frac{1}{n}\sum_{t=1}^n \Bigl\lbrace -z_R(t) \sin(2\theta_1^0 t + 2\theta_2^0 t^2) + z_I(t) \cos(2\theta_1^0 t + 2\theta_2^0 t^2) \Bigr\rbrace \Bigr]^2 \\ &= S_1 + S_2 + S_3 + S_4 \quad \hbox{(say).} \end{aligned}$$

Using Lemma 10.5, we have

$$\begin{aligned} &\frac{1}{n}\sum_{t=1}^n z_R(t) \cos(2\theta_1^0 t + 2\theta_2^0 t^2) \mathop{\longrightarrow}\limits^{a.s.} \frac{1}{2} (\sigma_\alpha^2 + \mu_\alpha^2), \qquad \frac{1}{n}\sum_{t=1}^n z_I(t) \cos(2\theta_1^0 t + 2\theta_2^0 t^2) \mathop{\longrightarrow}\limits^{a.s.} 0, \\ &\frac{1}{n}\sum_{t=1}^n z_R(t) \sin(2\theta_1^0 t + 2\theta_2^0 t^2) \mathop{\longrightarrow}\limits^{a.s.} 0, \qquad \frac{1}{n}\sum_{t=1}^n z_I(t) \sin(2\theta_1^0 t + 2\theta_2^0 t^2) \mathop{\longrightarrow}\limits^{a.s.} \frac{1}{2} (\sigma_\alpha^2 + \mu_\alpha^2). \end{aligned}$$

Then

$$\begin{aligned} \lim _{n\rightarrow \infty } S_3 = - (\sigma _\alpha ^2 + \mu _\alpha ^2)^2\;\;\; \hbox {and} \;\;\; \lim _{n\rightarrow \infty } S_4 = 0. \end{aligned}$$

Now

$$\begin{aligned} \overline{\lim}_{n\rightarrow\infty} \sup_{S_\varepsilon} S_1 &= \overline{\lim}_{n\rightarrow\infty} \sup_{S_\varepsilon} \Bigl[ \frac{1}{n}\sum_{t=1}^n \Bigl\lbrace z_R(t) \cos(2\theta_1 t + 2\theta_2 t^2) + z_I(t) \sin(2\theta_1 t + 2\theta_2 t^2) \Bigr\rbrace \Bigr]^2 \\ &= \overline{\lim}_{n\rightarrow\infty} \sup_{S_\varepsilon} \Bigl[ \frac{1}{n}\sum_{t=1}^n \Bigl\lbrace \alpha^2(t) \cos[2(\theta_1^0 - \theta_1)t + 2(\theta_2^0 - \theta_2)t^2] \\ &\qquad + 2 e_R(t) e_I(t) \sin(2\theta_1 t + 2\theta_2 t^2) + (e_R^2(t) - e_I^2(t)) \cos(2\theta_1 t + 2\theta_2 t^2) \\ &\qquad + 2 \alpha(t) e_R(t) \cos[(2\theta_1 - \theta_1^0)t + (2\theta_2 - \theta_2^0)t^2] \\ &\qquad + 2 \alpha(t) e_I(t) \sin[(2\theta_1 - \theta_1^0)t + (2\theta_2 - \theta_2^0)t^2] \Bigr\rbrace \Bigr]^2 \\ &= \overline{\lim}_{n\rightarrow\infty} \sup_{|{{\varvec{\theta}}}^0 - {{\varvec{\theta}}}| > \varepsilon} \Bigl[ \frac{1}{n}\sum_{t=1}^n \Bigl\lbrace \bigl(\alpha^2(t) - (\sigma_\alpha^2 + \mu_\alpha^2)\bigr) \cos[2(\theta_1^0 - \theta_1)t + 2(\theta_2^0 - \theta_2)t^2] \\ &\qquad + (\sigma_\alpha^2 + \mu_\alpha^2) \cos[2(\theta_1^0 - \theta_1)t + 2(\theta_2^0 - \theta_2)t^2] \Bigr\rbrace \Bigr]^2 \\ &\qquad (\hbox{the noise terms vanish uniformly in } {{\varvec{\theta}}}, \hbox{ a.s., by (10.21) and Lemma 10.3}) \\ &= 0 \quad \hbox{a.s.,} \end{aligned}$$

using Lemmas 10.1 and 10.3. Similarly, we can show that \(\overline{\lim}_{n\rightarrow\infty} \sup_{S_\varepsilon} S_2 = 0\) a.s. Therefore,

$$ \overline{\lim}_{n\rightarrow\infty} \sup_{S_\varepsilon} \frac{1}{n} \bigl[ Q({{\varvec{\theta}}}) - Q({{\varvec{\theta}}}^0) \bigr] = \overline{\lim}_{n\rightarrow\infty} \sup_{S_\varepsilon} \Bigl[ \sum_{i=1}^4 S_i \Bigr] = -(\sigma_\alpha^2 + \mu_\alpha^2)^2 < 0 \quad \hbox{a.s.,} $$

and, using Lemma 10.2, \(\widehat{\theta}_1\) and \(\widehat{\theta}_2\), which maximize \(Q({{\varvec{\theta}}})\), are strongly consistent estimators of \(\theta_1^0\) and \(\theta_2^0\). \(\blacksquare\)
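Computationally, the expansion above shows that, up to a term not depending on \({{\varvec{\theta}}}\), maximizing \(Q({{\varvec{\theta}}})\) amounts to maximizing the periodogram-type criterion \(\bigl| n^{-1}\sum_{t=1}^n y^2(t)\, e^{-i2(\theta_1 t + \theta_2 t^2)} \bigr|^2\). A Python grid-search sketch follows; since the peak has width of order \(n^{-1}\) in \(\theta_1\) and \(n^{-2}\) in \(\theta_2\), the sketch searches only a small window around the true value, standing in for a preliminary estimate that a full implementation would obtain from a fine global grid.

```python
import numpy as np

# Grid-search sketch: maximize |n^{-1} sum z(t) exp(-2i(th1 t + th2 t^2))|^2,
# where z(t) = y(t)^2, over a local grid in (theta1, theta2).
rng = np.random.default_rng(3)
n = 500
th1_0, th2_0 = 1.2, 0.3
t = np.arange(1, n + 1, dtype=float)
alpha = 2.0 + 0.5 * rng.standard_normal(n)
e = np.sqrt(0.5) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
z = (alpha * np.exp(1j * (th1_0 * t + th2_0 * t**2)) + e)**2

g1 = np.linspace(th1_0 - 0.02, th1_0 + 0.02, 401)   # spacing 1e-4, finer than 1/n
g2 = np.linspace(th2_0 - 1e-4, th2_0 + 1e-4, 401)   # spacing 5e-7, finer than 1/n^2
E2 = np.exp(-2j * np.outer(g2, t * t))              # (len(g2), n)

best, est = -1.0, None
for a in g1:
    vals = np.abs(E2 @ (z * np.exp(-2j * a * t)) / n)**2   # criterion over g2
    j = np.argmax(vals)
    if vals[j] > best:
        best, est = vals[j], (a, g2[j])
print(est)   # close to (1.2, 0.3); errors are O(n^{-3/2}) and O(n^{-5/2})
```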

Appendix B

In this appendix, we derive the asymptotic distribution of the estimators, discussed in Sect. 10.2, of the unknown parameters of model (10.6). The first-order derivatives of \(Q({{\varvec{\theta}}})\) with respect to \(\theta_k\), \(k=1,2\), are as follows:

$$\begin{aligned} \frac{\partial Q({{\varvec{\theta}}})}{\partial \theta_k} &= \frac{2}{n} \left\{ \sum_{t=1}^n z_R(t) \cos(2\theta_1 t + 2\theta_2 t^2) + \sum_{t=1}^n z_I(t) \sin(2\theta_1 t + 2\theta_2 t^2) \right\} \times \\ &\qquad \left\{ \sum_{t=1}^n z_I(t)\, 2t^k \cos(2\theta_1 t + 2\theta_2 t^2) - \sum_{t=1}^n z_R(t)\, 2t^k \sin(2\theta_1 t + 2\theta_2 t^2) \right\} \\ &\quad + \frac{2}{n} \left\{ \sum_{t=1}^n z_I(t) \cos(2\theta_1 t + 2\theta_2 t^2) - \sum_{t=1}^n z_R(t) \sin(2\theta_1 t + 2\theta_2 t^2) \right\} \times \\ &\qquad \left\{ -\sum_{t=1}^n z_I(t)\, 2t^k \sin(2\theta_1 t + 2\theta_2 t^2) - \sum_{t=1}^n z_R(t)\, 2t^k \cos(2\theta_1 t + 2\theta_2 t^2) \right\} \\ &= \frac{2}{n} f_1({{\varvec{\theta}}})\, g_1(k;{{\varvec{\theta}}}) + \frac{2}{n} f_2({{\varvec{\theta}}})\, g_2(k;{{\varvec{\theta}}}), \quad \hbox{(say)} \end{aligned}$$
(10.22)

where

$$\begin{aligned} f_1({{\varvec{\theta}}}) &= \sum_{t=1}^n z_R(t) \cos(2\theta_1 t + 2\theta_2 t^2) + \sum_{t=1}^n z_I(t) \sin(2\theta_1 t + 2\theta_2 t^2), \\ g_1(k;{{\varvec{\theta}}}) &= \sum_{t=1}^n z_I(t)\, 2t^k \cos(2\theta_1 t + 2\theta_2 t^2) - \sum_{t=1}^n z_R(t)\, 2t^k \sin(2\theta_1 t + 2\theta_2 t^2), \\ f_2({{\varvec{\theta}}}) &= \sum_{t=1}^n z_I(t) \cos(2\theta_1 t + 2\theta_2 t^2) - \sum_{t=1}^n z_R(t) \sin(2\theta_1 t + 2\theta_2 t^2), \\ g_2(k;{{\varvec{\theta}}}) &= -\sum_{t=1}^n z_I(t)\, 2t^k \sin(2\theta_1 t + 2\theta_2 t^2) - \sum_{t=1}^n z_R(t)\, 2t^k \cos(2\theta_1 t + 2\theta_2 t^2). \end{aligned}$$

Observe that using Lemma 10.5, it immediately follows that

$$\begin{aligned} (a) \; \lim _{n\rightarrow \infty } \frac{1}{n} f_1({{\varvec{\theta }}}^0) = (\sigma _\alpha ^2 + \mu _\alpha ^2)\;\;\;\hbox {and}\;\;\; (b)\; \lim _{n\rightarrow \infty } \frac{1}{n} f_2({{\varvec{\theta }}}^0) = 0 \;\;\hbox {a.s.} \end{aligned}$$
(10.23)

Therefore, for large n, \(\displaystyle \frac{\partial Q({{\varvec{\theta}}})}{\partial \theta_k} \approx \frac{2}{n} f_1({{\varvec{\theta}}})\, g_1(k;{{\varvec{\theta}}})\), ignoring the second term in (10.22), which involves \(f_2({{\varvec{\theta}}})\). The second-order derivatives with respect to \(\theta_k\), \(k=1,2\), are

$$\begin{aligned} \frac{\partial^2 Q({{\varvec{\theta}}})}{\partial \theta_k^2} &= \frac{2}{n} \left\{ -\sum_{t=1}^n z_R(t)\, 2t^k \sin(2\theta_1 t + 2\theta_2 t^2) + \sum_{t=1}^n z_I(t)\, 2t^k \cos(2\theta_1 t + 2\theta_2 t^2) \right\}^2 \\ &\quad + \frac{2}{n} \left\{ \sum_{t=1}^n z_R(t)\cos(2\theta_1 t + 2\theta_2 t^2) + \sum_{t=1}^n z_I(t)\sin(2\theta_1 t + 2\theta_2 t^2) \right\} \times \\ &\qquad \left\{ -\sum_{t=1}^n z_R(t)\, 4t^{2k} \cos(2\theta_1 t + 2\theta_2 t^2) - \sum_{t=1}^n z_I(t)\, 4t^{2k} \sin(2\theta_1 t + 2\theta_2 t^2) \right\} \\ &\quad + \frac{2}{n} \left\{ -\sum_{t=1}^n z_R(t)\, 2t^k \cos(2\theta_1 t + 2\theta_2 t^2) - \sum_{t=1}^n z_I(t)\, 2t^k \sin(2\theta_1 t + 2\theta_2 t^2) \right\}^2 \\ &\quad + \frac{2}{n} \left\{ -\sum_{t=1}^n z_R(t) \sin(2\theta_1 t + 2\theta_2 t^2) + \sum_{t=1}^n z_I(t) \cos(2\theta_1 t + 2\theta_2 t^2) \right\} \times \\ &\qquad \left\{ \sum_{t=1}^n z_R(t)\, 4t^{2k} \sin(2\theta_1 t + 2\theta_2 t^2) - \sum_{t=1}^n z_I(t)\, 4t^{2k} \cos(2\theta_1 t + 2\theta_2 t^2) \right\}, \end{aligned}$$
$$\begin{aligned} \frac{\partial^2 Q({{\varvec{\theta}}})}{\partial \theta_1 \partial \theta_2} &= \frac{2}{n} \left\{ \sum_{t=1}^n z_R(t)\cos(2\theta_1 t + 2\theta_2 t^2) + \sum_{t=1}^n z_I(t)\sin(2\theta_1 t + 2\theta_2 t^2) \right\} \times \\ &\qquad \left\{ -\sum_{t=1}^n z_R(t)\, 4t^3 \cos(2\theta_1 t + 2\theta_2 t^2) - \sum_{t=1}^n z_I(t)\, 4t^3 \sin(2\theta_1 t + 2\theta_2 t^2) \right\} \\ &\quad + \frac{2}{n} \left\{ -\sum_{t=1}^n z_R(t)\, 2t^2 \sin(2\theta_1 t + 2\theta_2 t^2) + \sum_{t=1}^n z_I(t)\, 2t^2 \cos(2\theta_1 t + 2\theta_2 t^2) \right\} \times \\ &\qquad \left\{ -\sum_{t=1}^n z_R(t)\, 2t \sin(2\theta_1 t + 2\theta_2 t^2) + \sum_{t=1}^n z_I(t)\, 2t \cos(2\theta_1 t + 2\theta_2 t^2) \right\} \\ &\quad + \frac{2}{n} \left\{ -\sum_{t=1}^n z_R(t)\sin(2\theta_1 t + 2\theta_2 t^2) + \sum_{t=1}^n z_I(t)\cos(2\theta_1 t + 2\theta_2 t^2) \right\} \times \\ &\qquad \left\{ \sum_{t=1}^n z_R(t)\, 4t^3 \sin(2\theta_1 t + 2\theta_2 t^2) - \sum_{t=1}^n z_I(t)\, 4t^3 \cos(2\theta_1 t + 2\theta_2 t^2) \right\} \\ &\quad + \frac{2}{n} \left\{ -\sum_{t=1}^n z_R(t)\, 2t^2 \cos(2\theta_1 t + 2\theta_2 t^2) - \sum_{t=1}^n z_I(t)\, 2t^2 \sin(2\theta_1 t + 2\theta_2 t^2) \right\} \times \\ &\qquad \left\{ -\sum_{t=1}^n z_R(t)\, 2t \cos(2\theta_1 t + 2\theta_2 t^2) - \sum_{t=1}^n z_I(t)\, 2t \sin(2\theta_1 t + 2\theta_2 t^2) \right\}. \end{aligned}$$

Now, using Lemma 10.5, we can show that

$$\begin{aligned} \lim_{n\rightarrow\infty} \left. \frac{1}{n^3} \frac{\partial^2 Q({{\varvec{\theta}}})}{\partial \theta_1^2} \right|_{{{\varvec{\theta}}}^0} = -\frac{2}{3} (\sigma_\alpha^2 + \mu_\alpha^2)^2, \end{aligned}$$
(10.24)
$$\begin{aligned} \lim_{n\rightarrow\infty} \left. \frac{1}{n^5} \frac{\partial^2 Q({{\varvec{\theta}}})}{\partial \theta_2^2} \right|_{{{\varvec{\theta}}}^0} = -\frac{32}{45} (\sigma_\alpha^2 + \mu_\alpha^2)^2, \end{aligned}$$
(10.25)
$$\begin{aligned} \lim_{n\rightarrow\infty} \left. \frac{1}{n^4} \frac{\partial^2 Q({{\varvec{\theta}}})}{\partial \theta_1 \partial \theta_2} \right|_{{{\varvec{\theta}}}^0} = -\frac{2}{3} (\sigma_\alpha^2 + \mu_\alpha^2)^2. \end{aligned}$$
(10.26)

Write \(\displaystyle Q'({{\varvec{\theta}}}) = \left( \frac{\partial Q({{\varvec{\theta}}})}{\partial \theta_1}, \frac{\partial Q({{\varvec{\theta}}})}{\partial \theta_2} \right)\) and let \(Q''({{\varvec{\theta}}})\) denote the \(2\times 2\) matrix of second-order derivatives of \(Q({{\varvec{\theta}}})\). Define the diagonal matrix \(\mathbf{D} = \hbox{diag}\left\{ \frac{1}{n^{3/2}}, \frac{1}{n^{5/2}} \right\}\). Expanding \(Q'(\widehat{{\varvec{\theta}}})\) in a bivariate Taylor series around \({{\varvec{\theta}}}^0\),

$$ Q'(\widehat{{\varvec{\theta }}}) - Q'({{\varvec{\theta }}}^0) = (\widehat{{\varvec{\theta }}} - {{\varvec{\theta }}}^0) Q''(\bar{{\varvec{\theta }}}), $$

where \(\bar{{\varvec{\theta}}}\) is a point on the line joining \(\widehat{{\varvec{\theta}}}\) and \({{\varvec{\theta}}}^0\). Since \(\widehat{{\varvec{\theta}}}\) maximizes \(Q({{\varvec{\theta}}})\), \(Q'(\widehat{{\varvec{\theta}}}) = 0\), and the above equation can be written as

$$\begin{aligned} -\bigl[Q'({{\varvec{\theta}}}^0) \mathbf{D}\bigr] &= (\widehat{{\varvec{\theta}}} - {{\varvec{\theta}}}^0) \mathbf{D}^{-1}\, \mathbf{D} Q''(\bar{{\varvec{\theta}}}) \mathbf{D} \\ \Rightarrow\;\; (\widehat{{\varvec{\theta}}} - {{\varvec{\theta}}}^0) \mathbf{D}^{-1} &= -\bigl[Q'({{\varvec{\theta}}}^0) \mathbf{D}\bigr]\bigl[\mathbf{D} Q''(\bar{{\varvec{\theta}}}) \mathbf{D}\bigr]^{-1}, \end{aligned}$$

provided \([\mathbf{D} Q''(\bar{{\varvec{\theta}}}) \mathbf{D}]\) is invertible a.s. Because \(\widehat{{\varvec{\theta}}} \rightarrow {{\varvec{\theta}}}^0\) a.s. and \(Q''({{\varvec{\theta}}})\) is a continuous function of \({{\varvec{\theta}}}\), the continuous mapping theorem gives

$$ \lim _{n\rightarrow \infty } [\mathbf{D}Q''(\bar{{\varvec{\theta }}}) \mathbf{D}] = \lim _{n\rightarrow \infty } [\mathbf{D}Q''({{\varvec{\theta }}}^0) \mathbf{D}] = -{{\varvec{\varSigma }}}, $$

where \({{\varvec{\varSigma}}}\) can be obtained from (10.24)–(10.26) as \(\displaystyle {{\varvec{\varSigma}}} = \frac{2(\sigma_\alpha^2 + \mu_\alpha^2)^2}{3} \begin{pmatrix} 1 & 1 \\ 1 & \frac{16}{15} \end{pmatrix}\). Using (10.23), the elements of \(Q'({{\varvec{\theta}}}^0) \mathbf{D}\) are

$$\begin{aligned} \frac{1}{n^{\frac{3}{2}}} \frac{\partial Q({{\varvec{\theta}}}^0)}{\partial \theta_1} = \frac{2}{n} f_1({{\varvec{\theta}}}^0)\, \frac{1}{n^{\frac{3}{2}}} g_1(1;{{\varvec{\theta}}}^0) \quad \hbox{and} \quad \frac{1}{n^{\frac{5}{2}}} \frac{\partial Q({{\varvec{\theta}}}^0)}{\partial \theta_2} = \frac{2}{n} f_1({{\varvec{\theta}}}^0)\, \frac{1}{n^{\frac{5}{2}}} g_1(2;{{\varvec{\theta}}}^0), \end{aligned}$$

for large n. Therefore, to find the asymptotic distribution of \(Q'({{\varvec{\theta}}}^0) \mathbf{D}\), we need to study the large sample distributions of \(\frac{1}{n^{3/2}} g_1(1;{{\varvec{\theta}}}^0)\) and \(\frac{1}{n^{5/2}} g_1(2;{{\varvec{\theta}}}^0)\). Substituting \(z_R(t)\) and \(z_I(t)\) into \(g_1(k;{{\varvec{\theta}}}^0)\), \(k=1,2\), we have

$$\begin{aligned} \frac{1}{n^{\frac{2k+1}{2}}} g_1(k;{{\varvec{\theta}}}^0) &= \frac{2}{n^{\frac{2k+1}{2}}} \sum_{t=1}^n t^k z_I(t) \cos(2\theta_1^0 t + 2\theta_2^0 t^2) - \frac{2}{n^{\frac{2k+1}{2}}} \sum_{t=1}^n t^k z_R(t) \sin(2\theta_1^0 t + 2\theta_2^0 t^2) \\ &= \frac{4}{n^{\frac{2k+1}{2}}} \sum_{t=1}^n t^k e_R(t) e_I(t) \cos(2\theta_1^0 t + 2\theta_2^0 t^2) + \frac{4}{n^{\frac{2k+1}{2}}} \sum_{t=1}^n t^k \alpha(t) e_I(t) \cos(\theta_1^0 t + \theta_2^0 t^2) \\ &\quad - \frac{4}{n^{\frac{2k+1}{2}}} \sum_{t=1}^n t^k \alpha(t) e_R(t) \sin(\theta_1^0 t + \theta_2^0 t^2) - \frac{2}{n^{\frac{2k+1}{2}}} \sum_{t=1}^n t^k \bigl(e_R^2(t) - e_I^2(t)\bigr) \sin(2\theta_1^0 t + 2\theta_2^0 t^2). \end{aligned}$$

The random variables \(e_R(t) e_I(t)\), \(\alpha(t) e_R(t)\), \(\alpha(t) e_I(t)\), and \(e_R^2(t) - e_I^2(t)\) all have mean zero and finite variance. Therefore, \(E[\frac{1}{n^{3/2}} g_1(1;{{\varvec{\theta}}}^0)] = 0\) and \(E[\frac{1}{n^{5/2}} g_1(2;{{\varvec{\theta}}}^0)] = 0\), and all the terms above satisfy the Lindeberg–Feller condition. So \(\frac{1}{n^{3/2}} g_1(1;{{\varvec{\theta}}}^0)\) and \(\frac{1}{n^{5/2}} g_1(2;{{\varvec{\theta}}}^0)\) converge in distribution to normal laws with mean zero and finite variances. To find the large sample covariance matrix of \(Q'({{\varvec{\theta}}}^0) \mathbf{D}\), we first find the variances of \(\frac{1}{n^{3/2}} g_1(1;{{\varvec{\theta}}}^0)\) and \(\frac{1}{n^{5/2}} g_1(2;{{\varvec{\theta}}}^0)\) and their covariance.

$$\begin{aligned} \hbox{Var}\Bigl[ \frac{1}{n^{\frac{3}{2}}} g_1(1;{{\varvec{\theta}}}^0) \Bigr] &= \hbox{Var}\Bigl[ \frac{4}{n^{\frac{3}{2}}} \sum_{t=1}^n t\, e_R(t) e_I(t) \cos(2\theta_1^0 t + 2\theta_2^0 t^2) + \frac{4}{n^{\frac{3}{2}}} \sum_{t=1}^n t\, \alpha(t) e_I(t) \cos(\theta_1^0 t + \theta_2^0 t^2) \\ &\qquad - \frac{4}{n^{\frac{3}{2}}} \sum_{t=1}^n t\, \alpha(t) e_R(t) \sin(\theta_1^0 t + \theta_2^0 t^2) - \frac{2}{n^{\frac{3}{2}}} \sum_{t=1}^n t\, (e_R^2(t) - e_I^2(t)) \sin(2\theta_1^0 t + 2\theta_2^0 t^2) \Bigr] \\ &= E\Bigl[ \frac{16}{n^3} \sum_{t=1}^n t^2 e_R^2(t) e_I^2(t) \cos^2(2\theta_1^0 t + 2\theta_2^0 t^2) + \frac{16}{n^3} \sum_{t=1}^n t^2 \alpha^2(t) e_I^2(t) \cos^2(\theta_1^0 t + \theta_2^0 t^2) \\ &\qquad + \frac{16}{n^3} \sum_{t=1}^n t^2 \alpha^2(t) e_R^2(t) \sin^2(\theta_1^0 t + \theta_2^0 t^2) + \frac{4}{n^3} \sum_{t=1}^n t^2 (e_R^2(t) - e_I^2(t))^2 \sin^2(2\theta_1^0 t + 2\theta_2^0 t^2) \Bigr] \\ &\qquad (\hbox{the cross-product terms vanish due to Lemma 10.1 and the independence of } \alpha(t),\, e_R(t) \hbox{ and } e_I(t)) \\ &\longrightarrow 16 \cdot \frac{\sigma^2}{2} \cdot \frac{\sigma^2}{2} \cdot \frac{1}{6} + 16 \cdot \frac{\sigma^2}{2} \cdot (\sigma_\alpha^2 + \mu_\alpha^2) \cdot \frac{1}{6} + 16 \cdot \frac{\sigma^2}{2} \cdot (\sigma_\alpha^2 + \mu_\alpha^2) \cdot \frac{1}{6} + 4 \cdot \Bigl(2\gamma - \frac{\sigma^4}{2}\Bigr) \cdot \frac{1}{6} \\ &= \frac{8}{3}\Bigl[ (\sigma_\alpha^2 + \mu_\alpha^2)\sigma^2 + \frac{1}{2}\gamma + \frac{1}{8}\sigma^4 \Bigr]. \end{aligned}$$

Similarly, we can show that, as \(n\rightarrow\infty\),

$$\begin{aligned} \hbox{Var}\Bigl[ \frac{1}{n^{\frac{5}{2}}} g_1(2;{{\varvec{\theta}}}^0) \Bigr] &\longrightarrow \frac{8}{5}\Bigl[ (\sigma_\alpha^2 + \mu_\alpha^2)\sigma^2 + \frac{1}{2}\gamma + \frac{1}{8}\sigma^4 \Bigr], \\ \hbox{Cov}\Bigl[ \frac{1}{n^{\frac{3}{2}}} g_1(1;{{\varvec{\theta}}}^0),\, \frac{1}{n^{\frac{5}{2}}} g_1(2;{{\varvec{\theta}}}^0) \Bigr] &\longrightarrow 2\Bigl[ (\sigma_\alpha^2 + \mu_\alpha^2)\sigma^2 + \frac{1}{2}\gamma + \frac{1}{8}\sigma^4 \Bigr]. \end{aligned}$$
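These limiting variances can be checked by simulation; for instance, a Monte Carlo sketch for \(\hbox{Var}[n^{-3/2} g_1(1;{{\varvec{\theta}}}^0)]\), with illustrative Gaussian amplitude and noise (so that \(\gamma = 3\sigma^4/4\)):

```python
import numpy as np

# Monte Carlo estimate of Var[n^{-3/2} g_1(1; theta^0)] versus the limit
# (8/3)[(sig_a^2 + mu_a^2) sigma^2 + gamma/2 + sigma^4/8].
rng = np.random.default_rng(4)
n, reps = 2000, 2000
th1, th2 = 1.2, 0.3
mu_a, sig_a, sigma = 2.0, 0.5, 1.0
gamma = 3 * (sigma**2 / 2)**2            # fourth moment of Gaussian e_R
t = np.arange(1, n + 1, dtype=float)
phi = th1 * t + th2 * t**2

vals = np.empty(reps)
for r in range(reps):
    alpha = mu_a + sig_a * rng.standard_normal(n)
    e = np.sqrt(sigma**2 / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    z = (alpha * np.exp(1j * phi) + e)**2
    # g_1(1; theta^0) from its definition above
    g1 = 2 * np.sum(t * (z.imag * np.cos(2 * phi) - z.real * np.sin(2 * phi)))
    vals[r] = g1 / n**1.5

c = sig_a**2 + mu_a**2
print(np.var(vals),                                          # ~ 12.7
      (8 / 3) * (c * sigma**2 + gamma / 2 + sigma**4 / 8))   # ~ 12.7
```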

Now, note that \(Q'({{\varvec{\theta }}}^0) \mathbf{D}\) can be written as

$$ Q'({{\varvec{\theta}}}^0) \mathbf{D} = \frac{2}{n} f_1({{\varvec{\theta}}}^0) \Bigl[ \frac{1}{n^{\frac{3}{2}}} g_1(1;{{\varvec{\theta}}}^0),\; \frac{1}{n^{\frac{5}{2}}} g_1(2;{{\varvec{\theta}}}^0) \Bigr]. $$

Then, as \(n\rightarrow \infty \), \(\frac{2}{n} f_1({{\varvec{\theta }}}^0) {\mathop {\longrightarrow }\limits ^{a.s.}} 2 (\sigma _\alpha ^2 + \mu _\alpha ^2)\) and

$$ \Bigl[ \frac{1}{n^{\frac{3}{2}}} g_1(1;{{\varvec{\theta}}}^0),\; \frac{1}{n^{\frac{5}{2}}} g_1(2;{{\varvec{\theta}}}^0) \Bigr] \mathop{\longrightarrow}\limits^{d} \mathcal{N}_2(\mathbf{0}, {{\varvec{\varGamma}}}), $$

where, collecting the variance and covariance limits obtained above,

$$ {{\varvec{\varGamma}}} = \Bigl[ (\sigma_\alpha^2 + \mu_\alpha^2)\sigma^2 + \frac{1}{2}\gamma + \frac{1}{8}\sigma^4 \Bigr] \begin{pmatrix} \frac{8}{3} & 2 \\ 2 & \frac{8}{5} \end{pmatrix}. $$

Therefore, using Slutsky's theorem, as \(n\rightarrow\infty\),

$$ Q'({{\varvec{\theta }}}^0) \mathbf{D} {\mathop {\longrightarrow }\limits ^{d}} \mathcal {N}_2(\mathbf{0}, 4(\sigma _\alpha ^2 + \mu _\alpha ^2)^2{{\varvec{\varGamma }}}), $$

and hence

$$ (\widehat{{\varvec{\theta }}} - {{\varvec{\theta }}}^0) \mathbf{D}^{-1} {\mathop {\longrightarrow }\limits ^{d}} \mathcal {N}_2(\mathbf{0}, 4(\sigma _\alpha ^2 + \mu _\alpha ^2)^2{{\varvec{\varSigma }}}^{-1}{{\varvec{\varGamma }}}{{\varvec{\varSigma }}}^{-1}). $$

This is the asymptotic distribution stated in Sect. 10.2. \(\blacksquare \)
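As a numerical illustration, the asymptotic covariance matrix can be assembled directly from \({{\varvec{\varSigma}}}\) and \({{\varvec{\varGamma}}}\); the value of \(\gamma\) below again assumes Gaussian noise, which is an illustrative choice rather than a requirement:

```python
import numpy as np

# Assemble 4 (sig_a^2 + mu_a^2)^2 Sigma^{-1} Gamma Sigma^{-1} numerically.
mu_a, sig_a, sigma = 2.0, 0.5, 1.0
c = sig_a**2 + mu_a**2
gamma = 3 * (sigma**2 / 2)**2                    # fourth moment of Gaussian e_R
kappa = c * sigma**2 + gamma / 2 + sigma**4 / 8

Sigma = (2 * c**2 / 3) * np.array([[1.0, 1.0], [1.0, 16.0 / 15.0]])
Gamma = kappa * np.array([[8.0 / 3.0, 2.0], [2.0, 8.0 / 5.0]])
Sinv = np.linalg.inv(Sigma)
cov = 4 * c**2 * Sinv @ Gamma @ Sinv
print(cov)
# The scaling D^{-1} = diag(n^{3/2}, n^{5/2}) means theta1-hat has error of
# order n^{-3/2} and theta2-hat of order n^{-5/2}.
```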

