
Fundamental Frequency Model and Its Generalization

Chapter in: Statistical Signal Processing

Abstract

In this chapter, we discuss the fundamental frequency model (FFM) and the generalized fundamental frequency model (GFFM). Both models are special cases of the sinusoidal frequency model, and many real-life phenomena can be analyzed using these special cases. Several algorithms are available for estimating the unknown parameters of the multiple sinusoidal model, but their computational loads are usually quite high. The FFM and the GFFM are therefore very convenient alternatives when the inherent frequencies are harmonics of a fundamental frequency. We discuss developments of these models from both the classical and the Bayesian points of view.
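To fix ideas, the FFM can be illustrated with a short simulation. The sketch below is only an illustration, not the chapter's estimation procedure: the `harmonic_power` helper and the grid search are our own constructions, assuming the model form \(y(t) = \sum_{j=1}^{p} \rho_j \cos(j \lambda t - \phi_j) + X(t)\) in which every frequency is a harmonic of the fundamental frequency \(\lambda\).

```python
import numpy as np

# Simulate n observations from an FFM with p harmonic components
# (illustrative parameter values; rho_j are amplitudes, phi_j phases).
rng = np.random.default_rng(0)

n, p = 500, 3                      # sample size and number of harmonics
lam0 = 0.3                         # true fundamental frequency (radians)
rho = np.array([2.0, 1.5, 1.0])
phi = np.array([0.5, 1.0, 1.5])
t = np.arange(1, n + 1)

signal = sum(rho[j] * np.cos((j + 1) * lam0 * t - phi[j]) for j in range(p))
y = signal + rng.normal(scale=0.5, size=n)   # additive noise X(t)

# Crude estimate of lam0: maximize the sum of periodogram ordinates at
# the p harmonics over a grid of candidate fundamental frequencies.
def harmonic_power(lam):
    return sum(abs(np.exp(1j * j * lam * t) @ y) ** 2 / n
               for j in range(1, p + 1))

grid = np.linspace(0.05, np.pi / p, 2000)
lam_hat = grid[np.argmax([harmonic_power(lam) for lam in grid])]
print(f"true lam0 = {lam0}, estimated = {lam_hat:.4f}")
```

Because all harmonics contribute to the objective at the true fundamental frequency, the peak there dominates the spurious peaks at its submultiples.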


References

  1. An, H.-Z., Chen, Z.-G., & Hannan, E. J. (1983). The maximum of the periodogram. Journal of Multivariate Analysis, 13, 383–400.

  2. Baldwin, A. J., & Thomson, P. J. (1978). Periodogram analysis of S. Carinae. Royal Astronomical Society of New Zealand (Variable Star Section), 6, 31–38.

  3. Bloomfield, P. (1976). Fourier analysis of time series: An introduction. New York: Wiley.

  4. Bretthorst, G. L. (1988). Bayesian spectrum analysis and parameter estimation. Berlin: Springer.

  5. Brown, E. N., & Czeisler, C. A. (1992). The statistical analysis of circadian phase and amplitude in constant-routine core-temperature data. Journal of Biological Rhythms, 7, 177–202.

  6. Brown, E. N., & Liuthardt, H. (1999). Statistical model building and model criticism for human circadian data. Journal of Biological Rhythms, 14, 609–616.

  7. Greenhouse, J. B., Kass, R. E., & Tsay, R. S. (1987). Fitting nonlinear models with ARMA errors to biological rhythm data. Statistics in Medicine, 6, 167–183.

  8. Hannan, E. J. (1971). Non-linear time series regression. Journal of Applied Probability, 8, 767–780.

  9. Hannan, E. J. (1973). The estimation of frequency. Journal of Applied Probability, 10, 510–519.

  10. Irizarry, R. A. (2000). Asymptotic distribution of estimates for a time-varying parameter in a harmonic model with multiple fundamentals. Statistica Sinica, 10, 1041–1067.

  11. Kundu, D. (1997). Asymptotic properties of the least squares estimators of sinusoidal signals. Statistics, 30, 221–238.

  12. Kundu, D., & Mitra, A. (2001). Estimating the number of signals of the damped exponential models. Computational Statistics & Data Analysis, 36, 245–256.

  13. Kundu, D., & Nandi, S. (2005). Estimating the number of components of the fundamental frequency model. Journal of the Japan Statistical Society, 35(1), 41–59.

  14. Nandi, S. (2002). Analyzing some non-stationary signal processing models. Ph.D. thesis, Indian Institute of Technology Kanpur.

  15. Nandi, S., & Kundu, D. (2003). Estimating the fundamental frequency of a periodic function. Statistical Methods and Applications, 12, 341–360.

  16. Nandi, S., & Kundu, D. (2006). Analyzing non-stationary signals using a cluster type model. Journal of Statistical Planning and Inference, 136, 3871–3903.

  17. Nielsen, J. K., Christensen, M. G., & Jensen, S. H. (2013). Default Bayesian estimation of the fundamental frequency. IEEE Transactions on Audio, Speech, and Language Processing, 21(3), 598–610.

  18. Quinn, B. G., & Thomson, P. J. (1991). Estimating the frequency of a periodic function. Biometrika, 78, 65–74.

  19. Richards, F. S. G. (1961). A method of maximum likelihood estimation. Journal of the Royal Statistical Society, Series B, 469–475.

  20. Zellner, A. (1986). On assessing prior distributions and Bayesian regression analysis with g-prior distributions. In P. K. Goel & A. Zellner (Eds.), Bayesian inference and decision techniques. The Netherlands: Elsevier.


Author information

Correspondence to Swagata Nandi.

Appendix A

In order to prove Theorem 6.5, we need the following lemmas.

Lemma 6.1

(An, Chen, and Hannan [1]) Define,

$$ I_X(\lambda ) = \left| \frac{1}{n} \sum _{t=1}^n X(t) e^{i t \lambda } \right| ^2. $$

If \(\{X(t)\}\) satisfies Assumption 6.1, then

$$\begin{aligned} \limsup _{n \rightarrow \infty } \max _{\lambda } \frac{n I_X(\lambda )}{\sigma ^2 \log n} \le 1 \quad \text {a.s.} \end{aligned}$$
(6.31)

Lemma 6.2

(Kundu [11]) If \(\{X(t)\}\) satisfies Assumption 6.1, then

$$ \lim _{n \rightarrow \infty } \sup _{\lambda } \frac{1}{n} \left| \sum _{t=1}^n X(t) e^{i \lambda t} \right| = 0 \quad \text {a.s.} $$
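Both lemmas can be checked numerically for a simple special case of Assumption 6.1, i.i.d. Gaussian noise. In the sketch below (illustrative only), the supremum over \(\lambda\) is approximated by a maximum over a fine zero-padded FFT grid; the FFT uses \(e^{-it\lambda}\) rather than \(e^{it\lambda}\), which leaves all the magnitudes involved unchanged.

```python
import numpy as np

# Numerical illustration of Lemmas 6.1 and 6.2 with i.i.d. Gaussian X(t).
rng = np.random.default_rng(1)
sigma2 = 1.0

ratios, sup_abs = [], []
for n in (200, 2000, 20000):
    X = rng.normal(scale=np.sqrt(sigma2), size=n)
    # Zero-padded FFT evaluates (1/n) sum_t X(t) e^{-i t lambda_k}
    # on a grid of 8n frequencies lambda_k in [0, 2*pi).
    S = np.fft.fft(X, 8 * n) / n
    I_X = np.abs(S) ** 2                                   # periodogram
    ratios.append(np.max(n * I_X / (sigma2 * np.log(n))))  # Lemma 6.1
    sup_abs.append(np.max(np.abs(S)))                      # Lemma 6.2
print(ratios)   # stays bounded (limsup <= 1) as n grows
print(sup_abs)  # shrinks towards 0 as n grows
```

The first quantity hovers near 1 for large \(n\), matching the bound in Lemma 6.1, while the second decays roughly like \(\sqrt{\log n / n}\), consistent with Lemma 6.2.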

6.1.1 Proof of Theorem 6.5

Observe that we need to show

$$ IC(0)> IC(1)> \cdots> IC(p^0-1) > IC(p^0) < IC(p^0+1). $$

Consider two cases separately.

Case I: \(L < p^0\)

$$\begin{aligned} \lim _{n \rightarrow \infty } R(L)&= \lim _{n \rightarrow \infty } \frac{1}{n} \sum _{t=1}^n \left( y(t) - \sum _{j=1}^L \widehat{\rho }_j \cos (j \widehat{\lambda } t - \widehat{\phi }_j) \right) ^2 \\&= \lim _{n \rightarrow \infty } \left[ \frac{1}{n} \mathbf{Y}^T \mathbf{Y} - 2 \sum _{j=1}^L \left| \frac{1}{n} \sum _{t=1}^n y(t) e^{i j t \widehat{\lambda }} \right| ^2 \right] = \sigma ^2 + \sum _{j=L+1}^{p^0} \rho _j^{0^2} \quad \text {a.s.} \end{aligned}$$

Therefore, for \(0 \le L < p^0-1\),

$$\begin{aligned}&\lim _{n \rightarrow \infty } \frac{1}{n} \left[ IC(L) - IC(L+1) \right] \\&\quad = \lim _{n \rightarrow \infty } \left[ \log \left( \sigma ^2 + \sum _{j=L+1}^{p^0} \rho _j^{0^2} \right) - \log \left( \sigma ^2 + \sum _{j=L+2}^{p^0} \rho _j^{0^2} \right) - \frac{2 C_n}{n} \right] \quad \text {a.s.} \end{aligned}$$
(6.32)

and for \(L = p^0 -1\),

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{1}{n} \left[ IC(p^0-1) - IC(p^0) \right] = \lim _{n \rightarrow \infty } \left[ \log \left( \sigma ^2 + \rho _{p^0}^{0^2} \right) - \log \sigma ^2 - \frac{2 C_n}{n} \right] \quad \text {a.s.} \end{aligned}$$
(6.33)

Since \(C_n/n \rightarrow 0\), it follows that for \(0 \le L \le p^0-1\),

$$ \lim _{n \rightarrow \infty } \frac{1}{n} \left[ IC(L) - IC(L+1) \right] > 0. $$

This implies that, for large \(n\), \(IC(L) > IC(L+1)\) whenever \(0 \le L \le p^0-1\).

Case II: \(L = p^0+1\).

Now consider

$$\begin{aligned} R(p^0 + 1) = \frac{1}{n} \mathbf{Y}^T \mathbf{Y} - 2 \sum _{j=1}^{p^0} \left| \frac{1}{n} \sum _{t=1}^n y(t) e^{i \widehat{\lambda } j t} \right| ^2 - 2 \left| \frac{1}{n} \sum _{t=1}^n y(t) e^{i \widehat{\lambda } (p^0+1) t} \right| ^2. \end{aligned}$$
(6.34)

Note that \(\widehat{\lambda } \rightarrow \lambda ^0\) a.s. as \(n \rightarrow \infty \) (Nandi [14]). Therefore, for large \(n\),

$$\begin{aligned} IC(p^0+1) - IC(p^0)&= n \left( \log R(p^0+1) - \log R(p^0) \right) + 2 C_n = n \left[ \log \frac{R(p^0+1)}{R(p^0)} \right] + 2 C_n \\&\approx n \left[ \log \left( 1 - \frac{2 \left| \frac{1}{n} \sum _{t=1}^n y(t) e^{i \lambda ^0 (p^0+1) t} \right| ^2}{\sigma ^2} \right) \right] + 2 C_n \quad \text {(using Lemma 6.2)} \\&\approx 2 \log n \left[ \frac{C_n}{\log n} - \frac{n \left| \frac{1}{n} \sum _{t=1}^n X(t) e^{i \lambda ^0 (p^0+1) t} \right| ^2}{\sigma ^2 \log n} \right] \\&= 2 \log n \left[ \frac{C_n}{\log n} - \frac{n I_X(\lambda ^0(p^0+1))}{\sigma ^2 \log n} \right] > 0 \quad \text {a.s.} \end{aligned}$$

The last inequality follows from the properties of \(C_n\) together with Lemma 6.1.    \(\blacksquare \)
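The order-selection mechanism just proved can be seen at work in a small simulation. The sketch below is an illustration of the mechanism rather than the chapter's exact procedure: the fundamental frequency is taken as known for simplicity, the harmonics are fitted by ordinary least squares, and \(C_n = (\log n)^2\) is one illustrative choice satisfying \(C_n/n \rightarrow 0\) and \(C_n/\log n \rightarrow \infty\).

```python
import numpy as np

# IC(L) = n log R(L) + L * C_n, where R(L) is the average residual sum of
# squares after fitting L harmonics of the (known) fundamental frequency.
rng = np.random.default_rng(2)

n, p0, lam0 = 1000, 3, 0.3          # true number of components p0 = 3
rho = np.array([2.0, 1.5, 1.0])
phi = np.array([0.5, 1.0, 1.5])
t = np.arange(1, n + 1)
y = sum(rho[j] * np.cos((j + 1) * lam0 * t - phi[j]) for j in range(p0))
y = y + rng.normal(scale=1.0, size=n)

C_n = np.log(n) ** 2                # satisfies C_n/n -> 0, C_n/log n -> inf

def R(L):
    """Average residual sum of squares after regressing y on the first
    L cosine/sine harmonic pairs at the fundamental frequency lam0."""
    if L == 0:
        return np.mean(y ** 2)
    cols = [f(j * lam0 * t) for j in range(1, L + 1) for f in (np.cos, np.sin)]
    Z = np.column_stack(cols)
    resid = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.mean(resid ** 2)

IC = [n * np.log(R(L)) + L * C_n for L in range(p0 + 3)]
L_hat = int(np.argmin(IC))
print("estimated number of components:", L_hat)
```

Underfitting (\(L < p^0\)) leaves a term of order \(n\) in \(IC(L) - IC(L+1)\), while overfitting (\(L = p^0+1\)) only reduces the residual variance by an amount of order \(\log n\), which the penalty \(C_n\) dominates; the minimizer of \(IC\) therefore lands at \(p^0\).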


Copyright information

© 2020 Springer Nature Singapore Pte Ltd.

About this chapter


Cite this chapter

Nandi, S., Kundu, D. (2020). Fundamental Frequency Model and Its Generalization. In: Statistical Signal Processing. Springer, Singapore. https://doi.org/10.1007/978-981-15-6280-8_6
