Spectral and Temporal Envelope Cues for Human and Automatic Speech Recognition in Noise

  • Guangxin Hu
  • Sarah C. Determan
  • Yue Dong
  • Alec T. Beeve
  • Joshua E. Collins
  • Yan Gai (corresponding author)
Research Article


Abstract

Acoustic features of speech include a variety of spectral and temporal cues. The temporal envelope is known to play a critical role in speech recognition by human listeners, whereas automatic speech recognition (ASR) relies heavily on spectral analysis. This study compared sentence-recognition scores of human listeners and a commercial ASR program (Dragon) when spectral and temporal-envelope cues were manipulated in background noise. The temporal fine structure of meaningful sentences was reduced with noise or tone vocoders. Three types of background noise were introduced: white noise, time-reversed multi-talker noise, and fake-formant noise. Spectral information was manipulated by varying the number of frequency channels. At a 20-dB signal-to-noise ratio (SNR) with four vocoding channels, white noise disrupted human listeners more strongly than fake-formant noise did; the same pattern emerged with 22 channels when the SNR was lowered to 0 dB. In contrast, the ASR failed entirely with four vocoding channels even at a 20-dB SNR, and its performance was least affected by white noise and most affected by fake-formant noise. Increasing the number of channels, which improved spectral resolution, produced non-monotonic ASR performance with white noise but not with the colored noises. The ASR also performed markedly better with tone vocoders. Fake-formant noise may have degraded the software's performance by disrupting spectral cues, whereas white noise may have done so by compromising speech segmentation. Overall, these results suggest that human listeners and ASR employ different listening strategies in noise.
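The vocoding manipulation described above, splitting speech into frequency channels, discarding the temporal fine structure, and retaining only each channel's temporal envelope, can be sketched as follows. This is a minimal illustrative noise vocoder, not the authors' exact processing chain; the filter order, channel spacing, and cutoff frequencies are assumptions chosen for clarity.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, n_channels=4, f_lo=100.0, f_hi=8000.0):
    """Noise-vocode a speech signal: keep per-channel temporal envelopes,
    replace the temporal fine structure with noise carriers.

    Channel count, band edges, and filter order are illustrative
    assumptions, not the parameters used in the study.
    """
    # Logarithmically spaced band edges between f_lo and f_hi
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    rng = np.random.default_rng(0)
    out = np.zeros(len(signal), dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)         # analysis band
        env = np.abs(hilbert(band))             # temporal envelope
        noise = rng.standard_normal(len(signal))
        carrier = sosfiltfilt(sos, noise)       # band-limited noise carrier
        out += env * carrier                    # envelope-modulated noise
    # Match the overall RMS of the input
    out *= np.sqrt(np.mean(signal ** 2) / np.mean(out ** 2))
    return out
```

A tone vocoder follows the same scheme with a sinusoid at each channel's center frequency in place of the noise carrier; reducing `n_channels` coarsens the spectral resolution, which is the manipulation compared across humans and the ASR above.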


Keywords

Noise vocoding · tone vocoding · speech recognition · formants · spectral · temporal · automated speech recognition · speech segmentation



Acknowledgments

We thank L. Carney for significant input on the manuscript, L. Calandruccio for providing the sentences, and the reviewers for their insightful comments on the manuscript.

Compliance with Ethical Standards

Conflict of Interest

The authors declare that they have no conflict of interest.



Copyright information

© Association for Research in Otolaryngology 2019

Authors and Affiliations

  1. Biomedical Engineering Department, Saint Louis University, St. Louis, USA
