Algorithms to Automate Estimation of Time Codes for Captioning Digital Media
Procedures were developed to partially automate the captioning process by estimating caption time codes from plain-text transcripts and audio recordings. Signal analysis is performed on the audio to measure pause locations and durations and the zero-crossing rate (ZCR), and to obtain frequency-domain data. Algorithms were developed to match pauses in the audio to the ends of sentences in the transcript, based on the observation that pauses at sentence ends are longer than pauses within sentences. We have observed that ZCR peaks correspond to consonants in speech and that continuous wavelet transforms (CWT) work well for distinguishing between groups of consonants. These measurements will be used to develop algorithms that match selected phonemes in the audio to text in the transcript, supplementing the pause-matching results.
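The pause and ZCR measurements described above can be sketched with standard short-time analysis. The following is a minimal illustration, not the authors' implementation: frame lengths, hop sizes, and the fixed energy threshold are assumptions chosen for clarity, and a real system would likely need an adaptive silence threshold.

```python
import numpy as np

def frame_signal(x, frame_len, hop):
    """Split a 1-D signal into overlapping frames (frames x frame_len)."""
    n = 1 + (len(x) - frame_len) // hop
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n)])

def zero_crossing_rate(frames):
    """Fraction of adjacent-sample pairs in each frame that change sign."""
    signs = np.sign(frames)
    signs[signs == 0] = 1  # treat exact zeros as positive
    return np.mean(np.abs(np.diff(signs, axis=1)) / 2, axis=1)

def detect_pauses(x, sr, frame_ms=25, hop_ms=10, energy_thresh=0.01):
    """Return (start_seconds, duration_seconds) of low-energy regions.

    energy_thresh is a hypothetical fixed threshold on mean squared
    amplitude; suitable values depend on recording level and noise.
    """
    frame_len = int(sr * frame_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    frames = frame_signal(x, frame_len, hop)
    energy = np.mean(frames ** 2, axis=1)
    silent = energy < energy_thresh

    # Group consecutive silent frames into pauses.
    pauses, start = [], None
    for i, s in enumerate(silent):
        if s and start is None:
            start = i
        elif not s and start is not None:
            pauses.append((start * hop / sr, (i - start) * hop / sr))
            start = None
    if start is not None:
        pauses.append((start * hop / sr, (len(silent) - start) * hop / sr))
    return pauses
```

Sentence-end matching would then rank the detected pauses by duration, on the premise that the longest pauses align with sentence boundaries in the transcript; the ZCR per frame could likewise be scanned for peaks as a rough consonant cue.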