A Machine Learning Approach to Tongue Motion Analysis in 2D Ultrasound Image Sequences
Analysis of tongue motion as captured in dynamic ultrasound (US) images has been an important tool in speech research. Previous studies generally required semi-automatic tongue segmentation to perform data analysis. In this paper, we adopt a machine learning approach that does not require tongue segmentation. Specifically, we employ advanced normalization procedures to temporally register the US sequences using their corresponding audio files. To explicitly encode motion, we then register the image frames spatio-temporally to compute a set of deformation fields, from which we construct velocity-based and spatio-temporal gestural descriptors; the latter explicitly encode tongue dynamics during speech. Next, using the recently proposed Histogram Intersection Kernel, we perform support vector machine classification to evaluate the extracted descriptors against a set of clinical measures. We applied our method to speech abnormality detection and tongue gesture prediction. Overall, on a dataset of 24 US sequences, tongue motions produced by patients with and without speech impediments were differentiated with a classification accuracy of 94%. On another dataset of 90 US sequences, accuracies for two further classification tasks were 86% and 84%.
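The audio-based temporal registration described above can be realized with dynamic time warping (DTW), which the paper's keywords name explicitly. The following is a minimal sketch, with synthetic sine-wave "audio feature" sequences standing in for the real audio files; the function name and test signals are illustrative, not from the paper.

```python
# Hypothetical DTW sketch for temporally aligning two 1-D feature
# sequences of different lengths (stand-ins for audio features).
import numpy as np

def dtw(a, b):
    """Return the DTW cost and warping path between 1-D sequences a and b."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)  # accumulated-cost matrix
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack from (n, m) to recover the optimal warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return D[n, m], path[::-1]

# Same underlying signal sampled at two different rates.
a = np.sin(np.linspace(0, 2 * np.pi, 50))
b = np.sin(np.linspace(0, 2 * np.pi, 70))
cost, path = dtw(a, b)
```

The recovered `path` maps each frame of one sequence to its best-matching frame in the other, which is the correspondence needed to warp one US sequence onto another's timeline.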
Keywords: Support Vector Machine · Dynamic Time Warping · Tongue Motion · Registration Result · Temporal Alignment
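The final classification step — an SVM with the histogram intersection kernel over the extracted descriptors — can be sketched as below. The descriptor histograms are synthetic placeholders (the real descriptors come from the deformation fields), and the kernel is supplied to scikit-learn as a precomputed Gram matrix.

```python
# Hypothetical sketch: SVM classification with a precomputed histogram
# intersection kernel. Descriptors and labels are synthetic stand-ins.
import numpy as np
from sklearn.svm import SVC

def histogram_intersection(X, Y):
    """Gram matrix of K(x, y) = sum_i min(x_i, y_i) for rows of X and Y."""
    # X: (n, d), Y: (m, d) -> (n, m) via broadcasting
    return np.minimum(X[:, None, :], Y[None, :, :]).sum(axis=2)

rng = np.random.default_rng(0)
n, d = 40, 16
X = rng.random((n, d))
X[: n // 2, : d // 2] += 1.0         # concentrate class-0 mass in early bins
X /= X.sum(axis=1, keepdims=True)    # L1-normalize each histogram
y = np.array([0] * (n // 2) + [1] * (n // 2))

clf = SVC(kernel="precomputed", C=1.0)
clf.fit(histogram_intersection(X, X), y)
acc = clf.score(histogram_intersection(X, X), y)
```

Note that at prediction time the Gram matrix must be computed between the test descriptors and the training descriptors, i.e. `histogram_intersection(X_test, X_train)`.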