Phonetic Segmentation Using Knowledge from Visual and Perceptual Domain

  • Bhavik Vachhani
  • Chitralekha Bhat
  • Sunil Kopparapu
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10415)


Accurate and automatic phonetic segmentation is crucial for several speech-based applications such as phone-level articulation analysis and error detection, speech synthesis, annotation, speech recognition and emotion recognition. In this paper we examine the effectiveness of using visual features, obtained by processing the image spectrogram of a speech utterance, for phonetic segmentation. Further, we propose a mechanism to combine knowledge from the visual and perceptual domains for automatic phonetic segmentation, a process analogous to manual phonetic segmentation. The technique was evaluated on the TIMIT American English corpus. Experimental results show significant improvements in phonetic segmentation, especially at the lower tolerances of 5, 10 and 15 ms, with an absolute improvement of 8.29% on the TIMIT database at a 10 ms tolerance.
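The tolerance-based evaluation mentioned above (boundary agreement within 5, 10 or 15 ms) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `match_rate` and the greedy one-to-one matching of hypothesis to reference boundaries are assumptions for the example.

```python
def match_rate(ref_ms, hyp_ms, tol_ms):
    """Fraction of reference boundaries matched by some hypothesised
    boundary within +/- tol_ms; each hypothesis matches at most once.
    (Illustrative greedy matching, not the paper's exact procedure.)"""
    used = set()
    hits = 0
    for r in ref_ms:
        best_d, best_j = None, None
        for j, h in enumerate(hyp_ms):
            if j in used:
                continue
            d = abs(h - r)
            if d <= tol_ms and (best_d is None or d < best_d):
                best_d, best_j = d, j
        if best_j is not None:
            used.add(best_j)
            hits += 1
    return hits / len(ref_ms) if ref_ms else 0.0

# Toy example: reference vs. detected boundary times in milliseconds.
ref = [120, 250, 400, 530]
hyp = [118, 262, 401, 700]
print(match_rate(ref, hyp, 5))    # 0.5  (120 and 400 matched)
print(match_rate(ref, hyp, 15))   # 0.75 (250 also matched, 12 ms off)
```

Reporting this rate at several tolerances, as the paper does, shows how segmentation quality degrades as the required boundary precision tightens.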


Keywords: Unsupervised phonetic segmentation · Edge detection · Multi-taper · Visual phonetic segmentation



Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Bhavik Vachhani (1)
  • Chitralekha Bhat (1)
  • Sunil Kopparapu (1)
  1. TCS Innovation Labs, Mumbai, India
