International Journal of Speech Technology, Volume 22, Issue 4, pp 1099–1113

Acoustic-phonetic feature based Kannada dialect identification from vowel sounds

  • Nagaratna B. Chittaragi
  • Shashidhar G. Koolagudi


In this paper, a dialect identification system is proposed for the Kannada language using vowel sounds. Dialectal cues are characterized through acoustic parameters such as formant frequencies (F1–F3) and prosodic features [energy, pitch (F0), and duration]. For this purpose, a vowel dataset is collected from native Kannada speakers belonging to different dialectal regions. Global features, representing frame-level statistics such as mean, minimum, maximum, standard deviation, and variance, are extracted from the vowel sounds. Local features, capturing temporal dynamic properties at the contour level, are derived from the steady-state vowel region. Three decision-tree-based ensemble algorithms, namely random forest, extreme random forest (ERF), and extreme gradient boosting, are used for classification. The performance of global and local features is evaluated individually. Further, the significance of each feature for dialect discrimination is analyzed using single-factor ANOVA (analysis of variance) tests. Global features with the ERF ensemble model show the best average dialect identification performance, at around 76%. The contribution of each feature to dialect identification is also verified: the role of duration, energy, pitch, and the three formant features is found to be significant in Kannada dialect classification.


Keywords: Kannada dialect identification · Formant frequencies · Prosodic features · Global and local features · Ensemble algorithms
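The pipeline described in the abstract can be sketched end to end: compute the five frame-level global statistics per vowel, classify with an extreme random forest, and check per-feature significance with a single-factor ANOVA. This is a minimal illustration on synthetic F1 contours, not the authors' implementation: the dialect means and spreads below are invented stand-ins, and real contours would come from a phonetic tool such as Praat. `ExtraTreesClassifier` is scikit-learn's implementation of the extreme random forest.

```python
import numpy as np
from scipy.stats import f_oneway
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def global_stats(contour):
    # The frame-level global statistics named in the paper:
    # mean, minimum, maximum, standard deviation, variance.
    return np.array([contour.mean(), contour.min(), contour.max(),
                     contour.std(), contour.var()])

# Synthetic stand-in data: F1 contours (30 frames, in Hz) for vowels from
# three hypothetical dialect regions with slightly different F1 targets.
X, y = [], []
for dialect, (mu, sigma) in enumerate([(650, 40), (700, 45), (600, 35)]):
    for _ in range(60):
        contour = rng.normal(mu, sigma, size=30)
        X.append(global_stats(contour))
        y.append(dialect)
X, y = np.array(X), np.array(y)

# Extreme random forest (ERF) classifier, evaluated with 5-fold CV.
clf = ExtraTreesClassifier(n_estimators=200, random_state=0)
acc = cross_val_score(clf, X, y, cv=5).mean()

# Single-factor ANOVA on one feature (the mean, column 0) across dialects:
# a small p-value suggests the feature discriminates between dialects.
f_stat, p_val = f_oneway(X[y == 0, 0], X[y == 1, 0], X[y == 2, 0])
print(f"CV accuracy: {acc:.2f}, ANOVA p-value: {p_val:.3g}")
```

On this toy data the classes are well separated, so accuracy is high and the ANOVA p-value is small; with real dialect speech the same ANOVA step is what identifies which acoustic parameters actually carry dialectal information.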




Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. Department of Computer Science and Engineering, National Institute of Technology Karnataka, Surathkal, India
  2. Department of Information Science and Engineering, Siddaganga Institute of Technology, Tumkur, India
