Journal of Intelligent Information Systems, Volume 34, Issue 3, pp 275–303

Identification of a dominating instrument in polytimbral same-pitch mixes using SVM classifiers with non-linear kernel

  • Alicja A. Wieczorkowska
  • Elżbieta Kubera

Abstract

In this paper we address the problem of identifying the dominating musical instrument in a recording that contains simultaneous sounds of the same pitch. Sustained harmonic sounds from one octave of twelve instruments were considered. The training data set contains isolated sounds in two forms: sounds of the selected musical instruments alone, and the same sounds mixed with artificial harmonic and noise sounds of lower amplitude. The test data set contains mixes of musical instrument sounds. A Support Vector Machine classifier with a non-linear kernel was used in the training and testing experiments. Additionally, we performed tests on data taken from different recordings of the instruments than those used in the training procedure described above. Results of these experiments are presented and discussed.
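As a rough, self-contained sketch of the experimental setup described above, the following uses scikit-learn's SVC with an RBF (non-linear) kernel rather than the paper's actual tooling, and synthetic stand-in feature vectors instead of the paper's sound parameterization; all data and parameter values here are illustrative assumptions, not the paper's.

```python
# Hypothetical sketch: train an SVM with a non-linear (RBF) kernel on
# isolated-instrument features plus the same features with a lower-amplitude
# artificial disturbance added, then test on mixes where one instrument
# dominates. Features are synthetic placeholders.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-ins for feature vectors of two instruments (8 features each).
clean_a = rng.normal(loc=0.0, scale=1.0, size=(50, 8))  # instrument A, isolated
clean_b = rng.normal(loc=5.0, scale=1.0, size=(50, 8))  # instrument B, isolated

# Mimic the training idea: augment each isolated sound with a
# lower-amplitude interfering component.
mixed_a = clean_a + rng.normal(scale=0.3, size=clean_a.shape)
mixed_b = clean_b + rng.normal(scale=0.3, size=clean_b.shape)

X_train = np.vstack([clean_a, clean_b, mixed_a, mixed_b])
y_train = np.array([0] * 50 + [1] * 50 + [0] * 50 + [1] * 50)

# Non-linear kernel, as in the paper; C and gamma values are illustrative.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_train, y_train)

# "Test mixes": sums of both instruments, one dominating in amplitude.
test_mix_a = 0.8 * rng.normal(0.0, 1.0, (20, 8)) + 0.2 * rng.normal(5.0, 1.0, (20, 8))
test_mix_b = 0.2 * rng.normal(0.0, 1.0, (20, 8)) + 0.8 * rng.normal(5.0, 1.0, (20, 8))
X_test = np.vstack([test_mix_a, test_mix_b])
y_test = np.array([0] * 20 + [1] * 20)

accuracy = (clf.predict(X_test) == y_test).mean()
```

On this toy data the dominating component keeps each test mix close to its instrument's training cluster, so the RBF-kernel SVM recovers the dominating label well; the real task is far harder because same-pitch sounds overlap heavily in feature space.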

Keywords

Music information retrieval · Instrument sound recognition

Notes

Acknowledgements

This work was supported by the National Science Foundation under grant IIS-0414815, and also by the Research Center of PJIIT, supported by the Polish National Committee for Scientific Research (KBN).

The authors would like to express their thanks to Xin Zhang from the University of North Carolina at Charlotte for her help with data parameterization. We are also grateful to Zbigniew W. Raś from UNC-Charlotte for fruitful discussions. Special thanks to Jianhua Chen from Louisiana State University and to Ana Carolina Lorena from Universidade Federal do ABC, Brazil, for kind comments regarding support vector machines. Our sincere thanks go to Alan Barton from the National Research Council Canada for fruitful consultations, and to the anonymous ISMIS and JIIS reviewers whose detailed comments have significantly helped us shape this paper.


Copyright information

© Springer Science+Business Media, LLC 2009

Authors and Affiliations

  1. Polish–Japanese Institute of Information Technology, Warsaw, Poland
  2. University of Life Sciences in Lublin, Lublin, Poland
