Influence of Feature Sets on Precision, Recall, and Accuracy of Identification of Musical Instruments in Audio Recordings

  • Elżbieta Kubera
  • Alicja A. Wieczorkowska
  • Magdalena Skrzypiec
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8502)

Abstract

In this paper we investigate how various feature sets influence the precision, recall, and accuracy of identifying multiple instruments in polyphonic recordings. Our experiments were performed on classical music and on the musical instruments typical of this repertoire. Five feature sets were investigated. The results show that precision and recall vary to a great extent, beyond the usual trade-off between them, whereas accuracy remains relatively stable. The results also depend on the polyphony level of particular pieces of music; the investigated material ranges in polyphony from a two-instrument duet (with piano) to symphonies.
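The interplay between these three measures is easiest to see on a small multi-label example. The sketch below (Python, a hypothetical illustration rather than the paper's evaluation code) scores frame-level instrument predictions against ground truth; with a realistic instrument vocabulary, accuracy is dominated by correctly recognised absences, which is why it can stay stable while precision and recall move apart.

```python
# A minimal sketch (not the authors' code): one common way to score
# multi-label, multi-instrument predictions at the frame level.
# The instrument vocabulary and the example frames are hypothetical.

INSTRUMENTS = ["piano", "violin", "cello", "flute", "oboe", "clarinet"]

def frame_metrics(y_true, y_pred, vocabulary=INSTRUMENTS):
    """y_true, y_pred: one set of instrument labels per analysed frame."""
    tp = fp = fn = tn = 0
    for truth, pred in zip(y_true, y_pred):
        for inst in vocabulary:
            if inst in pred and inst in truth:
                tp += 1          # instrument correctly detected
            elif inst in pred:
                fp += 1          # spurious detection
            elif inst in truth:
                fn += 1          # missed instrument
            else:
                tn += 1          # correctly reported as absent
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, accuracy

# Two hypothetical frames from a duet: one missed label, one spurious label.
truth = [{"piano", "violin"}, {"piano"}]
pred = [{"piano"}, {"piano", "flute"}]
print(frame_metrics(truth, pred))  # ~(0.67, 0.67, 0.83): accuracy buoyed by true negatives
```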

Keywords

Random Forest · Musical Instrument · Classical Music · Audio Data · Music Information Retrieval



Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Elżbieta Kubera (1)
  • Alicja A. Wieczorkowska (2)
  • Magdalena Skrzypiec (3)
  1. University of Life Sciences in Lublin, Lublin, Poland
  2. Polish-Japanese Institute of Information Technology, Warsaw, Poland
  3. Maria Curie-Skłodowska University in Lublin, Lublin, Poland
