Multi-label Ferns for Efficient Recognition of Musical Instruments in Recordings

  • Miron B. Kursa
  • Alicja A. Wieczorkowska
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8502)


Abstract

In this paper we introduce multi-label ferns and apply this technique to the automatic classification of musical instruments in audio recordings. We compare the performance of the proposed method with that of a set of binary random ferns, using jazz recordings as input data. Our main result is much faster classification with a higher F-score; we also achieve a substantial reduction in model size.
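As background for the abstract, the sketch below illustrates the general random-fern classifier (in the style of Özuysal et al.), which the paper extends to the multi-label case. This is an illustrative toy implementation, not the authors' method: a fern applies a fixed sequence of D random binary threshold tests to a feature vector, the resulting D bits index one of 2^D leaves holding smoothed class counts, and an ensemble of ferns combines leaf probabilities naive-Bayes style. All names and parameters here are invented for illustration.

```python
import math
import random
from collections import defaultdict

class Fern:
    """One random fern: D random threshold tests over the feature vector."""

    def __init__(self, n_features, depth, rng):
        # Each test asks: "is feature f greater than threshold t?"
        self.tests = [(rng.randrange(n_features), rng.random())
                      for _ in range(depth)]
        # leaf index -> {class label: training count}
        self.counts = defaultdict(lambda: defaultdict(int))

    def leaf(self, x):
        # The D binary test outcomes form a D-bit leaf index.
        idx = 0
        for f, t in self.tests:
            idx = (idx << 1) | (x[f] > t)
        return idx

    def fit(self, X, y):
        for x, label in zip(X, y):
            self.counts[self.leaf(x)][label] += 1

    def log_prob(self, x, labels):
        # Add-one smoothed log class probabilities at this sample's leaf.
        c = self.counts[self.leaf(x)]
        total = sum(c.values()) + len(labels)
        return {l: math.log((c[l] + 1) / total) for l in labels}

def predict(ferns, x, labels):
    # Naive-Bayes combination: sum log probabilities across ferns.
    score = {l: 0.0 for l in labels}
    for fern in ferns:
        for l, lp in fern.log_prob(x, labels).items():
            score[l] += lp
    return max(score, key=score.get)

rng = random.Random(0)
# Toy data: class 0 has low feature values, class 1 high ones.
X = ([[rng.uniform(0.0, 0.4) for _ in range(4)] for _ in range(50)] +
     [[rng.uniform(0.6, 1.0) for _ in range(4)] for _ in range(50)])
y = [0] * 50 + [1] * 50

ferns = [Fern(n_features=4, depth=3, rng=rng) for _ in range(10)]
for fern in ferns:
    fern.fit(X, y)

print(predict(ferns, [0.1, 0.2, 0.1, 0.3], labels=[0, 1]))
print(predict(ferns, [0.9, 0.8, 0.9, 0.7], labels=[0, 1]))
```

A multi-label variant, as studied in the paper, would keep a per-label score and threshold each one independently instead of taking the single arg-max; the speed advantage over forests comes from the fixed test structure, which replaces tree traversal with a handful of comparisons and a table lookup.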


Keywords: Random Forest · Musical Instrument · Audio Data · Music Information Retrieval · Prediction Speed




Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Miron B. Kursa (1)
  • Alicja A. Wieczorkowska (2)
  1. Interdisciplinary Centre for Mathematical and Computational Modelling (ICM), University of Warsaw, Warsaw, Poland
  2. Polish-Japanese Institute of Information Technology, Warsaw, Poland
