
Automatic musical instrument classification using fractional Fourier transform-based MFCC features and counter propagation neural network

Published in: Journal of Intelligent Information Systems

Abstract

This paper presents a novel feature extraction scheme for automatic classification of musical instruments using Fractional Fourier Transform (FrFT)-based Mel Frequency Cepstral Coefficient (MFCC) features. The classifier model for the proposed system has been built using a Counter Propagation Neural Network (CPNN). Compared with conventional features, the proposed features maximize the discrimination between instruments of different classes while minimizing the variation among instruments of the same class. They also show a significant improvement in classification accuracy and in robustness against Additive White Gaussian Noise (AWGN). The McGill University Master Samples (MUMS) sound database has been used to evaluate the performance of the system.
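The pipeline the abstract describes, an FrFT in place of the ordinary DFT inside MFCC extraction, can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the `dfrft` below realizes the discrete FrFT as a fractional matrix power of the unitary DFT matrix (the paper may use a different discretization), and the frame length, transform order `a`, and filterbank sizes are assumed values.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power
from scipy.fftpack import dct


def dfrft(x, a):
    """Discrete fractional Fourier transform of order a.

    Illustrative definition: raise the unitary DFT matrix to the
    fractional power a, so a = 1 reduces to the ordinary (unitary) DFT.
    """
    n = len(x)
    F = np.fft.fft(np.eye(n), axis=0) / np.sqrt(n)  # unitary DFT matrix
    return fractional_matrix_power(F, a) @ x


def mel_filterbank(sr, n_fft, n_filters):
    """Triangular mel-spaced filters over the positive-frequency bins."""
    hz_to_mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel_to_hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_points = np.linspace(0.0, hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_points) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb


def frft_mfcc(frame, sr, a=0.95, n_filters=20, n_ceps=13):
    """FrFT-based MFCCs for one analysis frame.

    Same steps as classical MFCC extraction, with the DFT replaced by
    a fractional-order transform: window -> |FrFT|^2 -> mel filterbank
    -> log -> DCT, keeping the first n_ceps coefficients.
    """
    n = len(frame)
    windowed = frame * np.hanning(n)
    power = np.abs(dfrft(windowed, a)) ** 2
    energies = mel_filterbank(sr, n, n_filters) @ power[: n // 2 + 1]
    return dct(np.log(energies + 1e-10), type=2, norm="ortho")[:n_ceps]
```

Setting `a = 1` recovers conventional MFCCs, so the fractional order is the extra degree of freedom that distinguishes the proposed features from the standard ones.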




Acknowledgments

The authors thank the anonymous reviewers for their constructive comments and valuable suggestions, which significantly improved the quality of this manuscript.

Author information


Corresponding author

Correspondence to D. G. Bhalke.


Cite this article

Bhalke, D.G., Rao, C.B.R. & Bormane, D.S. Automatic musical instrument classification using fractional fourier transform based- MFCC features and counter propagation neural network. J Intell Inf Syst 46, 425–446 (2016). https://doi.org/10.1007/s10844-015-0360-9

