
A Theoretic Approach to Music Genre Recognition from Musical Features Using Single-Layer Feedforward Neural Network

  • Sourav Das
  • Anup Kumar Kolya
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 755)

Abstract

Musical genres are categorical labels used to distinguish between different types of music; each genre differs from the others in certain musical features. Before the computational intelligence era, music genre categorization was traditionally performed manually, largely because modern human-computer interaction concepts and sufficient computational processing power were not yet available. With the ever-growing volume of digital music and its rich feature sets, however, genre recognition using neural networks has recently produced a wide range of results across a variety of experiments. By extracting and studying such features, applying suitable neural network algorithms and techniques, and exploring new recognition approaches on a dataset already used in established research, we aim to gather new insights about genre classification and to identify future directions that could improve computational music genre recognition, the decomposition of the clustered data corpus, and its subsequent reconstruction as a whole.
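As an illustration of the kind of classifier the title describes, the following is a minimal sketch of a single-layer feedforward network (a softmax classifier trained with cross-entropy) applied to pre-extracted musical feature vectors. It is not the authors' implementation: the feature dimension, genre count, learning rate, and the synthetic placeholder data are all assumptions made here for demonstration only.

```python
# Minimal sketch (not the paper's implementation): a single-layer feedforward
# network mapping per-track feature vectors to genre scores, trained with
# softmax cross-entropy. All sizes and hyperparameters below are illustrative
# assumptions, and the data is synthetic placeholder data.
import numpy as np

rng = np.random.default_rng(0)

n_features = 30   # assumed number of extracted musical descriptors per track
n_genres = 10     # assumed number of genre classes (e.g. as in GTZAN)
n_tracks = 500

# Placeholder feature matrix and genre labels standing in for real extractions.
X = rng.normal(size=(n_tracks, n_features))
y = rng.integers(0, n_genres, size=n_tracks)

# Single layer: one weight matrix and bias from features to genre scores.
W = np.zeros((n_features, n_genres))
b = np.zeros(n_genres)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

lr = 0.1
for epoch in range(200):
    probs = softmax(X @ W + b)             # forward pass
    probs[np.arange(n_tracks), y] -= 1.0   # gradient of cross-entropy w.r.t. logits
    grad = probs / n_tracks
    W -= lr * (X.T @ grad)                 # gradient-descent update
    b -= lr * grad.sum(axis=0)

pred = np.argmax(softmax(X @ W + b), axis=1)
print("training accuracy:", (pred == y).mean())
```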

Keywords

Music genre recognition · Artificial neural network · Single-layer feedforward neural network · Unsupervised learning


Copyright information

© Springer Nature Singapore Pte Ltd. 2019

Authors and Affiliations

  1. Department of CSE, RCC Institute of Information Technology, Kolkata, India