
Polyphonic Music Generation by Modeling Temporal Dependencies Using a RNN-DBN

  • Kratarth Goel
  • Raunaq Vohra
  • J. K. Sahoo
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8681)

Abstract

In this paper, we propose a generic technique for modeling temporal dependencies and sequences using a combination of a recurrent neural network (RNN) and a Deep Belief Network (DBN). Our technique, the RNN-DBN, combines the memory state of the RNN, which provides temporal information, with a multi-layer DBN, which provides a high-level representation of the data. This makes RNN-DBNs well suited to sequence generation. Further, using a DBN in conjunction with the RNN gives the model a significantly richer data representation than a single Restricted Boltzmann Machine (RBM). We apply this technique to the task of polyphonic music generation.
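To illustrate how such an architecture couples a recurrent state with a Boltzmann-machine sampler, here is a minimal NumPy sketch in the spirit of the related RNN-RBM of Boulanger-Lewandowski et al., with a single RBM standing in for the full DBN stack. All dimensions, parameter names, and the Gibbs-sampling schedule are illustrative assumptions, and the weights are random rather than trained (a real model would be trained with contrastive divergence and backpropagation through time):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: n_v visible units (88-note piano roll),
# n_h RBM hidden units, n_u RNN state units.
n_v, n_h, n_u = 88, 32, 16

# Parameters (random here; assumed names, not from the paper).
W = rng.normal(0, 0.01, (n_v, n_h))    # RBM weights
Wuh = rng.normal(0, 0.01, (n_u, n_h))  # RNN state -> hidden bias
Wuv = rng.normal(0, 0.01, (n_u, n_v))  # RNN state -> visible bias
Wvu = rng.normal(0, 0.01, (n_v, n_u))  # sampled frame -> RNN state
Wuu = rng.normal(0, 0.01, (n_u, n_u))  # RNN recurrence
bu = np.zeros(n_u)                     # RNN state bias

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_frame(u, n_gibbs=15):
    """Sample one piano-roll frame from the conditional RBM whose
    biases are determined by the current RNN state u."""
    bh = Wuh.T @ u                       # time-dependent hidden bias
    bv = Wuv.T @ u                       # time-dependent visible bias
    v = (rng.random(n_v) < sigmoid(bv)).astype(float)
    for _ in range(n_gibbs):             # block Gibbs sampling
        h = (rng.random(n_h) < sigmoid(W.T @ v + bh)).astype(float)
        v = (rng.random(n_v) < sigmoid(W @ h + bv)).astype(float)
    return v

def generate(n_steps):
    """Generate a sequence: at each time step, sample notes from the
    conditional RBM, then feed the sample back into the RNN state."""
    u = np.zeros(n_u)
    frames = []
    for _ in range(n_steps):
        v = sample_frame(u)
        u = np.tanh(bu + Wvu.T @ v + Wuu.T @ u)  # RNN state update
        frames.append(v)
    return np.stack(frames)

roll = generate(8)  # 8 binary frames of an 88-note piano roll
```

The design choice this sketch captures is the one the abstract describes: the generative distribution over each frame is conditioned on a recurrent summary of everything sampled so far, so the temporal structure lives in the RNN state while the per-frame note correlations live in the Boltzmann-machine layer(s).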

Keywords

Deep architectures, recurrent neural networks, music generation, creative machine learning, Deep Belief Networks, generative models



Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Kratarth Goel 1
  • Raunaq Vohra 1
  • J. K. Sahoo 1
  1. Birla Institute of Technology and Science, Pilani, India
