Part of the book series: Studies in Computational Intelligence (SCI, volume 83)

The liquid state machine (LSM) is a relatively new recurrent neural network (RNN) architecture for time-series classification problems. The LSM has some attractive properties, such as fast training compared with more traditional RNNs, biological plausibility, and the ability to deal with highly nonlinear dynamics. This chapter presents the democratic LSM, an extension of the basic LSM that applies majority voting along two dimensions. First, instead of producing a single classification at the end of the time series, classifications made after different time periods are combined. Second, instead of using a single LSM, the votes of multiple LSMs in an ensemble are combined. The results show that the democratic LSM significantly outperforms the basic LSM and other methods on two music composer classification tasks, separating Haydn from Mozart and Beethoven from Bach, and on a music instrument classification task, distinguishing a flute from a bass guitar.
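
A minimal sketch of the two-dimensional majority vote described above, assuming each LSM in the ensemble emits a class label after every intermediate time period; the function name, data layout, and example labels are illustrative assumptions rather than the authors' actual readout interface:

```python
from collections import Counter

def democratic_vote(predictions):
    """Combine class labels by majority vote across both voting dimensions.

    predictions: one inner list per LSM in the ensemble, holding the class
    label that LSM predicted after each intermediate time period.
    """
    # Flatten both voting dimensions: ensemble members and time periods.
    votes = [label for per_lsm in predictions for label in per_lsm]
    # The most frequent label wins; ties are resolved arbitrarily here.
    return Counter(votes).most_common(1)[0][0]

# Hypothetical example: 3 LSMs, each classifying after 4 time periods
# (0 = Haydn, 1 = Mozart in a two-composer task).
ensemble_predictions = [
    [0, 0, 1, 0],
    [1, 0, 0, 0],
    [0, 1, 0, 1],
]
print(democratic_vote(ensemble_predictions))  # majority label: 0
```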

Copyright information

© 2008 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Pape, L., de Gruijl, J., Wiering, M. (2008). Democratic Liquid State Machines for Music Recognition. In: Prasad, B., Prasanna, S.R.M. (eds) Speech, Audio, Image and Biomedical Signal Processing using Neural Networks. Studies in Computational Intelligence, vol 83. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-75398-8_9

  • DOI: https://doi.org/10.1007/978-3-540-75398-8_9

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-75397-1

  • Online ISBN: 978-3-540-75398-8

  • eBook Packages: Engineering (R0)
