Music Search and Recommendation

  • Karlheinz Brandenburg
  • Christian Dittmar
  • Matthias Gruhne
  • Jakob Abeßer
  • Hanna Lukashevich
  • Peter Dunker
  • Daniel Gärtner
  • Kay Wolter
  • Holger Grossmann
Abstract

In the last ten years, the way we listen to music has changed drastically: we used to go to record stores, or relied on low bit-rate audio coding to obtain some music and store it on our PCs. Nowadays, millions of songs are within reach via online distributors, and some music lovers already have terabytes of music on their hard drives. Users are no longer desperate to get music, but to select it, to find the music they love. A number of technologies have been developed to address these new requirements: there are techniques to identify music and ways to search for it, and recommendation is a hot topic, as is organizing music into playlists.

Keywords

Gaussian Mixture Model · Collaborative Filter · Audio Feature · Linear Predictive Code · Music Information Retrieval
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.
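To make the keyword concepts concrete, the following is a minimal, illustrative sketch (not the authors' method from this chapter) of the common "bag-of-frames" idea in music information retrieval: describe each song by frame-level audio features, model them with a Gaussian mixture model, and compare songs by an approximate symmetrised Kullback-Leibler divergence. It assumes scikit-learn and NumPy are available; the random feature matrices are stand-ins for real frame-level descriptors such as MFCCs.

```python
import numpy as np
from sklearn.mixture import GaussianMixture


def fit_song_model(features, n_components=8, seed=0):
    """Fit a GMM to an (n_frames, n_dims) matrix of frame-level audio features."""
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="diag",
                          random_state=seed)
    gmm.fit(features)
    return gmm


def symmetric_kl(gmm_a, gmm_b, n_samples=2000):
    """Monte-Carlo estimate of the symmetrised KL divergence between two GMMs."""
    xa, _ = gmm_a.sample(n_samples)
    xb, _ = gmm_b.sample(n_samples)
    kl_ab = np.mean(gmm_a.score_samples(xa) - gmm_b.score_samples(xa))
    kl_ba = np.mean(gmm_b.score_samples(xb) - gmm_a.score_samples(xb))
    return kl_ab + kl_ba


if __name__ == "__main__":
    rng = np.random.default_rng(42)
    # Stand-ins for three songs' frame-level features (e.g. 13 MFCCs per frame);
    # in practice these would come from an audio feature extractor.
    song_a = rng.normal(0.0, 1.0, size=(1500, 13))
    song_b = rng.normal(0.1, 1.0, size=(1500, 13))  # similar to song A
    song_c = rng.normal(3.0, 2.0, size=(1500, 13))  # clearly different

    models = {name: fit_song_model(f)
              for name, f in [("A", song_a), ("B", song_b), ("C", song_c)]}

    print("d(A,B) =", symmetric_kl(models["A"], models["B"]))
    print("d(A,C) =", symmetric_kl(models["A"], models["C"]))
```

In such a setup, the smaller the divergence, the more similar two songs are considered, which can drive similarity search, playlist generation, or content-based recommendation.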


Copyright information

© Springer Science+Business Media, LLC 2009

Authors and Affiliations

  • Karlheinz Brandenburg (1)
  • Christian Dittmar (1)
  • Matthias Gruhne (1)
  • Jakob Abeßer (1)
  • Hanna Lukashevich (1)
  • Peter Dunker (1)
  • Daniel Gärtner (1)
  • Kay Wolter (1)
  • Holger Grossmann (1)

  1. Fraunhofer IDMT, Ilmenau, Germany