We have built a generic audio database to serve as the testbed for the proposed algorithms. It consists of the following contents: 1000 clips of environmental audio, including the sounds of applause, animals, footsteps, rain, explosions, knocking, vehicles, and so on; 100 pieces of classical music played with 10 kinds of instruments; 100 other music pieces of different styles (classical, jazz, blues, light music, Chinese and Indian folk music, etc.); 50 clips of songs sung by male, female, or child voices, with or without instrumental accompaniment; 200 speech pieces in different languages (English, German, French, Spanish, Japanese, Chinese, etc.) and with different levels of noise; 50 clips of speech with a music background; 40 clips of environmental sound with a music background; and 20 samples of silence segments with different types of low-volume noise (clicks, brown noise, pink noise, and white noise). These short clips (with durations ranging from several seconds to more than one minute) are used to test audio classification performance. We also collected dozens of longer audio clips recorded from movies or video programs. These pieces last from several minutes to half an hour and contain various types of audio; they are used to test performance on audiovisual data segmentation and indexing.
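The composition of the corpus described above can be summarized as a simple manifest. The sketch below is only an illustration of how such a test database might be catalogued; the category names and notes are paraphrased from the text, and the `AudioCategory` layout itself is an assumption, not part of the original work.

```python
# Hypothetical manifest of the test corpus; clip counts follow the text,
# while the dataclass layout and category names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AudioCategory:
    name: str
    clip_count: int
    notes: str

CORPUS = [
    AudioCategory("environmental", 1000, "applause, animals, footsteps, rain, explosions, ..."),
    AudioCategory("classical_music", 100, "played with 10 kinds of instruments"),
    AudioCategory("other_music", 100, "classical, jazz, blues, light music, folk, ..."),
    AudioCategory("songs", 50, "male/female/child voices, with or without accompaniment"),
    AudioCategory("speech", 200, "multiple languages, varying noise levels"),
    AudioCategory("speech_with_music", 50, "speech over a music background"),
    AudioCategory("env_with_music", 40, "environmental sound over a music background"),
    AudioCategory("silence", 20, "low-volume clicks, brown/pink/white noise"),
]

# Total number of short clips used for the classification experiments.
total_clips = sum(c.clip_count for c in CORPUS)
print(total_clips)  # -> 1560
```

Summing the per-category counts confirms the corpus holds 1560 short clips, in addition to the dozens of longer recordings used for segmentation and indexing tests.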





Copyright information

© Springer Science+Business Media New York 2001

Authors and Affiliations

  • Tong Zhang (1)
  • C.-C. Jay Kuo (2)
  1. Integrated Media Systems Center, University of Southern California, Los Angeles, USA
  2. Department of Electrical Engineering — Systems, University of Southern California, Los Angeles, USA
