We have built a generic audio database to serve as the testbed for the proposed algorithms. It consists of the following contents: 1000 clips of environmental audio, including applause, animal sounds, footsteps, rain, explosions, knocking, vehicles, and so on; 100 pieces of classical music played on 10 kinds of instruments; 100 other music pieces of different styles (classical, jazz, blues, light music, Chinese and Indian folk music, etc.); 50 clips of songs sung by male, female, or child singers, with or without instrumental accompaniment; 200 speech pieces in different languages (English, German, French, Spanish, Japanese, Chinese, etc.) and with different levels of noise; 50 clips of speech with a music background; 40 clips of environmental sound with a music background; and 20 samples of silence segments with different types of low-volume noise (clicks, brown noise, pink noise, and white noise). These short sound clips (ranging in duration from several seconds to more than one minute) are used to test audio classification performance. We also collected dozens of longer audio clips recorded from movies or video programs. These pieces last from several minutes to half an hour and contain various types of audio. They are used to test the performance of audiovisual data segmentation and indexing.
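The composition of the short-clip portion of the database can be summarized as follows. This is a minimal sketch: the counts come from the description above, but the dictionary layout and names are illustrative, not part of the original work.

```python
# Hypothetical summary of the short-clip test database; category names
# and clip counts are taken from the description above, while the data
# structure itself is only an illustration.
TEST_DB = {
    "environmental audio": 1000,
    "classical music (10 instruments)": 100,
    "other music styles": 100,
    "songs (male/female/children)": 50,
    "speech (multi-language, varying noise)": 200,
    "speech with music background": 50,
    "environmental sound with music background": 40,
    "silence with low-volume noise": 20,
}

# Sanity check: total number of short clips across all categories.
total_clips = sum(TEST_DB.values())
print(f"Total short clips: {total_clips}")  # → Total short clips: 1560
```

Summing the categories shows the short-clip testbed contains 1560 clips in total, dominated by environmental audio.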
Keywords: Sound Effect; News Item; Index Table; Environmental Sound; Music Background