This task is audio fingerprinting, also known as query by (exact but noisy) example. Several companies have launched services based on this technology, including Shazam, SoundHound, IntoNow, and Viggle. Although the technology has been around for years, there is no benchmark dataset for evaluation. This task is the first step toward building an extensive corpus for evaluating audio fingerprinting methodologies.
- 10,000 songs (*.mp3) in the database, with exactly one song corresponding to each query. (That is, there are no out-of-vocabulary queries in the query set.) 1,000 of the files are from the GTZAN dataset; the others are mainly English and Chinese pop songs. This dataset is hidden and not available for download. (Note that these files may differ in number of channels (mono or stereo), sampling rate, and bit resolution.)
The query set has two parts:
- 4,630 clips in WAV format: these are hidden and not available for download.
- 1,062 clips in WAV format: these recordings are noisy versions of the GTZAN music genre dataset. You can download this part of the query set via this link.
All queries are mono recordings of 8-12 seconds, with a 44.1 kHz sampling rate and 16-bit resolution. They were recorded with different brands of smartphones, at various locations with various kinds of environmental noise.
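As a quick sanity check, a sketch like the following (using only Python's standard `wave` module; the helper name and file path are purely illustrative) can confirm that a clip matches the expected query format:

```python
# Minimal sketch: inspect a query clip to confirm the expected format
# (mono, 44.1 kHz, 16-bit, roughly 8-12 seconds). Path is illustrative only.
import wave

def describe_query(path):
    with wave.open(path, "rb") as wav:
        channels = wav.getnchannels()      # expected: 1 (mono)
        rate = wav.getframerate()          # expected: 44100 Hz
        width = wav.getsampwidth()         # expected: 2 bytes (16-bit)
        seconds = wav.getnframes() / rate  # expected: roughly 8-12 s
    return channels, rate, width, seconds

print(describe_query("./AFP/query/q000001.wav"))
```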
The evaluation is based on the full query set (both parts), with the top-1 hit rate as the performance index.
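For illustration, the top-1 hit rate can be computed directly from the tab-separated result file described below. This sketch assumes a hypothetical ground-truth file in the same two-column layout; both file names are placeholders:

```python
# Minimal sketch of the top-1 hit-rate computation, assuming a ground-truth
# file with the same "query<TAB>correct database file" layout as the result file.
def load_pairs(path):
    pairs = {}
    with open(path) as f:
        for line in f:
            if line.strip():
                query, answer = line.rstrip("\n").split("\t")
                pairs[query] = answer
    return pairs

ground_truth = load_pairs("groundtruth.txt")  # hypothetical ground-truth file
results = load_pairs("result.txt")            # submitted result file

hits = sum(1 for q, ans in ground_truth.items() if results.get(q) == ans)
print("Top-1 hit rate: %.4f" % (hits / len(ground_truth)))
```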
Participants are required to submit their algorithm broken into the following two parts:
1. Database Builder
builder %fileList4db% %dir4db%
where %fileList4db% is a file containing the list of database audio files, named by the convention uniqueKey.mp3. For example:
./AFP/database/000001.mp3
./AFP/database/000002.mp3
./AFP/database/000003.mp3
./AFP/database/000004.mp3
...
The output file(s), containing all the information of the database to be used for audio fingerprinting, should be placed in the directory %dir4db%. The total size of the database file(s) is restricted to a certain amount, as explained below.
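A minimal sketch of how a builder might honor this calling convention is shown below. The fingerprint extraction itself is left as a stub, and the output file name fingerprints.pkl is only an assumption; only the I/O contract (read the list file, write the database into %dir4db%) is illustrated:

```python
# Sketch of a builder entry point matching "builder %fileList4db% %dir4db%".
import os
import pickle
import sys

def extract_fingerprint(audio_path):
    # Placeholder: decode the mp3 (any channel layout / sampling rate /
    # bit resolution) and compute the submission's fingerprint representation.
    raise NotImplementedError

def main(file_list_path, db_dir):
    os.makedirs(db_dir, exist_ok=True)
    database = {}
    with open(file_list_path) as f:
        for line in f:
            audio_path = line.strip()
            if audio_path:
                database[audio_path] = extract_fingerprint(audio_path)
    # Total size must stay within the 50 KB-per-minute-of-music budget below.
    with open(os.path.join(db_dir, "fingerprints.pkl"), "wb") as out:
        pickle.dump(database, out)

if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])
```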
2. Matcher
matcher %fileList4query% %dir4db% %resultFile%
where %fileList4query% is a file containing the list of query clips. For example:
./AFP/query/q000001.wav
./AFP/query/q000002.wav
./AFP/query/q000003.wav
./AFP/query/q000004.wav
...
The result file gives the retrieved result for each query, one line per query, in the format:
%queryFilePath%	%dbFilePath%
where the two fields are separated by a tab. Here is a more specific example:
./AFP/query/q000001.wav	./AFP/database/0000004.mp3
./AFP/query/q000002.wav	./AFP/database/0000054.mp3
./AFP/query/q000003.wav	./AFP/database/0001002.mp3
...
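A matcher skeleton following the same conventions might look like the sketch below. The actual lookup is left as a stub, and the database file name fingerprints.pkl is an assumption carried over from the builder sketch; the point is the tab-separated result format:

```python
# Sketch of a matcher entry point matching
# "matcher %fileList4query% %dir4db% %resultFile%".
import os
import pickle
import sys

def best_match(query_path, database):
    # Placeholder: fingerprint the query and return the path of the
    # best-matching database song.
    raise NotImplementedError

def main(query_list_path, db_dir, result_path):
    with open(os.path.join(db_dir, "fingerprints.pkl"), "rb") as f:
        database = pickle.load(f)
    with open(query_list_path) as queries, open(result_path, "w") as out:
        for line in queries:
            query_path = line.strip()
            if query_path:
                # One "query<TAB>retrieved database file" line per query.
                out.write("%s\t%s\n" % (query_path, best_match(query_path, database)))

if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2], sys.argv[3])
```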
Time and hardware limits
Because more features extracted for AFP almost always lead to better accuracy, we need to impose hard limits on runtime and storage. (These limits also implicitly constrain memory usage.) The time and storage limits for each step are shown in the following table:
| Step | Time limit | Storage limit |
| --- | --- | --- |
| builder | 1/50 of real time. (For instance, a 4-minute music clip should take 4.8 seconds on average to add to the database; for a database of 10,000 songs, the total time should be around 10000 × 4 / 60 / 50 = 13.33 hours.) | 50 KB per minute of music. (For a database of 10,000 songs, the total storage should be around 50 × 10000 × 4 / 1,000,000 = 2 GB.) |
| matcher | 2 seconds per query of around 10 seconds. (For 5,692 queries of around 10 seconds, the total query time should be around 2 × 5692 / 3600 = 3.2 hours.) | None |
Submissions that exceed these limitations may not receive a result.
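For reference, the budgets quoted in the table can be re-derived with a few lines of arithmetic, assuming 10,000 songs of about 4 minutes each and 4,630 + 1,062 = 5,692 queries:

```python
# Quick arithmetic check of the budgets implied by the table above.
songs, minutes_per_song = 10_000, 4
queries = 4_630 + 1_062

build_hours = songs * minutes_per_song / 50 / 60          # builder: 1/50 of real time
db_gigabytes = 50 * minutes_per_song * songs / 1_000_000  # 50 KB per minute of music
match_hours = 2 * queries / 3600                          # matcher: 2 s per query

print(build_hours)   # ~13.33 hours
print(db_gigabytes)  # ~2 GB
print(match_hours)   # ~3.2 hours
```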