2014:Audio Fingerprinting

From MIREX Wiki
Revision as of 20:01, 6 August 2014 by Jyh-Shing Roger Jang (Time and hardware limits)

Description

This task is audio fingerprinting, also known as query by (exact but noisy) example. Several companies have launched services based on this technology, including Shazam, SoundHound, IntoNow, and Viggle. Though the technology has been around for years, there is no benchmark dataset for its evaluation. This task is the first step toward building an extensive corpus for evaluating audio fingerprinting methodologies.

Data

Database

  • 10,000 songs (*.mp3) in the database, in which there is exactly one song corresponding to each query. (That is, there are no out-of-vocabulary queries in the query set.) This dataset is hidden and not available for download.

Query set

The query set has two parts:

  • 4,630 clips in WAV format: these are hidden and not available for download.
  • 1,062 clips in WAV format: these recordings are noisy versions of George Tzanetakis's music genre (GTZAN) dataset. You can download this part of the query set via this link

All query clips are mono recordings of 8-12 seconds, with a 44.1 kHz sampling rate and 16-bit resolution. They were recorded with different brands of smartphones at various locations, with various kinds of environmental noise.
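As a quick sanity check before submitting, the query format above (mono, 44.1 kHz, 16-bit, 8-12 s) can be verified with the standard-library `wave` module. The function name below is illustrative, not part of the task:

```python
import wave

def check_query_clip(path):
    """Return (ok, info) for a WAV file against the task's query spec:
    mono, 44.1 kHz sampling rate, 16-bit samples, 8-12 seconds long."""
    with wave.open(path, "rb") as w:
        channels = w.getnchannels()
        rate = w.getframerate()
        width = w.getsampwidth()          # bytes per sample; 2 == 16-bit
        duration = w.getnframes() / rate  # seconds
    ok = (channels == 1 and rate == 44100 and width == 2
          and 8.0 <= duration <= 12.0)
    return ok, {"channels": channels, "rate": rate,
                "bits": 8 * width, "duration": duration}
```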

Evaluation Procedures

The evaluation is based on the query set (two parts), with top-1 hit rate being the performance index.
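The top-1 hit rate can be computed from the matcher's tab-separated result file (described below) against a ground-truth mapping from query path to correct database path. This is a sketch of the metric, not the official evaluation script:

```python
def top1_hit_rate(result_file, ground_truth):
    """Fraction of queries whose retrieved database file matches the
    ground truth. `result_file` is tab-separated (query <TAB> db file);
    `ground_truth` maps query path -> correct database path."""
    hits = total = 0
    with open(result_file) as f:
        for line in f:
            if not line.strip():
                continue
            query, retrieved = line.rstrip("\n").split("\t")
            total += 1
            hits += (retrieved == ground_truth[query])
    return hits / total if total else 0.0
```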

Submission Format

Participants are required to submit the algorithm broken into the following two parts:

1. Database Builder

Command format:

builder %fileList4db% %dir4db%

where %fileList4db% is a file containing the input list of database audio files, each named by a unique key (e.g. uniqueKey.mp3). For example:

./AFP/database/00001.mp3
./AFP/database/00002.mp3
./AFP/database/00003.mp3
./AFP/database/00004.mp3
...

The output file(s), which contain all the database information to be used for audio fingerprinting, should be placed in the directory %dir4db%. The size of the database file(s) is restricted, as explained next.
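The builder's command-line contract can be sketched as follows. This is only scaffolding for the expected I/O: the fingerprint extraction is a stub (it records the file size), and the `afp.db` pickle layout is a hypothetical choice, not something the task mandates. A real submission would replace `extract_fingerprint` with actual audio fingerprinting (e.g. spectral-peak landmark hashing):

```python
import os
import pickle

def extract_fingerprint(audio_path):
    # Placeholder: a real system would decode the audio and extract
    # fingerprint features here. This stub just records the file size
    # so the scaffold runs end to end.
    return os.path.getsize(audio_path)

def builder(file_list_4db, dir_4db):
    """Implements the 'builder %fileList4db% %dir4db%' contract:
    read the list of database audio files and write the index
    into %dir4db%."""
    index = {}
    with open(file_list_4db) as f:
        for line in f:
            path = line.strip()
            if path:
                index[path] = extract_fingerprint(path)
    os.makedirs(dir_4db, exist_ok=True)
    with open(os.path.join(dir_4db, "afp.db"), "wb") as out:
        pickle.dump(index, out)
```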

2. Matcher

Command format:

matcher %fileList4query% %dir4db% %resultFile%

where %fileList4query% is a file containing the list of query clips. For example:

./AFP/query/q0001.wav
./AFP/query/q0002.wav
./AFP/query/q0003.wav
./AFP/query/q0004.wav
...

The result file gives the retrieved result for each query, one per line, in the format:

%queryFilePath%	%dbFilePath%

where these two fields are separated by a tab. Here is a more specific example:

./AFP/query/q0001.wav	./AFP/database/00004.mp3
./AFP/query/q0002.wav	./AFP/database/00054.mp3
./AFP/query/q0003.wav	./AFP/database/01002.mp3
...
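The matcher side of the contract can be sketched the same way. The matching itself is a stub (nearest file size), and the pickled `afp.db` dict mapping database paths to fingerprints is an assumed layout for illustration; only the command signature and the tab-separated output format come from the task:

```python
import os
import pickle

def matcher(file_list_4query, dir_4db, result_file):
    """Implements the 'matcher %fileList4query% %dir4db% %resultFile%'
    contract: match each query against the database and write
    '%queryFilePath%<TAB>%dbFilePath%' lines to the result file."""
    # Assumed database layout: a pickled dict {db path: fingerprint}.
    with open(os.path.join(dir_4db, "afp.db"), "rb") as f:
        index = pickle.load(f)
    with open(file_list_4query) as f, open(result_file, "w") as out:
        for line in f:
            query = line.strip()
            if not query:
                continue
            # Stub matching: pick the database file whose (file-size)
            # fingerprint is closest to the query's file size.
            qsize = os.path.getsize(query)
            best = min(index, key=lambda db: abs(index[db] - qsize))
            out.write("%s\t%s\n" % (query, best))
```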

Time and hardware limits

Since extracting more features for AFP almost always leads to better accuracy, we need to put hard limits on runtime and storage. (The runtime and storage limits also implicitly limit memory usage.) The time and storage limits for each step are shown in the following table:

Step: builder
  Time limit: 1/100 of real time. (For instance, if a music clip has a duration of 4 minutes, building its database entry should take 2.4 seconds on average. For a database of 10,000 songs, the total time should be around 10000*4/60/100 = 6.7 hours.)
  Storage limit: 50 KB per minute of music. (For a database of 10,000 songs, the total database storage should be around 50*10000*4/1000000 = 2 GB.)

Step: matcher
  Time limit: 2 seconds for each query of around 10 seconds. (Thus for 5,692 queries of around 10 seconds, the total query time should be around 2*5692/3600 = 3.2 hours.)
  Storage limit: none.
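The budget figures quoted above follow directly from the per-unit limits; a few lines of arithmetic reproduce them:

```python
# Sanity-check the budget arithmetic quoted in the limits table.
n_songs, song_min = 10000, 4                 # database size and avg song length

builder_sec_per_song = song_min * 60 / 100   # 1/100 of real time -> 2.4 s
builder_hours = n_songs * builder_sec_per_song / 3600   # ~6.7 hours total

storage_gb = 50 * song_min * n_songs / 1e6   # 50 KB/min of music -> ~2 GB

n_queries = 4630 + 1062                      # both query subsets (5,692)
matcher_hours = 2 * n_queries / 3600         # 2 s per query -> ~3.2 hours

assert abs(builder_sec_per_song - 2.4) < 1e-9
assert round(builder_hours, 1) == 6.7
assert storage_gb == 2.0
assert round(matcher_hours, 1) == 3.2
```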

Submissions that exceed these limitations may not receive a result.

Potential Participants

Discussion

name / email

Bibliography