2012:Query by Singing/Humming
Latest revision as of 22:12, 19 July 2012
Description
The text of this section is copied from the 2010 page. Please add your comments and discussions for 2012.
The goal of the Query-by-Singing/Humming (QBSH) task is the evaluation of MIR systems that take as input queries sung or hummed by real-world users. More information can be found in:
- 2009:Query_by_Singing/Humming
- 2008:Query_by_Singing/Humming
- 2007:Query_by_Singing/Humming
- 2006:QBSH:_Query-by-Singing/Humming
Subtask 1: Classic QBSH evaluation
This is the classic QBSH problem: finding the ground-truth MIDI file that matches a user's singing or humming.
- Queries: human singing/humming snippets (.wav). Queries are from Roger Jang's corpus and ThinkIT corpus.
- Database: ground-truth and noise MIDI files (all monophonic), comprising the 48 Roger Jang and 106 ThinkIT ground-truth files along with a cleaned version of the Essen database (2000+ MIDI files, as used last year).
- Output: top-10 candidate list.
- Evaluation: Top-10 hit rate (1 point is scored for a hit in the top 10 and 0 is scored otherwise).
Subtask 2: Variants QBSH evaluation
This subtask is based on Prof. Downie's idea that queries are variants of the "ground-truth" MIDI files. It has become increasingly important since user-contributed singing/humming forms a significant part of the song database to be searched, as evidenced by the QBSH search service at www.midomi.com.
- Queries: human singing/humming snippets (.wav). Queries are from Roger Jang's corpus and ThinkIT corpus.
- Database: human singing/humming snippets (.wav) from all available corpora (excluding the query input being searched).
- Output: top-10 candidate list.
- Evaluation: Top-10 hit rate (1 point is scored for a hit in the top 10 and 0 is scored otherwise).
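As a concrete illustration of the Top-10 hit rate used by both subtasks, here is a minimal sketch. The function and variable names are our own and the candidate IDs are made up; this is not part of the MIREX evaluation tooling.

```python
def top10_hit_rate(results, ground_truth):
    """results: {query id: candidate ids in rank order}
    ground_truth: {query id: correct database id}
    Scores 1 point per query whose ground truth is in the top 10."""
    hits = sum(
        1 for query, candidates in results.items()
        if ground_truth.get(query) in candidates[:10]
    )
    return hits / len(results)

# Made-up example: 2 of 3 queries have their ground truth in the top 10.
results = {
    "q1": ["00025", "01003"],
    "q2": ["01547", "02313"],
    "q3": ["03142"],
}
truth = {"q1": "01003", "q2": "09999", "q3": "03142"}
print(top10_hit_rate(results, truth))  # 2 of 3 queries hit
```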
Following Rainer Typke's suggestion, participants are encouraged to submit separate tracker and matcher modules rather than integrated ones, so that algorithms can share intermediate steps. Trackers and matchers from different submissions can then work together through the same pre-defined interface, making it possible to find the best combination.
Data
Currently we have 2 publicly available corpora for QBSH:
- Roger Jang's MIR-QBSH corpus, comprising 4431 queries along with 48 ground-truth MIDI files. All queries are sung/hummed from the beginning of the reference songs. A manually labeled pitch transcription is available for each recording.
- IOACAS corpus, comprising 759 queries and 298 monophonic ground-truth MIDI files (MIDI format 0 or 1). There is no guarantee that queries start from the beginning.
The noise MIDI files will be the 5000+ Essen collection (available from http://www.esac-data.org/).
To build a large test set that reflects real-world queries, every participant is encouraged to contribute to the evaluation corpus. Since this is sometimes hard in practice, we shall adopt a "no hidden dataset" policy if there are not enough user-contributed corpora.
Evaluation Corpus Contribution
Every participant will be asked to contribute 100-200 WAV queries (8 kHz, 16-bit) as well as the ground-truth MIDI files as test data. Please make your contributed data conform to the format used in the ThinkIT corpus (TITcorpus). These test data will be released after the competition as a public-domain QBSH dataset.
Here is a simple tool for recording query data. You may need .NET 2.0 or above installed on your system in order to run this program. The generated files conform to the format used in the ThinkIT corpus. Of course, you are also welcome to use your own program to record the query data.
If there are not enough user-contributed corpora, we shall adopt the "no hidden dataset" policy for the QBSH task as usual.
Submission Format
Breakdown Version
The following is based on Xiao Wu's suggestion from last year, with some modifications.
1. Database indexing/building. Command format should look like this:
indexing %dbMidi.list% %dir_workspace_root%
where %dbMidi.list% is the input list of database midi files named as uniq_key.mid. For example:
./QBSH/midiDatabase/00001.mid
./QBSH/midiDatabase/00002.mid
./QBSH/midiDatabase/00003.mid
./QBSH/midiDatabase/00004.mid
...
Output indexed files are placed into %dir_workspace_root%. (For task 2, %dbMidi.list% is in fact a list of wav files in the database.)
2. Pitch tracker. Command format:
pitch_tracker %queryWave.list% %dir_query_pitch%
where %queryWave.list% looks like
queryWave/query_00001.wav
queryWave/query_00002.wav
queryWave/query_00003.wav
...
For each input file dir_query/query_xxxxx.wav in %queryWave.list%, the tracker outputs a corresponding transcription %dir_query_pitch%/query_xxxxx.pitch, which gives the pitch sequence on the MIDI note scale at a resolution of 10 ms:
0
0
62.23
62.25
62.21
...
Thus a query of x seconds should yield a pitch file with 100*x lines. Frames of silence/rest are set to 0.
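The pitch values above are on the MIDI note scale, where 440 Hz corresponds to note 69 and each semitone is one unit. A minimal sketch of producing such a .pitch file under these conventions (the function names hz_to_midi and write_pitch_file are our own, not part of the MIREX tooling):

```python
import math

def hz_to_midi(f_hz):
    """Convert a frequency estimate in Hz to a (fractional) MIDI note
    number; 0 (silence/rest) stays 0, matching the .pitch convention."""
    if f_hz <= 0:
        return 0.0
    return 69.0 + 12.0 * math.log2(f_hz / 440.0)

def write_pitch_file(path, frames_hz):
    """Write one MIDI-scale pitch value per line, one line per 10 ms
    frame, as the pitch tracker output format above specifies."""
    with open(path, "w") as fh:
        for f in frames_hz:
            fh.write("%.2f\n" % hz_to_midi(f))

print(hz_to_midi(440.0))  # A4 is MIDI note 69
```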
3. Pitch matcher. Command format:
pitch_matcher %dbMidi.list% %queryPitch.list% %resultFile%
where %queryPitch.list% looks like
queryPitch/query_00001.pitch
queryPitch/query_00002.pitch
queryPitch/query_00003.pitch
...
and the result file gives the top-10 candidates (if available) for each query:
queryPitch/query_00001.pitch: 00025 01003 02200 ...
queryPitch/query_00002.pitch: 01547 02313 07653 ...
queryPitch/query_00003.pitch: 03142 00320 00973 ...
...
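An evaluator consuming the result file format above might parse it as follows. This is an illustrative sketch only; parse_result_file is our own name and the demo data is made up, not drawn from any actual run.

```python
import tempfile

def parse_result_file(path):
    """Parse a matcher result file (one 'query.pitch: id id id ...'
    entry per line) into a {query pitch path: [candidate ids]} map."""
    results = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line:
                continue
            query, _, candidates = line.partition(":")
            results[query.strip()] = candidates.split()
    return results

# Self-contained demo on a temporary file with two made-up result lines.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("queryPitch/query_00001.pitch: 00025 01003 02200\n")
    f.write("queryPitch/query_00002.pitch: 01547 02313 07653\n")
    demo_path = f.name

parsed = parse_result_file(demo_path)
print(parsed["queryPitch/query_00001.pitch"][0])  # top candidate: 00025
```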
Integrated Version
If you want to pack everything together, the command format should be much simpler:
qbshProgram %dbMidi.list% %queryWave.list% %resultFile% %dir_workspace_root%
You can use %dir_workspace_root% to store any temporary indexing/database structures. The result file should have the same format as mentioned previously. (For task 2, %dbMidi.list% is in fact a list of wav files in the database to be retrieved.)
Packaging submissions
All submissions should be statically linked to all libraries (the presence of dynamically linked libraries cannot be guaranteed).
All submissions should include a README file with the following information:
- Command line calling format for all executables and an example formatted set of commands
- Number of threads/cores used or whether this should be specified on the command line
- Expected memory footprint
- Expected runtime
- Any required environments (and versions), e.g. python, java, bash, matlab.
Time and hardware limits
Due to the potentially high number of participants in this and other audio tasks, hard limits on the runtime of submissions are specified.
A hard limit of 72 hours will be imposed on runs (total feature extraction and querying times). Submissions that exceed this runtime may not receive a result.
Potential Participants
Lei Wang / leiwang.mir@gmail.com
Yaqiong Liu / liuyaq@cn.ibm.com