2010:Query by Singing/Humming

From MIREX Wiki


The text of this section is copied from the 2009 page. Please add your comments and discussions for 2010.

The goal of the Query-by-Singing/Humming (QBSH) task is the evaluation of MIR systems that take as input queries sung or hummed by real-world users.

Subtask 1: Classic QBSH evaluation

This is the classic QBSH problem: given a user's sung or hummed query, find the corresponding ground-truth MIDI file.

  • Queries: human singing/humming snippets (.wav). Queries are from Roger Jang's corpus and ThinkIT corpus.
  • Database: ground-truth and noise MIDI files, all monophonic. Comprised of the 48 + 106 ground-truth MIDIs from Roger Jang's and ThinkIT's corpora, along with a cleaned version of the Essen database (2000+ MIDIs, as used last year).
  • Output: top-10 candidate list.
  • Evaluation: Top-10 hit rate (1 point is scored for a hit in the top 10 and 0 is scored otherwise).
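The Top-10 hit rate can be sketched in a few lines of Python. This is an illustrative sketch, not the official evaluation code; the data structures and IDs are hypothetical.

```python
def top10_hit_rate(results, ground_truth):
    """Mean Top-10 hit rate: a query scores 1 if its ground-truth ID
    appears among its (at most 10) returned candidates, else 0."""
    hits = sum(1 for query, candidates in results.items()
               if ground_truth[query] in candidates[:10])
    return hits / len(results)

# Hypothetical example: queries q1 and q3 hit, q2 misses.
results = {"q1": ["00025", "01003"], "q2": ["01547"], "q3": ["00320"]}
truth = {"q1": "01003", "q2": "99999", "q3": "00320"}
print(round(top10_hit_rate(results, truth), 2))  # → 0.67
```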

Subtask 2: Variants QBSH evaluation

This subtask is based on Prof. Downie's idea of using queries that are variants of the "ground-truth" MIDI. It has become more important because user-contributed singing/humming now forms a significant part of the song database to be searched, as evidenced by the QBSH search service at www.midomi.com.

  • Queries: human singing/humming snippets (.wav). Queries are from Roger Jang's corpus and ThinkIT corpus.
  • Database: human singing/humming snippets (.wav) from all available corpora (excluding the query input being searched).
  • Output: top-10 candidate list.
  • Evaluation: Top-10 hit rate (1 point is scored for a hit in the top 10 and 0 is scored otherwise).

To let algorithms share intermediate steps, participants are encouraged to follow Rainer Typke's suggestion and submit separate tracker and matcher modules instead of integrated ones. Trackers and matchers from different submissions can then work together through the same pre-defined interface, making it possible to find the best combination.
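A minimal sketch of this split, with hypothetical stand-in modules: the only contract between a tracker and a matcher is the intermediate pitch representation (one MIDI-scale value per 10 ms frame, 0 for silence), so any tracker can be paired with any matcher.

```python
def dummy_tracker(wav_path):
    """Stand-in pitch tracker: ignores the wav file and returns a
    fixed pitch sequence (MIDI note scale, 0 for unvoiced frames)
    at a 10 ms frame rate."""
    return [0.0, 60.0, 60.5, 62.1, 0.0]

def dummy_matcher(pitch_seq, database):
    """Stand-in matcher: ranks database entries (here just a median
    pitch per song, a toy model) by distance to the query's median
    voiced pitch, and returns up to 10 candidate keys."""
    voiced = sorted(p for p in pitch_seq if p > 0)
    median = voiced[len(voiced) // 2]
    ranked = sorted(database, key=lambda key: abs(database[key] - median))
    return ranked[:10]

# Any tracker/matcher pair composes through the pitch sequence alone.
db = {"00025": 61.0, "01003": 70.0, "02200": 55.0}
print(dummy_matcher(dummy_tracker("query_00001.wav"), db))
# → ['00025', '02200', '01003']
```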


Currently we have 3 publicly available corpora for QBSH:

  • Roger Jang's MIR-QBSH corpus, comprised of 4431 queries along with 48 ground-truth MIDI files. All queries are sung/hummed from the beginning of the reference songs. Manually labeled pitch is available for each recording.
  • IOACAS corpus 1, comprised of 355 queries and 106 monophonic ground-truth MIDI files (in MIDI format 0 or 1). There is no "singing from the beginning" guarantee.
  • IOACAS corpus 2, comprised of 404 queries and 192 monophonic ground-truth MIDI files. There is no "singing from the beginning" guarantee.

The noise MIDI files will come from the 5000+ song Essen collection (available at http://www.esac-data.org/).

To build a large test set that reflects real-world queries, every participant is encouraged to contribute to the evaluation corpus. Since this is sometimes hard in practice, we shall adopt a "no hidden dataset" policy if there are not enough user-contributed corpora.

Evaluation Corpus Contribution

Every participant will be asked to contribute 100~200 wave queries (8 kHz, 16-bit) together with the ground-truth MIDI files as test data. Please make your contributed data conform to the format used in the ThinkIT corpus (TITcorpus). These test data will be released after the competition as a public-domain QBSH dataset.
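Before contributing, it may be worth sanity-checking that recordings match the requested format. The sketch below uses Python's standard-library wave module; the mono-channel assumption and file name are illustrative, since the task text only specifies 8 kHz and 16-bit.

```python
import wave

def check_query_format(path):
    """Return True if the wav file matches the requested query format:
    8 kHz sample rate and 16-bit samples (mono assumed here)."""
    with wave.open(path, "rb") as w:
        return (w.getframerate() == 8000
                and w.getsampwidth() == 2   # 2 bytes = 16 bits
                and w.getnchannels() == 1)
```

For example, `check_query_format("query_00001.wav")` could be run over a whole contribution directory before packaging it.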

Here is a simple tool for recording query data. You may need .NET 2.0 or above installed on your system to run this program. The generated files conform to the format used in the ThinkIT corpus. Of course, you are also welcome to use your own program to record the query data.

If there are not enough user-contributed corpora, then we shall adopt the "no hidden dataset" policy for the QBSH task as usual.

Submission Format

Breakdown Version

The following was based on the suggestion by Xiao Wu last year with some modifications.

1. Database indexing/building. Command format should look like this:

indexing %dbMidi.list% %dir_workspace_root%

where %dbMidi.list% is the input list of database MIDI files, each named as uniq_key.mid. For example:


The output index files are placed in %dir_workspace_root%. (For task 2, %dbMidi.list% is in fact a list of the wav files in the database.)

2. Pitch tracker. Command format:

pitch_tracker %queryWave.list% %dir_query_pitch%

where %queryWave.list% looks like


For each input file dir_query/query_xxxxx.wav listed in %queryWave.list%, the tracker outputs a corresponding transcription %dir_query_pitch%/query_xxxxx.pitch, which gives the pitch sequence on the MIDI note scale at a resolution of 10 ms:


Thus a query of x seconds should output a pitch file with 100*x lines. Frames of silence/rest are set to 0.
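A sketch of a writer for this .pitch format. The task only fixes one value per line at 10 ms resolution, with 0 for silence; the two-decimal number formatting below is an assumption, not part of the specification.

```python
def write_pitch_file(pitches, out_path):
    """Write one pitch value (MIDI note scale) per line at 10 ms
    resolution; unvoiced frames are written as 0.  A query of x
    seconds therefore yields 100*x lines."""
    with open(out_path, "w") as f:
        for p in pitches:
            # Number formatting is an assumption; 0 marks silence/rest.
            f.write(f"{p:.2f}\n" if p > 0 else "0\n")

# Hypothetical 30 ms query fragment: silence, then two voiced frames.
write_pitch_file([0, 60.0, 60.5], "query_00001.pitch")
```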

3. Pitch matcher. Command format:

pitch_matcher %dbMidi.list% %queryPitch.list% %resultFile%

where %queryPitch.list% looks like


and the result file gives the top-10 candidates (if available) for each query:

queryPitch/query_00001.pitch: 00025 01003 02200 ... 
queryPitch/query_00002.pitch: 01547 02313 07653 ... 
queryPitch/query_00003.pitch: 03142 00320 00973 ... 

Integrated Version

If you want to pack everything together, the command format is much simpler:

qbshProgram %dbMidi.list% %queryWave.list% %resultFile% %dir_workspace_root%

You can use %dir_workspace_root% to store any temporary index/database structures. The result file should have the same format as described above. (For task 2, %dbMidi.list% is in fact a list of the wav files in the database to be retrieved.)

Packaging submissions

All submissions should be statically linked to all libraries (the presence of dynamically linked libraries cannot be guaranteed).

All submissions should include a README file with the following information:

  • Command line calling format for all executables and an example formatted set of commands
  • Number of threads/cores used or whether this should be specified on the command line
  • Expected memory footprint
  • Expected runtime
  • Any required environments (and versions), e.g. python, java, bash, matlab.

Time and hardware limits

Due to the potentially high number of participants in this and other audio tasks, hard limits on the runtime of submissions are specified.

A hard limit of 72 hours will be imposed on runs (total feature extraction and querying times). Submissions that exceed this runtime may not receive a result.

Submission opening date


Submission closing date