2008:Query by Tapping

From MIREX Wiki
Revision as of 13:58, 13 August 2008

Overview

The main purpose of the QBT (Query by Tapping) task is to evaluate MIR systems that retrieve ground-truth MIDI files from tapped rhythm queries. Prof. J.-S. Roger Jang has recorded 272 query rhythm files (15-second rhythm excerpts in mono WAV format) for retrieval against a collection of 6892 MIDI files. The evaluation database and query files will be available for download soon.

Task description

  • Test database: 6892 ground-truth mono MIDI files.
  • Queries: 272 query files used to retrieve 105 known targets drawn from the 6892-file collection. So far, 1 to 6 human assessors have listened to each target song and tapped its rhythm for 15 seconds from the beginning.
  • Evaluation: Mean Reciprocal Rank (MRR). Each system returns the top 20 candidates for each query file.
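As a minimal sketch of the evaluation described above, the following computes Mean Reciprocal Rank over top-20 candidate lists. The query IDs, candidate lists, and ground-truth mapping here are hypothetical, not actual task data.

```python
def mean_reciprocal_rank(results, ground_truth, top_n=20):
    """results: {query_id: ranked list of candidate song IDs}
    ground_truth: {query_id: correct target song ID}
    Returns the mean of 1/rank over all queries; a target
    missing from the top_n candidates contributes 0."""
    total = 0.0
    for qid, candidates in results.items():
        target = ground_truth[qid]
        try:
            rank = candidates[:top_n].index(target) + 1  # 1-based rank
            total += 1.0 / rank
        except ValueError:
            pass  # target not in the top_n list
    return total / len(results)

# Hypothetical example: targets found at ranks 1 and 4.
results = {"Q001": ["0003", "0567"],
           "Q002": ["0103", "0567", "0998", "0777"]}
truth = {"Q001": "0003", "Q002": "0777"}
print(mean_reciprocal_rank(results, truth))  # (1/1 + 1/4) / 2 = 0.625
```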

Data processing proposal for calling formats

Indexing the MIDIs collection

Indexing_exe <var1> <var2>

where

<var1>==<directory_of_MIDIs> 
<var2>==<indexing_files_output_and_working_directory>

Running for the query files

Running_exe <var3> <var4> <var5>

where

<var3>==<directory_of_indexed_file> 
<var4>==<directory_of_query_rhythm_files> 
<var5>==<answer_file_output_directory> 

Data processing for output answer file formats

The answer file for each run would look like:

Q001:0003,0567,0999,<insert X more responses>,XXXX
Q002:0103,0567,0998,<insert X more responses>,XXXX
Q00X:0002,0567,0999,<insert X more responses>,XXXX

Each line corresponds to one query in the given task run.
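A minimal sketch of reading this answer-file format (query ID, a colon, then comma-separated candidate IDs); the sample lines and IDs are hypothetical.

```python
def parse_answer_file(lines):
    """Map each query ID to its ranked candidate list."""
    answers = {}
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines
        qid, _, rest = line.partition(":")
        answers[qid] = rest.split(",")
    return answers

# Hypothetical answer lines (real runs return 20 candidates per query).
sample = ["Q001:0003,0567,0999", "Q002:0103,0567,0998"]
parsed = parse_answer_file(sample)
print(parsed["Q001"])  # ['0003', '0567', '0999']
```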

Submission closing date

22nd August 2008.

Interested Participants

  • Shu-Jen Show Hsiao (show.cs95g at nctu.edu.tw)
  • Rainer Typke: I would be interested if the query data can also be made available in symbolic form so we can see what part of the performance comes from good onset detection from audio, and what comes from a good matching algorithm.