2010:Symbolic Music Similarity and Retrieval

Revision as of 15:53, 26 May 2010


Task suggestion: Symbolic Melodic Similarity

Description

Given a query, each system should return the 10 most melodically similar songs from a given collection.
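How melodic similarity is computed is left to the participants. Purely as an illustration of one common family of approaches, and not a prescribed or reference method, the following sketch ranks a collection by edit distance between pitch-interval sequences; the function names, the interval representation, and the use of edit distance are all assumptions made for this example.

# Illustrative sketch only: one simple way to rank melodies by similarity.
# The interval representation and edit-distance scoring are assumptions,
# not part of the task definition.

def pitch_intervals(pitches):
    """Convert a monophonic pitch sequence into successive intervals."""
    return [b - a for a, b in zip(pitches, pitches[1:])]

def edit_distance(a, b):
    """Plain Levenshtein distance between two interval sequences."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (x != y)))  # substitution
        prev = cur
    return prev[-1]

def top_matches(query_pitches, collection, k=10):
    """Return the k collection entries most similar to the query.
    `collection` maps file names to monophonic pitch sequences."""
    q = pitch_intervals(query_pitches)
    scored = sorted(collection.items(),
                    key=lambda kv: edit_distance(q, pitch_intervals(kv[1])))
    return [name for name, _ in scored[:k]]

Working on intervals rather than absolute pitches makes the comparison transposition-invariant, which matters for hummed or whistled queries.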

Data

Three different datasets are used for the three subtasks.

  • RISM (monophonic; 10,000)
  • Karaoke (polyphonic; 1,000)
  • Mixed (polyphonic; 15,741).

All in MIDI format.

Evaluation

Human Evaluation

The primary evaluation will involve subjective judgments by human evaluators of the retrieved sets using IMIRSEL's Evalutron 6000 system.

  • Evaluator question: Given a search based on track A, the following set of results was returned by all systems. Please place each returned track into one of three classes (not similar, somewhat similar, very similar) and provide an indication, on a continuous scale of 0-10, of how similar the track is to the query. (A sketch of how such judgments might be summarized is given after this list.)
  • ~6 queries, 10 results per query, 1 set of eyes, ~10 participating labs
  • A higher number of queries is preferred, as IR research indicates that most of the variance lies in the queries
  • It will be possible for researchers to use this data for other types of system comparisons after MIREX 2007 results have been finalized.
  • Human evaluation to be designed and led by IMIRSEL following a similar format to that used at MIREX 2006
  • Human evaluators will be drawn from the participating labs (and any volunteers from IMIRSEL or on the MIREX lists)
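As referenced above, the following is a minimal sketch of how such judgments might be summarized per system, assuming each judgment is stored as a (system, query, candidate, coarse class, fine score) record; this record layout and the reported statistics are assumptions for illustration, since the official aggregation is carried out by IMIRSEL.

# Illustrative only: summarize Evalutron-style judgments per system.
# The record format and the reported statistics are assumptions; the
# official aggregation is performed by IMIRSEL.
from collections import defaultdict

def summarize(judgments):
    """judgments: iterable of (system, query, candidate, coarse, fine)
    where coarse is 'NS', 'SS' or 'VS' and fine is a float in [0, 10]."""
    fine_scores = defaultdict(list)
    coarse_counts = defaultdict(lambda: defaultdict(int))
    for system, _query, _candidate, coarse, fine in judgments:
        fine_scores[system].append(fine)
        coarse_counts[system][coarse] += 1
    summary = {}
    for system, scores in fine_scores.items():
        counts = coarse_counts[system]
        n = len(scores)
        summary[system] = {
            "mean_fine": sum(scores) / n,           # average 0-10 score
            "frac_very_similar": counts["VS"] / n,  # share judged very similar
            "frac_not_similar": counts["NS"] / n,   # share judged not similar
        }
    return summary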


Submission Format

Inputs/Outputs

Input parameters:

  • the name of a directory containing the MIDI files of the collection
  • the name of one MIDI file containing a monophonic query

The program will be called 6 times. Three of the queries will be quantized (produced from symbolic notation) and three will be produced by humming or whistling, and will thus contain slight rhythmic and pitch deviations.

Output: a list of the names of the 10 most similar MIDI files, ordered by melodic similarity (most similar first). Write each file name on a separate line, without empty lines in between.
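A minimal command-line skeleton that follows this calling convention might look like the sketch below; the argument order and the placeholder similarity function are assumptions for illustration, and a real submission would replace the placeholder with its own melodic matcher.

#!/usr/bin/env python
# Illustrative submission skeleton only.  The calling convention (collection
# directory, then query file) follows the task description above; the
# similarity function is a placeholder that a real submission must replace.
import os
import sys

def similarity(query_path, candidate_path):
    """Placeholder melodic similarity: higher means more similar.
    A real submission would parse both MIDI files and compare melodies."""
    return 0.0

def main():
    collection_dir, query_path = sys.argv[1], sys.argv[2]
    candidates = [os.path.join(collection_dir, f)
                  for f in sorted(os.listdir(collection_dir))
                  if f.lower().endswith((".mid", ".midi"))]
    ranked = sorted(candidates,
                    key=lambda c: similarity(query_path, c),
                    reverse=True)
    # One file name per line, no blank lines, most similar first.
    for path in ranked[:10]:
        print(os.path.basename(path))

if __name__ == "__main__":
    main()

Such a script would be invoked once per query (for example, with the collection directory and the query file as its two arguments) and would print ten file names, one per line; whether the names go to standard output or to a results file would follow the submission instructions.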



Building the ground truth

Unlike last year, it is now nearly impossible to manually build a proper ground truth in advance.

Because of that, the results of the submitted algorithms will be pooled for every query after submission, and every participant will be asked to judge the relevance of the matches for some of the queries. To keep that burden manageable, it is important that the algorithms return not only the names of the matching MIDI files for task 2, but also where the matching segment starts and ends within each matching file. Those matching segments can then be extracted automatically into small new MIDI files whose relevance can be checked quickly.
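The exact way the segment boundaries are reported (ticks, seconds, or note indices) is not fixed above. Assuming start and end times in seconds, the extraction step could look roughly like the following sketch, which uses the pretty_midi library; both the time unit and the library choice are assumptions for illustration.

# Illustrative sketch: cut the reported matching segment out of a MIDI file
# so it can be auditioned quickly.  Assumes the segment is given as start/end
# times in seconds and uses the pretty_midi library; both are assumptions,
# not requirements of the task.
import pretty_midi

def extract_segment(src_path, dst_path, start, end):
    src = pretty_midi.PrettyMIDI(src_path)
    out = pretty_midi.PrettyMIDI()
    for inst in src.instruments:
        new_inst = pretty_midi.Instrument(program=inst.program,
                                          is_drum=inst.is_drum,
                                          name=inst.name)
        for note in inst.notes:
            # Keep notes that overlap the matching region, shifted to time 0.
            if note.end > start and note.start < end:
                new_inst.notes.append(pretty_midi.Note(
                    velocity=note.velocity,
                    pitch=note.pitch,
                    start=max(note.start, start) - start,
                    end=min(note.end, end) - start))
        if new_inst.notes:
            out.instruments.append(new_inst)
    out.write(dst_path)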

Measures

Use the same measures as [last year] to compare the search results of the various algorithms.