2010:Symbolic Melodic Similarity Results
- 1 Introduction
- 2 General Legend
- 3 Summary Results
- 4 Raw Scores
These are the results for the 2010 running of the Symbolic Melodic Similarity task set. For background information about this task set, please refer to the 2010:Symbolic Melodic Similarity page.
Each system was given a query and returned the 10 most melodically similar songs drawn from the Essen Collection (5,274 pieces in MIDI format; see the ESAC Data Homepage for more information). For each query we created four classes of error-mutations, so the query set comprises the following classes (a sketch of how such mutations can be generated follows the list):
- 0. No errors
- 1. One note deleted
- 2. One note inserted
- 3. One interval enlarged
- 4. One interval compressed
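As an illustration only, here is a minimal MATLAB sketch of the four mutation classes, assuming a query is represented as a row vector of MIDI pitch numbers; the mutation position, inserted pitch, and other details below are our own assumptions, not the official mutation procedure.
 % Minimal sketch of the four error-mutation classes. A query is assumed
 % to be a row vector of MIDI pitches; position and inserted pitch are random.
 query = [60 62 64 65 67 69 71 72];     % hypothetical query melody
 i = randi([2 numel(query)]);           % note/interval to mutate
 deleted  = query;  deleted(i) = [];                      % 1. one note deleted
 inserted = [query(1:i-1) randi([48 84]) query(i:end)];   % 2. one note inserted
 d = sign(query(i) - query(i-1));       % direction of the chosen interval
 if d == 0, d = 1; end                  % treat a unison as ascending
 enlarged   = query;  enlarged(i:end)   = enlarged(i:end)   + d;  % 3. one interval enlarged
 compressed = query;  compressed(i:end) = compressed(i:end) - d;  % 4. one interval compressed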
For each query (and its four mutations), the returned results (candidates) from all systems were grouped together into a single query set for evaluation by the human graders. Graders heard only the perfect version of each query, against which they evaluated the candidates, and did not know whether a candidate came from a perfect or a mutated query. Each query/candidate set was evaluated by one grader. Using the Evalutron 6000 system, graders gave each query/candidate pair two types of scores: one categorical score with three categories (NS, SS, and VS, as explained below) and one fine score in the range from 0 to 100.
Evalutron 6000 Summary Data
Number of evaluators = 6
Number of evaluations per query/candidate pair = 1
Number of queries per grader = 1 base query (the perfect version plus its 4 mutations)
Total number of candidates returned = 3900 (13 systems × 30 queries × 10 candidates each)
Total number of unique query/candidate pairs graded = 895
Average number of query/candidate pairs evaluated per grader = 149
Number of queries = 6 perfect queries, each error-mutated in 4 different ways, for 30 queries in total
|Sub code||Submission name||Contributors|
|HFRA1||SMS simbals||Pierre Hanna, Pascal Ferraro, Matthias Robine, Julien Allali|
|JU1||SMS-Domain||Julián Urbano, Juan Lloréns, Jorge Morato, Sonia Sánchez-Cuadrado|
|JU2||SMS-PitchDeriv||Julián Urbano, Juan Lloréns, Jorge Morato, Sonia Sánchez-Cuadrado|
|JU3||SMS-ParamDeriv||Julián Urbano, Juan Lloréns, Jorge Morato, Sonia Sánchez-Cuadrado|
|JU4||SMS-Shape||Julián Urbano, Juan Lloréns, Jorge Morato, Sonia Sánchez-Cuadrado|
|LL1||CbrahmsS2||Mika Laitinen, Kjell Lemström|
|LL2||CbrahmsW2||Mika Laitinen, Kjell Lemström|
|RI1||UAC||David Rizo, José Manuel Iñesta|
|RI2||UAT||David Rizo, José Manuel Iñesta|
|RI3||UAT3||David Rizo, José Manuel Iñesta|
|RI4||UAPR||David Rizo, José Manuel Iñesta|
|SU1||NGR5||Iman Suyoto, Alexandra Uitdenbogerd|
|SU2||PIOI||Iman Suyoto, Alexandra Uitdenbogerd|
NS = Not Similar
SS = Somewhat Similar
VS = Very Similar
ADR = Average Dynamic Recall
NRGB = Normalized Recall at Group Boundaries
AP = Average Precision (non-interpolated)
PND = Precision at N Documents
Calculating Summary Measures
Fine(1) = Sum of fine-grained human similarity decisions (0-100).
PSum(1) = Sum of human broad similarity decisions: NS=0, SS=1, VS=2.
WCsum(1) = 'World Cup' scoring: NS=0, SS=1, VS=3 (rewards Very Similar).
SDsum(1) = 'Stephen Downie' scoring: NS=0, SS=1, VS=4 (strongly rewards Very Similar).
Greater0(1) = NS=0, SS=1, VS=1 (binary relevance judgment).
Greater1(1) = NS=0, SS=0, VS=1 (binary relevance judgment using only Very Similar).
(1) Normalized to the range 0 to 1.
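For concreteness, here is a minimal MATLAB sketch of these summary measures for the candidates one system returned, assuming broad holds the categorical scores coded NS=0, SS=1, VS=2 and fine holds the 0-100 fine scores; the variable names and example grades are hypothetical, not part of the official evaluation code.
 % Hypothetical grades for the 10 candidates one system returned.
 broad = [2 1 0 2 1 1 0 2 0 1];           % NS=0, SS=1, VS=2
 fine  = [85 55 10 90 60 40 5 75 15 50];  % fine scores, 0-100
 n = numel(broad);
 Fine     = sum(fine) / (100 * n);        % normalized to the range 0 to 1
 PSum     = sum(broad) / (2 * n);         % NS=0, SS=1, VS=2
 WCsum    = sum(min(broad, 1) + 2 * (broad == 2)) / (3 * n);  % NS=0, SS=1, VS=3
 SDsum    = sum(min(broad, 1) + 3 * (broad == 2)) / (4 * n);  % NS=0, SS=1, VS=4
 Greater0 = sum(broad >= 1) / n;          % SS and VS both count as relevant
 Greater1 = sum(broad == 2) / n;          % only VS counts as relevant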
Overall Scores (Includes Perfect and Error Candidates)
Scores by Query Error Types
Friedman Test with Multiple Comparisons Results (p=0.05)
The Friedman test was run in MATLAB against the Fine summary data over the 30 queries.
Command: [c,m,h,gnames] = multcompare(stats, 'ctype', 'tukey-kramer', 'estimate', 'friedman', 'alpha', 0.05);
Results file: 2010/sms/sms_fine_scores_friedman.csv
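For context, a minimal sketch of the MATLAB pipeline that produces the stats input used by multcompare above; the scores matrix here is a random placeholder for the real per-query Fine scores (30 queries × 13 systems).
 % scores: 30 queries (rows) by 13 systems (columns) of Fine summary scores.
 scores = rand(30, 13);                  % placeholder for the real data
 [p, tbl, stats] = friedman(scores, 1);  % Friedman test, 1 observation per cell
 [c, m, h, gnames] = multcompare(stats, 'ctype', 'tukey-kramer', ...
     'estimate', 'friedman', 'alpha', 0.05);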
The raw data derived from the Evalutron 6000 human evaluations are located on the 2010:Symbolic Melodic Similarity Raw Data page.