2009:Query-by-Singing/Humming Results
Introduction
These are the results for the 2009 running of the Query-by-Singing/Humming task. For background information about this task set, please refer to the Query by Singing/Humming page.
Task Descriptions
Task 1 (see Task 1 Results below): The first subtask is the same as last year. Submitted systems take a sung query as input and return a list of songs from the test database. Mean reciprocal rank (MRR) of the ground truth, as well as a simple hit (1) / miss (0) count, is calculated over the top 10 returns (a sketch of this computation follows the dataset list below). Two data sets are used:
- Jang's dataset (see Task 1, Jang's dataset Results below): Roger Jang's MIR-QBSH corpus with 48 songs as ground truth + 2000 Essen Collection MIDI noise files. See the ESAC Data Homepage for more information about the Essen Collection. The query set consists of 4431 sung/hummed queries, all of which start from the beginning of the reference songs.
- ThinkIT's dataset (see Task 1, ThinkIT's dataset Results below): the IOACAS corpus data set with 106 songs as ground truth + 2000 Essen Collection MIDI noise files. See the ESAC Data Homepage for more information about the Essen Collection. The query set consists of 355 sung/hummed queries; there is no guarantee that queries start from the beginning of the song.
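The MRR and hit/miss figures referenced above can be computed as in the following minimal sketch; it assumes hypothetical results and ground_truth mappings and is an illustration, not the official MIREX evaluation code.

```python
# Illustrative sketch: mean reciprocal rank (MRR) and top-10 hit/miss from
# ranked result lists. `results` maps a query ID to its ranked list of
# returned song IDs; `ground_truth` maps a query ID to the correct song ID.
# Both names are hypothetical.

def evaluate_task1(results, ground_truth, cutoff=10):
    reciprocal_ranks = []
    hits = 0
    for query_id, returned_songs in results.items():
        target = ground_truth[query_id]
        top = returned_songs[:cutoff]
        if target in top:
            rank = top.index(target) + 1          # 1-based rank of the ground truth
            reciprocal_ranks.append(1.0 / rank)   # contributes 1/rank to the MRR
            hits += 1                             # hit (1)
        else:
            reciprocal_ranks.append(0.0)          # ground truth outside the top 10
    mrr = sum(reciprocal_ranks) / len(reciprocal_ranks)
    hit_rate = hits / len(results)
    return mrr, hit_rate


# Example: one query ranks the correct song 2nd, the other misses the top 10.
results = {"q1": ["s07", "s03", "s12"], "q2": ["s99", "s42", "s11"]}
ground_truth = {"q1": "s03", "q2": "s05"}
print(evaluate_task1(results, ground_truth))      # (0.25, 0.5)
```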
Task 2 (see Task 2 Results below): The second subtask is query against other humming. Roger Jang's MIR-QBSH corpus has been divided into two groups (2040 recordings as queries and 2391 as the database). Each query is run against the other-humming database and the top 10 closest matches are returned. The score is a simple count of how many of the returned items belong to the same ground-truth song as the query.
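Below is a minimal sketch of this Task 2 scoring rule, assuming a hypothetical song_of lookup that maps each recording to its ground-truth song; it is an illustration, not the official MIREX evaluation code.

```python
# Illustrative sketch of the Task 2 score: for one query, count how many of
# the top-10 returned recordings belong to the same ground-truth song as the
# query. `song_of` (recording ID -> song ID) is a hypothetical lookup.

def task2_score(query_recording, returned_recordings, song_of, cutoff=10):
    target_song = song_of[query_recording]
    return sum(1 for rec in returned_recordings[:cutoff]
               if song_of[rec] == target_song)

# Example: 3 of the 4 returned recordings come from the query's song.
song_of = {"q": "songA", "r1": "songA", "r2": "songB", "r3": "songA", "r4": "songA"}
print(task2_score("q", ["r1", "r2", "r3", "r4"], song_of))  # 3
```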
General Legend
Team ID
CSJ1 = Chun-Ta Chen and Jyh-Shing Roger Jang, matching from the beginning of the song
CSJ2 = Chun-Ta Chen and Jyh-Shing Roger Jang, matching anywhere in the song
HAFR = Pierre Hanna, Julien Allali, Pascal Ferraro and Matthias Robine
Task 1 Results
Task 1, Jang's dataset Results
Task 1 Overall Results
The summary results file (qbsh_task1_summary.csv) was not found.
Task 1 Friedman's Test for Significant Differences
The Friedman test was run in MATLAB against the QBSH Task 1 MRR data over the 48 ground-truth song groups. Command: [c,m,h,gnames] = multcompare(stats, 'ctype', 'tukey-kramer', 'estimate', 'friedman', 'alpha', 0.05);
The detailed results file (qbsh.task1.friedman_detailed.csv) was not found.
File:Qbsh.task1.friedman.small.png
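The MATLAB command above runs a Tukey-Kramer multiple comparison on the Friedman test results. As a rough, non-authoritative analogue, the Friedman test itself can be reproduced in Python with SciPy, assuming an MRR matrix with one row per ground-truth song group and one column per system (the matrix below is random stand-in data, not MIREX results):

```python
# Minimal sketch of a Friedman test over per-song-group MRR scores.
# The official analysis used MATLAB's friedman/multcompare (Tukey-Kramer);
# SciPy provides the Friedman test itself.
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(0)
# Hypothetical stand-in data: 48 song groups x 3 systems (e.g. CSJ1, CSJ2, HAFR).
mrr = rng.uniform(0.5, 1.0, size=(48, 3))

statistic, p_value = friedmanchisquare(*(mrr[:, j] for j in range(mrr.shape[1])))
print(f"Friedman chi-square = {statistic:.3f}, p = {p_value:.4f}")
# A pairwise post-hoc comparison (Tukey-Kramer, as in the MATLAB command above)
# would then identify which systems differ significantly.
```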
Task 1 Summary Results by Query Group
The per-query-group results file (qbsh.task1.res.byQueryGroup.csv) was not found.
Task 1, ThinkIT's dataset Results
Task 2 Results
In this subtask, the same setup as the first subtask is used, with combinations of different transcribers and matchers. The test database consists of 106 ground-truth MIDIs + 2000 Essen Collection MIDI noise files. The query database consists of 355 sung queries.
Task 2 Overall Results
The summary results file (qbsh.task2.summary.csv) was not found.
Task 2 Friedman's Test for Significant Differences
The Friedman test was run in MATLAB against the QBSH Task 2 MRR data over the ground-truth song groups. Command: [c,m,h,gnames] = multcompare(stats, 'ctype', 'tukey-kramer', 'estimate', 'friedman', 'alpha', 0.05);
The detailed results file (qbsh.task2.friedman_detailed.csv) was not found.
File:Qbsh.task2.friedman.s.png
Task 2 Summary Results by Query Group
The per-query-group results file (qbsh_task2_res_byQueryGroup.csv) was not found.
Runtime Results
The runtime results file (qbsh.runtime.csv) was not found.
General Legend
Team ID
CSJ = Chun-Ta Chen and Jyh-Shing Roger Jang
HAFR1 = Pierre Hanna, Julien Allali, Pascal Ferraro and Matthias Robine
HAFR2 = Pierre Hanna, Julien Allali, Pascal Ferraro and Matthias Robine
HL = Shu-Jen Show and Hsiao Tyne Liang