2008:Real-time Audio to Score Alignment (a.k.a. Score Following) Results


Introduction

These are the results for the 2008 running of the Real-time Audio to Score Alignment (a.k.a. Score Following) task. For background information about this task set, please refer to the 2008:Real-time Audio to Score Alignment (a.k.a. Score Following) page.

General Legend

Team ID

MO1 = N. Montecchio & Orio 1
MO2 = N. Montecchio & Orio 2
RM1 = R. Macrae
RM2 = R. Macrae

Summary Results

Metric                         MO1       MO2       RM1       RM2
Piecewise Precision (MO GT)    84.45%    68.84%    17.10%    19.50%
Piecewise Precision (RM GT)    48.55%    41.67%    25.34%    26.19%
Ave. Piecewise Precision       66.50%    55.26%    21.22%    22.85%

download these results as csv
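
The "Ave. Piecewise Precision" row is the unweighted mean of the two ground-truth rows above it. A minimal Python sketch (the dictionary simply restates the figures from the table above) reproduces the averaging, up to rounding of the last digit:

    # Piecewise precision of each submission against the two ground truths,
    # copied from the summary table above (values in percent).
    precision = {
        "MO1": {"MO GT": 84.45, "RM GT": 48.55},
        "MO2": {"MO GT": 68.84, "RM GT": 41.67},
        "RM1": {"MO GT": 17.10, "RM GT": 25.34},
        "RM2": {"MO GT": 19.50, "RM GT": 26.19},
    }

    # The averaged score is the unweighted mean over the two ground truths.
    for system, scores in precision.items():
        average = sum(scores.values()) / len(scores)
        print(f"{system}: {average:.2f}%")

The same averaging applies to the table computed with R. Macrae's evaluation script below.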

Individual Results

MO = N. Montecchio & Orio
RM = R. Macrae

Summary Results w.r.t. R. Macrae's Evaluation Script

Metric                         MO1       MO2       RM1       RM2
Piecewise Precision (MO GT)    81.59%    65.63%    24.27%    17.03%
Piecewise Precision (RM GT)    25.93%    25.36%    44.77%    28.80%
Ave. Piecewise Precision       53.76%    45.50%    34.52%    22.92%

download these results as csv

Individual Results w.r.t. R. Macrae's Evaluation Script

MO = N. Montecchio & Orio
RM = R. Macrae


Each system is evaluated against two versions of the ground truth, obtained by parsing the score files with each team's own MIDI parser (MO GT and RM GT).
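
The exact metric definition is given on the task page rather than here, but as a rough, non-authoritative sketch: piecewise precision can be read as the fraction of score events whose reported alignment time falls within a fixed tolerance of the ground-truth onset time. The tolerance value and the event-matching rule below are assumptions for illustration, not the official evaluation code:

    def piecewise_precision(reported, ground_truth, tolerance=0.25):
        """Fraction of ground-truth events aligned within `tolerance` seconds.

        `reported` and `ground_truth` map a score event id to an onset time in
        seconds. The 0.25 s tolerance and the id-based matching are illustrative
        assumptions, not taken from the official MIREX evaluation scripts.
        """
        evaluated = 0
        within_tolerance = 0
        for event_id, true_onset in ground_truth.items():
            if event_id not in reported:
                continue  # events the follower never reported are skipped here
            evaluated += 1
            if abs(reported[event_id] - true_onset) <= tolerance:
                within_tolerance += 1
        return within_tolerance / evaluated if evaluated else 0.0

Because the two teams' MIDI parsers can disagree on event timing, the same system can score differently against MO GT and RM GT, which is why both rows and their average are reported above.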


Issues with ground-truth