2015:Audio Chord Estimation Results
This page contains the results of the 2015 edition of the MIREX automatic chord estimation tasks. This edition is the third since the reorganization of the evaluation procedure in 2013, so the results can be directly compared to those of the previous two years. Chord labels are evaluated against five different chord vocabularies and the segmentation is also assessed. Additional information about the measures used can be found on the page of the 2013 edition.
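As a hypothetical illustration of what evaluation against a restricted vocabulary means, the sketch below reduces Harte-style `root:quality/bass` chord labels to a major/minor vocabulary before comparing them. The reduction rules are simplified assumptions made for illustration only; the actual mappings are defined by the presets of the evaluation framework described below.

```python
# Minimal sketch: map extended chord labels onto a major/minor vocabulary
# before comparison. The rules below are simplified assumptions; the official
# mapping is defined by the evaluation framework's presets.

MAJOR_LIKE = {"maj", "maj7", "7", "maj6", "9", "maj9"}
MINOR_LIKE = {"min", "min7", "min6", "min9", "minmaj7"}

def to_majmin(label: str) -> str:
    """Reduce a 'root:quality(/bass)' label to 'root:maj', 'root:min' or 'N'."""
    if label in ("N", "X"):                 # no-chord / unknown stay as-is
        return label
    root, _, rest = label.partition(":")
    quality = rest.split("/")[0] if rest else "maj"   # drop inversion, default to maj
    if quality in MINOR_LIKE:
        return f"{root}:min"
    if quality in MAJOR_LIKE:
        return f"{root}:maj"
    return "X"                              # outside the vocabulary: never counted correct

# Example: an annotated C:maj7/3 and an estimated C:maj match in this vocabulary
print(to_majmin("C:maj7/3") == to_majmin("C:maj"))   # True
```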
- A new data set, called "JayChou 2015", has been donated by Junqi Deng of the University of Hong Kong. It consists of 29 Mandopop songs taken from various albums by Jay Chou. Most of the songs are ballads, and special attention has been paid to the annotation of extended chords and inversions. Because Junqi was kind enough to provide this set before official publication, the algorithmic output on these files and their ground truth have been withheld for the time being. Also, the file names in the per-track results have been anonymized.
- The algorithmic output and per-track results on the Isophonics set now display the unmasked song names, so that an evaluation per artist or album can be performed.
All software used for the evaluation has been made open source. The evaluation framework is described by Pauwels and Peeters (2013). The corresponding code repository can be found on GitHub, and the measures used in this task are available as presets. The raw algorithmic output provided below makes it possible to calculate the additional measures from the paper (separate results for tetrads, etc.), in addition to those presented here. More help can be found in the readme.
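As a rough illustration of what can be done with that raw output, the sketch below computes a duration-weighted label overlap between a ground-truth annotation and an algorithm's output for a single track. It assumes both files use the common "start end label" (.lab) layout and compares labels as plain strings; the file names are hypothetical, and the official measures additionally apply vocabulary mappings and enharmonic root comparison, so treat this only as a starting point.

```python
# Sketch: duration-weighted label agreement between a reference and an
# estimated annotation, both assumed to be in "start end label" (.lab) format.
# Labels are compared as plain strings here; the official measures first map
# them onto a chord vocabulary, so this is only an approximation.

def read_lab(path):
    segments = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 3:
                segments.append((float(parts[0]), float(parts[1]), parts[2]))
    return segments

def overlap_score(reference, estimate):
    """Percentage of annotated time where reference and estimate labels agree."""
    total = sum(end - start for start, end, _ in reference)
    correct = 0.0
    for r_start, r_end, r_label in reference:
        for e_start, e_end, e_label in estimate:
            inter = min(r_end, e_end) - max(r_start, e_start)
            if inter > 0 and r_label == e_label:
                correct += inter
    return 100.0 * correct / total if total > 0 else 0.0

# Hypothetical file names; the archives below contain the actual files.
ref = read_lab("ground_truth/song01.lab")
est = read_lab("algorithm_output/song01.lab")
print(f"overlap: {overlap_score(ref, est):.1f}")
```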
| Submission | Contributors |
| CM3 (Chordino) | Chris Cannam, Matthias Mauch |
| DK4-DK9 | Junqi Deng, Yu-Kwong Kwok |
| KO1 (shineChords) | Maksim Khadkevich, Maurizio Omologo |
All figures can be interpreted as percentages and range from 0 (worst) to 100 (best). The table is sorted by weighted chord symbol recall (WCSR) for the major/minor vocabulary.
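For reference, WCSR pools correctly labelled time over the whole data set, so longer songs carry proportionally more weight. A common way to write it is

$$\mathrm{WCSR} = 100 \cdot \frac{\sum_{i=1}^{N} t_i^{\mathrm{correct}}}{\sum_{i=1}^{N} t_i^{\mathrm{total}}}$$

where $t_i^{\mathrm{correct}}$ is the duration of song $i$ for which the estimated label matches the ground truth under the chosen vocabulary, $t_i^{\mathrm{total}}$ is the annotated duration of song $i$, and $N$ is the number of songs.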
Chris Cannam and Matthias Mauch have informed us that their intention was to resubmit last year's system. Unfortunately, a small change that wasn't supposed to affect the output did introduce a serious bug, which they only realised after seeing these results. They want the community to know that last year's results are more representative of the capabilities of their system.
An analysis of the statistical difference between all submissions can be found on the following pages:
More details about the performance of the algorithms, including per-song performance and supplementary statistics, are available from these archives:
The raw output of the algorithms is available in the archives below. It can be used to experiment with alternative evaluation measures and statistics.
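As one example of the kind of additional statistics these files support, the sketch below runs a paired per-song comparison between two submissions using a Wilcoxon signed-rank test. The CSV file names and column layout are assumptions made for illustration, not the actual format of the archives, and this is not the statistical procedure used for the official analysis pages.

```python
# Sketch: paired per-song comparison between two submissions, assuming the
# per-track scores have been extracted into two-column CSV files ("song,score").
# File names and layout are hypothetical.
import csv
from scipy.stats import wilcoxon

def load_scores(path):
    with open(path, newline="") as f:
        return {row["song"]: float(row["score"]) for row in csv.DictReader(f)}

a = load_scores("results_DK4_majmin.csv")
b = load_scores("results_KO1_majmin.csv")
songs = sorted(set(a) & set(b))          # only songs scored by both systems

stat, p = wilcoxon([a[s] for s in songs], [b[s] for s in songs])
print(f"{len(songs)} songs, Wilcoxon statistic={stat:.1f}, p={p:.3f}")
```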