2015:Audio Chord Estimation Results

From MIREX Wiki
Revision as of 13:45, 21 October 2015

Introduction

This page contains the results of the 2015 edition of the MIREX automatic chord estimation tasks. This edition is the third since the reorganization of the evaluation procedure in 2013, so the results can be compared directly to those of the previous two years. Chord labels are evaluated against five different chord vocabularies, and the segmentation is assessed as well. Additional information about the measures used can be found on the page of the 2013 edition.

What’s new?

  • A new data set, called "JayChou 2015", has been donated by Junqi Deng of the University of Hong Kong. It consists of 29 Mandopop songs taken from various albums by Jay Chou. Most of the songs are ballads, and special attention has been paid to the annotation of extended chords and inversions. Because Junqi was kind enough to provide this set before its official publication, the algorithmic output on these files and their ground truth have been withheld for the time being. The file names in the per-track results have also been anonymized.
  • The algorithmic output and per-track results on the Isophonics set now display the unmasked song names, so that an evaluation per artist/album can be performed.

Software

All software used for the evaluation has been made open source. The evaluation framework is described by Pauwels and Peeters (2013). The corresponding code repository can be found on GitHub, and the measures used here are available as presets. The raw algorithmic output provided below makes it possible to calculate the additional measures from the paper (separate results for tetrads, etc.) on top of those presented below. More help can be found in the readme.
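The headline measure in the tables below is weighted chord symbol recall (WCSR): the proportion of annotated time on which the estimated label matches the reference, weighted by segment duration. A minimal pure-Python sketch of that idea follows; the segment format and the label comparison are illustrative assumptions, not the evaluation framework's actual API:

```python
def wcsr(reference, estimate, same_label):
    """Weighted chord symbol recall over two lists of (start, end, label)
    segments: the duration-weighted fraction of the reference on which the
    estimated label is judged equal under same_label."""
    correct = 0.0
    total = 0.0
    for r_start, r_end, r_label in reference:
        total += r_end - r_start
        for e_start, e_end, e_label in estimate:
            overlap = min(r_end, e_end) - max(r_start, e_start)
            if overlap > 0 and same_label(r_label, e_label):
                correct += overlap
    return 100.0 * correct / total  # reported as a percentage

def majmin_equal(a, b):
    # Toy stand-in for the major-minor vocabulary comparison; the real
    # framework maps labels into a vocabulary before comparing.
    return a == b

# Toy 4-second track where the estimate changes chord 0.5 s too early.
ref = [(0.0, 2.0, "C:maj"), (2.0, 4.0, "A:min")]
est = [(0.0, 1.5, "C:maj"), (1.5, 4.0, "A:min")]
print(wcsr(ref, est, majmin_equal))  # 87.5
```

The nested loop is quadratic in the number of segments; the real framework merges the two segmentations in a single pass, but the computed value is the same.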

The statistical comparison between the different submissions is explained in Burgoyne et al. (2014). The software is available on Bitbucket and uses the detailed results provided below as input.
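The comparison is based on ranking the algorithms on every song and testing whether their mean ranks differ (a Friedman-type analysis). A pure-Python sketch of the ranking step, with made-up per-song scores (not taken from the tables on this page) and ties ignored for simplicity:

```python
def mean_ranks(scores_by_alg):
    """scores_by_alg: dict mapping algorithm name -> list of per-song scores
    (same song order for every algorithm). Returns the mean rank of each
    algorithm, where rank 1 is the best score on a song. Ties would need
    average ranks in a real analysis; this sketch ignores them."""
    algs = list(scores_by_alg)
    n_songs = len(next(iter(scores_by_alg.values())))
    totals = {a: 0.0 for a in algs}
    for i in range(n_songs):
        ordered = sorted(algs, key=lambda a: scores_by_alg[a][i], reverse=True)
        for rank, alg in enumerate(ordered, start=1):
            totals[alg] += rank
    return {a: totals[a] / n_songs for a in algs}

scores = {  # hypothetical per-song MajMin scores for three submissions
    "CM3": [54.1, 60.2, 48.7, 71.9],
    "DK5": [67.5, 69.8, 55.0, 80.3],
    "KO1": [82.0, 75.4, 61.2, 85.7],
}
print(mean_ranks(scores))  # KO1 ranks first on every song here
```

The Friedman test statistic and the follow-up pairwise comparisons are then computed from these ranks; see the Bitbucket repository for the actual analysis.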

Submissions

Submission Abstract Contributors
CM3 (Chordino) PDF Chris Cannam, Matthias Mauch
DK4-DK9 PDF Junqi Deng, Yu-Kwong Kwok
KO1 (shineChords) PDF Maksim Khadkevich, Maurizio Omologo

Results

Summary

All figures can be interpreted as percentages, ranging from 0 (worst) to 100 (best). Each table is sorted by weighted chord symbol recall (WCSR) for the major-minor vocabulary.
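The three rightmost columns score segmentation quality independently of the chord labels. A common formulation uses the directional Hamming divergence: for each segment in one partition, the duration not covered by its single best-overlapping segment in the other. The sketch below follows that formulation; the exact definition used here, including which direction is reported as under- versus over-segmentation, should be checked against the evaluation repository:

```python
def directional_hamming(seg_a, seg_b):
    """Total duration of seg_a's segments not covered by their single
    best-overlapping segment in seg_b (segments are (start, end) pairs)."""
    missed = 0.0
    for a_start, a_end in seg_a:
        best = max(min(a_end, b_end) - max(a_start, b_start)
                   for b_start, b_end in seg_b)
        missed += (a_end - a_start) - max(best, 0.0)
    return missed

def segmentation_scores(reference, estimate):
    """Under- and over-segmentation on a 0-100 scale (higher is better),
    assuming both partitions cover the same time span."""
    total = reference[-1][1] - reference[0][0]
    under = 100.0 * (1.0 - directional_hamming(reference, estimate) / total)
    over = 100.0 * (1.0 - directional_hamming(estimate, reference) / total)
    return under, over

# Toy 4-second track: one boundary is misplaced by a second.
reference = [(0.0, 2.0), (2.0, 4.0)]
estimate = [(0.0, 3.0), (3.0, 4.0)]
print(segmentation_scores(reference, estimate))  # (75.0, 75.0)
```

MeanSeg in the tables is then an average of the two directional scores.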

Isophonics 2009
Algorithm Root MajMin MajMinInv Sevenths SeventhsInv MeanSeg UnderSeg OverSeg
CM3 65.81 54.65 47.73 19.29 16.17 82.00 85.27 80.52
DK4 69.45 67.66 64.61 59.56 56.92 85.21 82.13 90.69
DK5 75.73 73.51 68.87 63.74 59.72 85.30 81.51 91.57
DK6 78.26 75.53 63.56 64.70 54.01 85.66 82.95 90.56
DK7 78.65 75.89 70.38 58.37 53.53 85.69 82.98 90.58
DK8 78.65 75.89 64.77 66.89 56.94 85.66 82.95 90.56
DK9 79.03 76.85 74.47 68.11 66.08 85.47 81.83 91.52
KO1 82.93 82.19 79.61 76.04 73.43 87.69 85.66 91.24

download these results as csv

Billboard 2012
Algorithm Root MajMin MajMinInv Sevenths SeventhsInv MeanSeg UnderSeg OverSeg
CM3 59.77 47.86 42.02 19.72 16.62 78.78 83.35 76.50
DK4 66.72 65.49 62.58 51.59 49.42 83.21 80.52 87.85
DK5 71.12 69.98 66.51 55.43 52.48 82.97 79.25 89.00
DK6 73.51 71.37 61.45 56.36 48.67 83.38 80.91 87.71
DK7 73.90 71.77 66.82 54.42 50.64 83.43 80.89 87.83
DK8 73.90 71.77 62.55 54.51 47.61 83.37 80.82 87.80
DK9 74.25 73.03 71.64 55.78 54.58 83.10 79.49 88.87
KO1 77.45 75.58 73.51 57.68 55.82 84.16 82.80 87.44

download these results as csv

Billboard 2013
Algorithm Root MajMin MajMinInv Sevenths SeventhsInv MeanSeg UnderSeg OverSeg
CM3 57.84 44.87 39.37 17.04 14.42 77.87 81.21 77.39
DK4 61.07 57.98 55.85 45.88 44.17 80.51 77.61 87.41
DK5 66.75 63.17 60.39 49.69 47.37 79.86 75.75 88.74
DK6 70.32 65.37 55.99 51.73 44.51 80.75 78.10 87.25
DK7 70.76 65.80 61.04 49.40 45.72 80.82 78.15 87.31
DK8 70.76 65.80 57.06 50.24 43.53 80.71 78.10 87.22
DK9 70.02 66.08 64.66 50.01 48.80 80.24 76.25 88.60
KO1 75.36 71.39 69.43 53.57 51.78 81.63 79.61 87.75

download these results as csv

JayChou 2015
Algorithm Root MajMin MajMinInv Sevenths SeventhsInv MeanSeg UnderSeg OverSeg
CM3 54.32 43.82 34.92 20.57 16.54 81.43 84.40 79.19
DK4 75.70 76.06 72.86 63.66 61.33 86.85 83.31 91.28
DK5 76.79 77.03 73.10 64.43 61.39 87.04 83.24 91.78
DK6 72.71 72.44 64.66 57.65 51.33 86.80 83.65 90.83
DK7 73.38 73.15 66.78 53.59 48.59 86.82 83.67 90.84
DK8 73.38 73.15 65.43 55.20 48.96 86.83 83.70 90.83
DK9 75.78 75.23 70.29 61.48 57.55 86.80 82.96 91.64
KO1 78.73 77.69 66.87 54.16 44.55 88.46 87.12 90.11

download these results as csv

Comparative Statistics

An analysis of the statistical difference between all submissions can be found on the following pages:

Detailed Results

More details about the performance of the algorithms, including per-song performance and supplementary statistics, are available from these archives:

Algorithmic Output

The raw output of the algorithms is available in the archives below. It can be used to experiment with alternative evaluation measures and statistics.
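Chord transcriptions in MIREX-style evaluations are commonly exchanged as plain-text files with one "start end label" line per segment (so-called .lab files). Assuming the archives follow that convention (to be verified against the actual files), a minimal parser might look like:

```python
def parse_lab(text):
    """Parse 'start end label' lines into (start, end, label) tuples.
    Assumes the common .lab convention: whitespace-separated fields,
    times in seconds, 'N' for no-chord; blank and '#' lines are skipped."""
    segments = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        start, end, label = line.split(maxsplit=2)
        segments.append((float(start), float(end), label))
    return segments

example = """0.000 2.612 C:maj
2.612 5.518 A:min7
5.518 7.790 N
"""
print(parse_lab(example))
```

The parsed segment lists can then be fed directly into duration-weighted comparisons against the ground-truth annotations.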