Revision as of 09:36, 22 April 2007
Introduction
For many applications in music information retrieval, extracting the harmonic structure is very desirable, for example for segmenting pieces into characteristic segments, for finding similar pieces, or for semantic analysis of music.
The extraction of the harmonic structure requires the detection of as many chords as possible in a piece. This includes characterising each chord by its tonic and type, as well as its temporal position, i.e. the onset and duration of the chord.
Although some publications are available on this topic [1,2,3,4,5], comparing their results is difficult because different measures are used to assess performance. To overcome this problem, an accurately defined methodology is needed. This includes a repertoire of detectable chords, a defined test set along with ground truth, and unambiguous calculation rules to measure performance.
We therefore suggest introducing the new evaluation task Audio Chord Detection.
Data
As this is intended for music information retrieval, the analysis should be performed on real-world audio, not resynthesized MIDI or special renditions of single chords. We suggest that the test bed consist of WAV files in CD quality (sampling rate of 44.1 kHz and a resolution of 16 bits). A representative test bed should comprise more than 50 songs from different genres such as pop, rock, and jazz.
For each song in the test bed, a ground truth is needed. This should comprise all detectable chords in the piece, with their tonic, type, and temporal position (onset and duration), in a machine-readable format that is still to be specified.
To define the ground truth, a set of detectable chords has to be identified. We propose to use the following set of chords, built upon each of the twelve semitones.

Triads: major, minor, diminished, augmented, suspended4
Quads: major maj7, major 7, major add9, major maj7/#5, minor maj7, minor 7, minor add9, minor 7/b5, maj7/sus4, 7/sus4
An approach for text annotation of musical chords is presented in [6].
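As a minimal sketch of what such an annotation might look like, the following represents a ground truth as a list of (onset, duration, label) entries, with labels written in a "tonic:type" style loosely modelled on the text-annotation proposal of [6]. The actual file format for the task is still to be specified, and all values below are hypothetical.

```python
# Hypothetical machine-readable ground truth: one entry per chord as
# (onset in seconds, duration in seconds, "tonic:type" label).
# Label syntax loosely follows the style of [6]; the real format is TBD.
ground_truth = [
    (0.00, 2.10, "C:maj"),
    (2.10, 1.95, "A:min7"),
    (4.05, 2.00, "F:maj7"),
    (6.05, 1.90, "G:7"),
]

def chords_at(annotations, t):
    """Return all chord labels sounding at time t (normally at most one)."""
    return [label for onset, dur, label in annotations
            if onset <= t < onset + dur]

# chords_at(ground_truth, 2.5) -> ["A:min7"]
```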
We could contribute excerpts of approximately 30 pop and rock songs including a ground truth.
Evaluation
Two common measures from the field of information retrieval are recall and precision. They can be used to evaluate a chord detection system.
Recall: number of time units where the chords have been correctly identified by the algorithm divided by the number of time units which contain detectable chords in the ground truth.
Precision: number of time units where the chords have been correctly identified by the algorithm divided by the total number of time units where the algorithm detected a chord event.
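The two definitions above can be sketched as frame-wise computations, assuming (hypothetically) that both ground truth and detector output are given as equal-length lists of per-time-unit chord labels, with None marking time units without a (detectable) chord:

```python
def recall_precision(ground_truth, detected):
    """Frame-wise recall and precision for chord detection.

    ground_truth, detected: equal-length lists of chord labels per
    time unit; None marks a time unit with no (detectable) chord.
    """
    # Time units where a detectable chord exists and was identified correctly.
    correct = sum(1 for g, d in zip(ground_truth, detected)
                  if g is not None and g == d)
    relevant = sum(1 for g in ground_truth if g is not None)  # units with detectable chords
    retrieved = sum(1 for d in detected if d is not None)     # units where a chord was output
    recall = correct / relevant if relevant else 0.0
    precision = correct / retrieved if retrieved else 0.0
    return recall, precision

# Hypothetical example over six time units:
gt  = ["C:maj", "C:maj", "A:min", "A:min", None,    "G:maj"]
det = ["C:maj", "C:maj", "A:min", "G:maj", "G:maj", "G:maj"]
r, p = recall_precision(gt, det)
# 4 of 5 relevant units correct -> recall = 0.8
# 4 of 6 retrieved units correct -> precision = 2/3
```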
Points to discuss:
- What temporal resolution should be used for ground truth and results?
- How should enharmonic and other confusions of chords be handled?
- What is the maximal acceptable onset deviation between ground truth and result?
- What file format should be used for ground truth and output?
Moderators
Katja Rosenbauer (Fraunhofer IDMT, Ilmenau, Germany) [1]
Christian Dittmar (Fraunhofer IDMT, Ilmenau, Germany) [2]
Potential Participants
--Bfields
Bibliography
1. Harte, C.A. and Sandler, M.B. (2005). Automatic chord identification using a quantised chromagram. Proceedings of the 118th Audio Engineering Society Convention.
2. Sailer, C. and Rosenbauer, K. (2006). A bottom-up approach to chord detection. Proceedings of the International Computer Music Conference 2006.
3. Shenoy, A. and Wang, Y. (2005). Key, chord, and rhythm tracking of popular music recordings. Computer Music Journal 29(3), 75-86.
4. Sheh, A. and Ellis, D.P.W. (2003). Chord segmentation and recognition using EM-trained hidden Markov models. Proceedings of the 4th International Conference on Music Information Retrieval.
5. Yoshioka, T. et al. (2004). Automatic chord transcription with concurrent recognition of chord symbols and boundaries. Proceedings of the 5th International Conference on Music Information Retrieval.
6. Harte, C., Sandler, M., Abdallah, S. and Gómez, E. (2005). Symbolic representation of musical chords: a proposed syntax for text annotations. Proceedings of the 6th International Conference on Music Information Retrieval.