2005:Audio Key Finding

From MIREX Wiki

Latest revision as of 17:18, 9 May 2010

Proposer

Arpi Mardirossian, Ching-Hua Chuan and Elaine Chew (University of Southern California) mardiros@usc.edu


Title

Evaluation of Key Finding Algorithms


Description

Determination of the key is a prerequisite for any analysis of tonal music. As a result, extensive work has been done in the area of automatic key detection. However, among this plethora of key finding algorithms, what seems to be lacking is a formal and extensive evaluation process. We propose the evaluation of key-finding algorithms at the 2005 MIREX.

There are significant contributions in the area of key finding for both audio and symbolic representation. This evaluation process should consider algorithms in both areas. Algorithms that determine the key from audio should be robust enough to handle frequency interferences and harmonic effects caused by the use of multiple instruments.


Potential Participants

Audio Key-Finding:

  • Emilia Gómez (egomez@iua.upf.es) and Perfecto Herrera (perfecto.herrera@iua.upf.es): [high].
  • Steffen Pauws (steffen.pauws@philips.com): [high].
  • Ching-Hua Chuan (chinghuc@usc.edu) and Elaine Chew (echew@usc.edu): [high].
  • Ozgur Izmirli (oizm@conncoll.edu): [moderate].
  • Yongwei Zhu (ywzhu@i2r.a-start.edu.sg) and Mohan Kankanhalli (mohan@comp.nus.edu.sg): [unknown].
  • Hendrik Purwins (hendrik@cs.tu-berlin.de): [unknown].


Symbolic Key-Finding:

  • Olli Yli-Harja (yliharja@cs.tut.fi), Ilya Schmulevich (is@ieee.org), and Kjell Lemström (kjell.lemstrom@cs.helsinki.fi): [high].
  • Tuomas Eerola (ptee@cc.jyu.fi) and Petri Toiviainen (ptoiviai@cc.jyu.fi): [high].
  • Arpi Mardirossian (mardiros@usc.edu) and Elaine Chew (echew@usc.edu): [high].
  • Craig Sapp (craig@ccrma.stanford.edu): [moderate].
  • David Temperley (dtemp@theory.esm.rochester.edu): [unknown].


Evaluation Procedures

The following evaluation outline is a general guideline compatible with both audio and symbolic key-finding algorithms. It is safe to assume that each key-finding algorithm will have its own set of parameters; the creators of each system should pre-determine the optimal settings for those parameters. Once the settings are determined, an accuracy rate may be calculated. The input of the test will be an excerpt of each piece in the test set, and the output will be the key name, for example, C major or E flat minor. We plan to use pieces for which the keys are known, for example, symphonies and concertos by well-known composers where the key is stated in the title of the piece. The excerpt will typically be the beginning of the piece, as this is the only part for which the established global, known key can be guaranteed.

The error analysis will center on comparing the key identified by the algorithm to the actual key of the piece. We will then determine how 'close' each identified key is to the corresponding correct key. Keys will be considered 'close' if they have one of the following relationships: distance of a perfect fifth, relative major and minor, or parallel major and minor. An algorithm whose incorrect answers are nonetheless closely related to the actual key may be considered to perform better than one whose errors are unrelated. We may then use this information to generate further metrics.
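The 'close key' test above can be sketched as a small predicate. The key encoding used here (tonic pitch class 0-11 plus a mode string) is an assumption for illustration, not part of the proposal:

```python
# A sketch of the 'close key' test described above. Keys are encoded here
# as (tonic pitch class 0-11, mode) pairs; this encoding is an assumption
# for illustration, not part of the proposal.

def is_close(detected, actual):
    """True if detected equals actual or is a 'close' key: a perfect fifth
    away, the relative major/minor, or the parallel major/minor."""
    (dt, dm), (at, am) = detected, actual
    if detected == actual:
        return True
    if dm == am and (dt - at) % 12 in (5, 7):      # perfect fifth, either way
        return True
    rel_tonic = (at + 9) % 12 if am == "major" else (at + 3) % 12
    if dm != am and dt == rel_tonic:               # relative major/minor
        return True
    if dt == at and dm != am:                      # parallel major/minor
        return True
    return False

print(is_close((7, "major"), (0, "major")))  # G major vs C major -> True
print(is_close((9, "minor"), (0, "major")))  # A minor vs C major -> True
print(is_close((2, "major"), (0, "major")))  # D major vs C major -> False
```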

Clearly, the optimal parameters may vary by style of music and by composer. If time permits and the systems allow, we may next focus on pieces for which an algorithm identified an incorrect key under the optimal parameter settings and determine whether the incorrect assignments were due to improper parameter selection. We may then calculate the percentage of pieces that had an incorrect assignment under the optimal settings but a correct assignment with other settings.


Relevant Test Collections

Audio Data: Audio data can be obtained from HNH Hong Kong International, Ltd. (http://www.naxos.com), if the agreement with the company is in effect for MIR testing. We have determined that excerpts as short as fifteen to thirty seconds may be sufficient for key finding using audio data. Copyright regulations permit copying up to 33% of an audio file, which is advantageous since fifteen- to thirty-second excerpts will be well within this limit.


MIDI Collections: MIDI data are a symbolic representation of music, providing a numeric representation of the pitch, onset/offset time, and velocity of every event in a musical piece. The Classical Archives website (http://www.classicalarchives.com) provides more than thirty thousand full-length classical music files by more than two thousand composers in MIDI format. All files are listed with full title and composer, and most state the key clearly. Music by different composers may be used to test the range of an algorithm, and multiple versions of a piece may be used to test an algorithm's robustness to various arrangements of instruments.
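As a minimal illustration of the numeric event representation just described, a MIDI note-on message is a status byte (event type plus channel) followed by pitch and velocity bytes. The decoder below is a sketch, not a full MIDI parser:

```python
# Minimal illustration of MIDI's numeric event representation: a note-on
# message is a status byte (event type + channel) followed by pitch and
# velocity bytes. This decoder is a sketch, not a full parser.

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def decode_note_on(msg):
    status, pitch, velocity = msg
    assert status & 0xF0 == 0x90, "not a note-on message"
    channel = status & 0x0F
    name = NOTE_NAMES[pitch % 12] + str(pitch // 12 - 1)  # MIDI 60 -> C4
    return channel, name, velocity

print(decode_note_on((0x90, 60, 100)))  # (0, 'C4', 100): middle C, channel 0
```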


Score-based Collections: Score-based data are also symbolic representations of music. In addition to numeric event information, they provide further pitch and time structure information, such as contextually correct note names and key and time signatures. MuseData (http://www.musedata.org), for example, provides access to such a score-based collection.


Review 1

The proposals contemplate two different evaluations for key estimation: one for MIDI and another for audio data. Perhaps these two proposals could be merged into a single one. At least part of the data could be shared between them by having a test collection that includes audio data and its MIDI representation, or a MIDI representation and the audio generated from it by a MIDI synthesizer. This way, we could evaluate and compare approaches dealing with both MIDI and audio.

[Arpi 02.08.05]: We agree with this and believe that the best approach would be to synthesize audio data from MIDI.


Regarding the key estimation contest from audio data, it seems that only classical music is considered. Would it be possible to generalize to some other styles? For instance, popular music whose key is known.

[Arpi 02.08.05]: Having test data from a variety of genres would be ideal. The advantage of classical music is that many pieces are labeled with the key name. We welcome suggestions on finding labeled music in other genres.


Regarding evaluation measures for audio data, it is said that "Keys will be considered as 'close' if they have one of the following relationships: distance of perfect fifth, relative major and minor, and parallel major and minor".

[Chinghua 02.10.05]: Those relationships can be considered close to the main key, but they are still not the main key. If the algorithm gives one of those answers, it should still earn some points. So I suggest that we give multiple levels of scores to the different answers. For example, the main key gets full points (say 5), the perfect fifth gets 75% or 80% of the full points (say 3), and so on.
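The graded scoring suggested here could be realized along the following lines. The key encoding (tonic pitch class 0-11 plus mode) and the normalized weights are illustrative assumptions, not values agreed upon for the contest:

```python
# Sketch of the graded scoring suggested above. The key encoding and the
# weights are illustrative placeholders, not agreed contest values.

WEIGHTS = {"exact": 1.0, "fifth": 0.75, "relative": 0.5, "parallel": 0.5, "other": 0.0}

def relation(detected, actual):
    """Classify the detected key's relationship to the actual key."""
    (dt, dm), (at, am) = detected, actual
    if detected == actual:
        return "exact"
    if dm == am and (dt - at) % 12 in (5, 7):          # perfect fifth apart
        return "fifth"
    rel_tonic = (at + 9) % 12 if am == "major" else (at + 3) % 12
    if dm != am and dt == rel_tonic:                   # relative major/minor
        return "relative"
    if dt == at and dm != am:                          # parallel major/minor
        return "parallel"
    return "other"

def score(detected, actual):
    return WEIGHTS[relation(detected, actual)]

# An algorithm answering G major for a C major piece earns partial credit:
print(score((7, "major"), (0, "major")))  # 0.75
```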


What about tuning errors? In the case of audio, different tuning systems may be used. The detection algorithm should be able to estimate where the key is "tuned" (A = 440 or 442 Hz, ...). Keys should also be considered 'close' if they are one semitone apart, to account for the difference between the real key (according to its tuning) and the labelled key (e.g. A major). In the case of MIDI, this problem does not appear.

[Chinghua 02.10.05]: Since we will use a MIDI synthesizer to generate the audio, tuning won't be a serious problem. The detection algorithm should have the ability to regard both 440 and 442 Hz as the pitch A. If the original piece is written in A major but the MIDI arrangement is shifted a half step down to A-flat major, then the algorithm (for both the MIDI and the audio part) should detect A-flat major instead of A major.
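The 440-vs-442 Hz point can be illustrated with a small sketch: quantizing a frequency to the nearest equal-tempered pitch class absorbs tuning deviations of this size. This is an illustration only, not a proposed detection algorithm:

```python
import math

# Illustration of the tuning point above: map a frequency to the nearest
# equal-tempered pitch class relative to an A4 reference. 442 Hz is only
# about 8 cents above 440 Hz, so both land on pitch class A. A sketch,
# not a proposed detection algorithm.

def pitch_class(freq_hz, a4_hz=440.0):
    semitones = 12 * math.log2(freq_hz / a4_hz)  # signed distance from A4
    return (round(semitones) + 9) % 12           # A = pitch class 9

print(pitch_class(440.0))  # 9 (A)
print(pitch_class(442.0))  # 9 (A): still rounds to A under a 440 reference
```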


Will there be some training data, so that participants can try their algorithms?

[Arpi 02.08.05]: Great idea!

[Chinghua 02.10.05]: Some data will be provided for participants to verify their algorithms, but perhaps just a few pieces. Since different systems may need different amounts of data for training, participants will need to find a good training data set for their own systems. Participants can use the provided data to train their systems, but the quantity and quality of that data are not guaranteed to suit their training purposes.


I cannot tell whether the suggested participants are willing to participate. Another potential candidate could be Hendrik Purwins.

[Arpi 02.08.05]: Good addition. We have added him to the list of possible participants.


Review 2

General comments:

Title: Evaluation of Key Finding Algorithms Using Audio Data, or Evaluation of Key Finding Algorithms, Part 1

Description paragraph: Par 2, Line 2 - sentence requires correction

[Arpi 02.08.05]: Thank you. This has been corrected.

The problem is well defined and the mentioned possible participants seem likely to participate.


Regarding the evaluation procedures, the length of the input excerpt would have to be determined (15 to 30 seconds - are there any studies on the ideal length?)

[Arpi 02.08.05]: We would like to receive further input regarding this. We are open to using the entire piece or an excerpt (e.g. 15 or 30 seconds).


Assumption of closeness:

  • Perfect 5th: Is this generally accepted as a closely related key?

[Arpi 02.08.05]: Yes it is. Please refer to http://www-rcf.usc.edu/~echew/papers/CiM2003 for further details.

[EC 02.08.05]: Keys a perfect fifth apart share all but one pitch (with the differing pitches being only one half step apart). The above paper describes three models for tonality (by Krumhansl, Lerdahl and Chew) with similar relative distances between keys which are consistent with that mentioned in our proposal.

  • Parallel major or minor: Not too certain if this needs to be clarified (Ignore this comment if this is generally understood by the majority working in this field)
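EC's observation about fifth-related keys can be verified directly with pitch-class sets (a quick illustrative check, not contest code):

```python
# Checking the statement above: major keys a perfect fifth apart share all
# but one pitch class (here, F in C major vs F# in G major, a half step apart).

MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]  # major-scale intervals in semitones

def major_scale(tonic):
    return {(tonic + s) % 12 for s in MAJOR_STEPS}

c_major = major_scale(0)  # C major
g_major = major_scale(7)  # G major, a perfect fifth above
print(len(c_major & g_major))      # 6 of 7 pitch classes shared
print(sorted(c_major - g_major))   # [5] -> F
print(sorted(g_major - c_major))   # [6] -> F#
```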


Based on the error analysis approach outlined, would the algorithm that performs best with the new parameter settings be considered superior?

[Arpi 02.08.05]: Key finding and its evaluation is a complex matter. This is a good question to which there is no straightforward answer. We would like to explore the definition of algorithm superiority further. Input from participants would be valuable.


The test data are relevant. Are there any alternative data sets if the Naxos collection does not become available?

[Arpi 02.08.05]: The Naxos collection only contains audio data. We propose using MIDI data and audio synthesized from MIDI. Please refer to comments made in Review 1.


Downie's Comments

1. Am intrigued and heartened by the fact that both an audio and a symbolic version of the task have been proposed.

2. The modality question does arise and like Review #2, I would like to understand better the gradations of "failure" (i.e., the Perfect 5th issue), etc.

3. I would very much like to see a direct tie-in between symbolic and audio data (i.e., a one-to-one match of score with audio), if possible.

4. Wonder if we could frame this for evaluation purposes as a more traditional IR task? For example: find all pieces in key X... find all pieces in a minor mode... and the kicker: find all pieces transposed from their original keys!

[Arpi 02.08.05]: This is a great idea. This approach will certainly give us new metrics. We can further explore this if time permits.


Emmanuel's Comments

I was the one to decide that the original proposal on key finding should be split into two proposals on audio key finding and symbolic key finding. Indeed the audio and symbolic parts involve completely separate data and separate participants. From the committee point of view, this needs as much annotation and testing work as two independent proposals. I did not ask the authors about it, so it's not their fault.

I am strongly in favor of merging the two proposals into a single one again. But then the symbolic and audio data need to correspond to the same titles as much as possible, so that performances can be compared. Can the RWC database or another database be used for this? Also, the participants should submit algorithms for both tasks if possible. I suppose it won't be too hard for audio key-finding algorithms to also work on symbolic data, since audio data may easily be synthesized from symbolic data using a conventional MIDI synthesizer.
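As one illustration of how audio could be synthesized from symbolic data, a command-line synthesizer such as FluidSynth can render a Standard MIDI File to WAV. The soundfont and file names below are placeholders, and the tool choice is a suggestion, not part of the proposal:

```shell
# Render a MIDI file to 44.1 kHz WAV with FluidSynth (assumes FluidSynth
# and a General MIDI soundfont are installed; file names are placeholders).
fluidsynth -ni soundfont.sf2 piece.mid -F piece.wav -r 44100
```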


Arpi's Comments

As Emmanuel stated, we submitted a single proposal for audio and symbolic key-finding. We have now re-combined the two proposals. Please refer to Emmanuel's comments for further details.