2005:Audio Tempo Extraction
Training Data Set
Download (Windows): https://www.music-ir.org/evaluation/MIREX/data/tempo_contest_data/train.zip.1
Martin McKinney's README comments: https://www.music-ir.org/evaluation/MIREX/data/tempo_contest_data/README.txt
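For convenience, here is a minimal Python sketch for fetching and unpacking the training data using the URLs above (the local file and directory names are arbitrary choices, not part of the distribution):

    # Minimal sketch: fetch and unpack the MIREX tempo training data.
    # URLs are taken from this page; local names are arbitrary.
    import urllib.request
    import zipfile

    TRAIN_URL = "https://www.music-ir.org/evaluation/MIREX/data/tempo_contest_data/train.zip.1"
    README_URL = "https://www.music-ir.org/evaluation/MIREX/data/tempo_contest_data/README.txt"

    urllib.request.urlretrieve(TRAIN_URL, "train.zip")
    urllib.request.urlretrieve(README_URL, "README.txt")

    with zipfile.ZipFile("train.zip") as zf:
        zf.extractall("tempo_train_data")  # audio excerpts plus annotations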
Proposer
Martin F. McKinney (Philips) mckinney@alum.mit.edu
Dirk Moelants (IPEM, Ghent University) dirk@moelants.net
Title
Automatic tempo extraction
Description
This contest will compare current methods for the extraction of tempo from musical audio. We distinguish between notated tempo and perceptual tempo and will test for the extraction of perceptual tempo. We will also test for tempo following if there is enough interest.
We differentiate between notated tempo and perceived tempo. If you have the notated tempo (e.g., from the score), it is straightforward to attach a tempo annotation to an excerpt and run a contest for algorithms to predict the notated tempo. For excerpts for which we have no "official" tempo annotation, we can also annotate the *perceived* tempo. This is not a straightforward task and needs to be done carefully. If you ask a group of listeners (including skilled musicians) to annotate the tempo of music excerpts, they can give you different answers (they tap at different metrical levels) if they are unfamiliar with the piece. For some excerpts the perceived pulse or tempo is less ambiguous and everyone taps at the same metrical level, but for other excerpts the tempo can be quite ambiguous and you get a complete split across listeners.
The annotation of perceptual tempo can take several forms: a probability density function as a function of tempo; a series of tempi, ranked by their respective perceptual salience; etc. These measures of perceptual tempo can be used as a ground truth on which to test algorithms for tempo extraction. The dominant perceived tempo is sometimes the same as the notated tempo, but not always. A piece of music can "feel" faster or slower than its notated tempo, in that the dominant perceived pulse can be a metrical level higher or lower than the notated tempo.
There are several reasons to examine the perceptual tempo, either in place of or in addition to the notated tempo. For many applications of automatic tempo extractors, the perceived tempo of the music is more relevant than the notated tempo. An automatic playlist generator or music navigator, for instance, might allow listeners to select or filter music by its (automatically extracted) tempo. In this case, the "feel", or perceptual tempo, may be more relevant than the notated tempo. An automatic DJ apparatus might also perform better with a representation of perceived tempo rather than notated tempo.
A more pragmatic reason for using perceptual tempo rather than notated tempo as a ground truth for our contest is that we simply do not have the notated tempo of our test set. If we notate it by having a panel of expert listeners tap along and label the excerpts, we are by default dealing with the perceived tempo. The handling of this data as ground truth must be done with care.
Participants
- Miguel Alonso (ENST), miguel.alonso@enst.fr
- George Tzanetakis (University of Victoria), gtzan@cs.uvic.ca
- Matthew Davies and Paul Brossier (Queen Mary, University of London), matthew.davies@elec.qmul.ac.uk, paul.brossier@elec.qmul.ac.uk
- Bill Sethares (University of Wisconsin-Madison), sethares@ece.wisc.edu
- Fabien Gouyon (Universitat Pompeu Fabra) and Simon Dixon (OFAI), fgouyon@iua.upf.es, simon@oefai.at
- Christian Uhle (Fraunhofer Institut), uhle@idmt.fhg.de
- Geoffroy Peeters (IRCAM), peeters@ircam.fr
- Douglas Eck (University of Montreal), eckdoug@iro.umontreal.ca
Other Potential Participants
- Anssi Klapuri (Tampere University of Technology), klap@cs.tut.fi
- Werner van Belle (werner.van.belle@itek.norut.no)
Evaluation Procedures
This section focuses on the mechanics of the method while we discuss the data (music excerpts and perceptual data) in the next section. There are two general steps to the method: 1) collection of perceptual tempo annotations; and 2) evaluation of tempo extraction algorithms.
1) Perceptual tempo data collection
The following procedure is described in more detail in McKinney and Moelants (2004) and Moelants and McKinney (2004). Listeners will be asked to tap to the beat of a series of musical excerpts. Responses will be collected and their perceived tempo will be calculated. For each excerpt, a distribution of perceived tempo will be generated. A relatively simple form of perceived tempo is proposed for this contest: the two highest peaks in the perceived-tempo distribution for each excerpt will be taken, along with their respective heights (normalized to sum to 1.0), as the two tempo candidates for that particular excerpt. The height of a peak in the distribution is assumed to represent the perceptual salience of that tempo. In addition to tempo, the phase and tapping times of listeners will also be recorded to enable evaluation of the phase-locking ability of tempo-extraction algorithms.
References:
- McKinney, M.F. and Moelants, D. (2004), Deviations from the resonance theory of tempo induction, Conference on Interdisciplinary Musicology, Graz. URL: http://gewi.kfunigraz.ac.at/~cim04/CIM04_paper_pdf/McKinney_Moelants_CIM04_proceedings_t.pdf
- Moelants, D. and McKinney, M.F. (2004), Tempo perception and musical content: What makes a piece slow, fast, or temporally ambiguous? International Conference on Music Perception & Cognition, Evanston, IL. URL: http://www.northwestern.edu/icmpc/proceedings/ICMPC8/PDF/AUTHOR/MP040237.PDF
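To make the peak-picking step concrete, here is a rough Python sketch of how two tempo candidates and their normalized saliences could be derived from listeners' tapped tempi. The histogram bin width and the simple two-bin peak picking are illustrative assumptions only; the papers above describe the actual procedure (which would, for example, smooth the distribution and enforce separation between peaks):

    import numpy as np

    def tempo_candidates(tapped_bpm, bin_width=2.0):
        """Illustrative only: derive (T1, T2, ST1) from tapped tempi in BPM,
        with T1 <= T2 and ST1 + ST2 = 1.0."""
        taps = np.asarray(tapped_bpm, dtype=float)
        edges = np.arange(taps.min() - bin_width, taps.max() + 2 * bin_width, bin_width)
        counts, edges = np.histogram(taps, bins=edges)
        centers = 0.5 * (edges[:-1] + edges[1:])

        # Take the two highest peaks of the tapped-tempo distribution.
        order = np.argsort(counts)[::-1]
        t_a, s_a = centers[order[0]], counts[order[0]]
        t_b, s_b = centers[order[1]], counts[order[1]]

        # Normalize the two peak heights so the saliences sum to 1.0.
        total = float(s_a + s_b)
        if t_a <= t_b:
            return t_a, t_b, s_a / total
        return t_b, t_a, s_b / total

For instance, if half the listeners tap near 60 BPM and the other half near 120 BPM, this sketch returns both tempi with saliences near 0.5 each, reflecting an ambiguous excerpt.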
2) Evaluation of tempo extraction algorithms
Algorithms will process musical excerpts and return the following data: two tempi (T1 and T2, in BPM, where T1 is the slower of the two), the relative salience/strength of T1 (ST1, normalized so that ST1 + ST2 = 1.0), and the phases of T1 and T2 (P1 and P2, in seconds from the beginning of the audio file to the first beat or an integer multiple of the beat).
- Task TT1: Ability to identify T1 to within 8%
- Task TT2: Ability to identify T2 to within 8%
- Task TT1I: Ability to identify an acceptable (see below) integer multiple/fraction of T1 to within 8% (given if Task TT1 is correct)
- Task TT2I: Ability to identify an acceptable (see below) integer multiple/fraction of T2 to within 8% (given if Task TT2 is correct)
- Task TST1: Ability to identify the relative strength of T1
- Task TP1: Ability to correctly identify phase of T1 to within 15% of the T1 beat period (N/A if Task TT1 is incorrect)
- Task TP2: Ability to correctly identify phase of T2 to within 15% of the T2 beat period (N/A if Task TT2 is incorrect)
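Under our reading of these tolerances (8% relative error for tempo, 15% of the beat period for phase, with the reported beat allowed to fall on any integer multiple of the beat), the individual correctness tests might look like the following Python sketch; the function names and the modulo treatment of phase are our own assumptions:

    def tempo_correct(t_est, t_gt, tol=0.08):
        """Tasks TT1/TT2: estimated tempo within 8% of the ground truth."""
        return abs(t_est - t_gt) <= tol * t_gt

    def tempo_multiple_correct(t_est, t_gt, factors, tol=0.08):
        """Tasks TT1I/TT2I: estimate matches an acceptable integer
        multiple/fraction of the ground truth, e.g. factors = [2, 0.5, 3, 1.0 / 3]."""
        return any(tempo_correct(t_est, f * t_gt, tol) for f in factors)

    def phase_correct(p_est, p_gt, t_gt, tol=0.15):
        """Tasks TP1/TP2: estimated phase within 15% of the beat period,
        allowing the reported beat to sit any whole number of beats later."""
        period = 60.0 / t_gt            # beat period in seconds
        err = (p_est - p_gt) % period   # wrap the phase error onto one period
        return min(err, period - err) <= tol * period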
Each task (except for TST1) will receive a score of 1.0 for correct evaluation and 0.0 for incorrect evaluation. For a given algorithm, the performance, P, on each audio excerpt will be given by the following equation:
P = 0.25 * TT1 + 0.25 * TT2 + 0.10 * TT1I + 0.10 * TT2I + 0.20 * (1.0 - |ST1 - ST1GT|/max(ST1,ST1GT)) + 0.05 * TP1 + 0.05 * TP2
where ST1GT is the ground-truth salience of T1. Tasks TT1I and TT2I will be assumed correct if the corresponding tempo identification tasks (TT1 and TT2, respectively) are performed correctly. Acceptable integers for Tasks TT1I and TT2I will be based upon examination of the meter of individual excerpts and of the distributions of their tapped tempi (e.g., 3 and 1/3 for ternary meters). Tasks TST1 and TP1 will be assumed incorrect if Task TT1 is performed incorrectly, and Task TP2 will be assumed incorrect if Task TT2 is performed incorrectly. If the ground-truth T1 is reported as the algorithm's T2, it will be accepted as correct; if the ground-truth T2 is reported as the algorithm's T1, it will also be accepted as correct, and ST2 (calculated as ST2 = 1.0 - ST1) will be taken as the new ST1.
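Putting the pieces together, here is a sketch of the per-excerpt P-score under the rules above. It reuses the hypothetical helper functions from the previous sketch, and the handling of swapped T1/T2 slots is our simplified interpretation of the preceding paragraph:

    def p_score(est, gt, factors, tol_t=0.08, tol_p=0.15):
        """est and gt are dicts with keys "T1", "T2", "ST1", "P1", "P2";
        factors lists the acceptable integer multiples/fractions for the excerpt."""
        t1, t2, st1 = est["T1"], est["T2"], est["ST1"]
        p1, p2 = est["P1"], est["P2"]

        # Simplified swap rule: if the estimate matches the ground truth better
        # when the slots are swapped, evaluate the swapped assignment and take
        # ST2 = 1.0 - ST1 as the salience of the slower tempo.
        if not tempo_correct(t1, gt["T1"], tol_t) and tempo_correct(t2, gt["T1"], tol_t):
            t1, t2, p1, p2, st1 = t2, t1, p2, p1, 1.0 - st1

        tt1 = tempo_correct(t1, gt["T1"], tol_t)
        tt2 = tempo_correct(t2, gt["T2"], tol_t)
        # TT1I/TT2I are taken as correct when the direct identification succeeds.
        tt1i = tt1 or tempo_multiple_correct(t1, gt["T1"], factors, tol_t)
        tt2i = tt2 or tempo_multiple_correct(t2, gt["T2"], factors, tol_t)
        tp1 = tt1 and phase_correct(p1, gt["P1"], gt["T1"], tol_p)
        tp2 = tt2 and phase_correct(p2, gt["P2"], gt["T2"], tol_p)
        # TST1 is assumed incorrect (zero credit) if TT1 fails.
        sal = (1.0 - abs(st1 - gt["ST1"]) / max(st1, gt["ST1"])) if tt1 else 0.0

        return (0.25 * tt1 + 0.25 * tt2 + 0.10 * tt1i + 0.10 * tt2i
                + 0.20 * sal + 0.05 * tp1 + 0.05 * tp2)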
The algorithm with the best average P-score will win the contest, and we can also analyze the scores on individual tasks. We will provide some measures of statistical significance for the results, most likely through bootstrapping the test data.
Relevant Test Collections
We will use a collection of 160 musical excerpts for the evaluation procedure. 40 of the excerpts have been taken from one of our previous experiments (see the McKinney/Moelants ICMPC paper above). We are currently running tapping experiments to evaluate 120 new excerpts, which were taken from our local collections.
Excerpts were selected to provide:
- stable tempo within each excerpt
- a good distribution of tempi across excerpts
- a large variety of instrumentation and beat strengths (with and without percussion)
- a variation of musical styles, including many non-western styles
- the presence of non-binary meters (about 20% have a ternary element and there are a few examples with odd or changing meter).
We will provide 20 excerpts with ground truth data for participants to try/tune their algorithms before submission. The remaining 140 excerpts will be novel to all participants.
Concerning copyright issues: I'm not sure if there will be any issues here if all music is simply collected in one place and then the contest algorithms are run there. In addition, I've heard that it is legal to use/distribute short excerpts of recorded audio without violating copyright. Can anyone confirm/deny or provide more info on copyright issues for short excerpts?
Hello, I believe the following link will clarify the copyright issues concerning music databases for research purposes. There won't be any problems for short pieces if they are used for research purposes without any profits being derived from the usage: http://www.fxpal.com/people/foote/musicr/copyright.html
Balaji Thoshkahna, Learning Systems and Multimedia Labs, EE, Indian Institute of Science (IISc)
Review 1
I think that your proposal is clearly written and definitely appropriate for ISMIR. I agree with your justification for the analysis of perceptual tempo, especially for applications related to human interaction. However, in order to build upon last year's contest I would support the inclusion of 'phase locking' and 'tempo following' as areas to investigate under this proposal, in addition to some further consideration of the evaluation procedures.
I think your list of participants is realistic - equally I believe many of these people have published work on beat tracking as well as tempo analysis, which suggests there should be support for an expanded proposal.
In terms of the data to be analyzed, I agree that longer excerpts are necessary, especially if the proposal is to be expanded to incorporate tempo following and phase information. I wonder if it might be interesting to classify input signals not only by genre (as you suggest), but also by the presence or absence of percussion. This would be another way to demonstrate the generality of the entered algorithms, but might also provide some further insight into those signals for which the perceptual tempo is most open to subjective interpretation - put simply: is there more agreement (computationally and in annotations) when drums are present? I would also like to see some consideration given to examples that aren't in 4/4 time, as well as those which are heavily syncopated (if not already present in the proposed databases).
I have a couple of concerns regarding the evaluation criteria, particularly related to the second most salient level:
- Should it be mandatory for participants to look for more than one appropriate tempo for a given input signal?
- I can see from your description of the data collection that extracting two levels is not too hard; however, is it generally intuitive whether this secondary level is faster or slower than the primary level? I wonder if it might be more valuable to find something more explicit, like the tatum (fastest metrical level) or time signature. I'm not sure if these would count as perceptual tempi if no one actually chooses to tap that quickly or slowly.
- In cases (if any) where there is complete agreement on the perceptual level, how would the second most salient level be defined?
I think you're right to suggest a tempo-dependent threshold, but I'm interested as to where this value of 3% comes from. Might it be a little too strict? Was this the value suggested for last year's contest?
Given that your annotated data for perceptual tempo is derived from subjects 'tapping' along to music, it seems worthwhile expanding the scope of this proposal to include phase information and tempo following, perhaps optionally making this a tempo and beat tracking contest. Again, I'm aware of the potential problems in deriving a globally acceptable strategy for the evaluation of beat locations (current examples include Goto-97's longest continuous correctly tracked segment, or Scheirer-98's RMS deviation between algorithm and annotated beats), but I think this is a factor which should be addressed.
Review 2
The problem is a relevant MIR task which is clearly defined. The proposed participants seem likely to participate indeed.
I do not have much to say, since this proposal is already very solid.
I appreciate the fact that the potential participants already own a large amount of annotated data, so that the work to annotate new data will be limited. However, it seems that a large number of listeners is needed for annotation, because several perceptual tempi are taken into account for evaluation. Would it be possible to propose evaluation measures that are relevant whatever the number of annotators (probably fewer than five annotators will be available for new annotations)? Or to evaluate performance differently on each file depending on the number of annotators?
Downie's Comments
1. Interesting task with more complexity than meets the eye.
2. What kinds of variations will this include in tempi and meters? Will 7/8 and 5/4 and so on be included? Does it matter? (I am not a tempo expert, but I am genuinely curious.)
3. How many examples will be needed to make the evaluation meaningful?
4. What do the data files look like?
5. I am not really sure what this means: "50 30-second excerpts annotated by 40 subjects..." Could you clarify, please? This relates to question #4.