2009:Structural Segmentation
Revision as of 09:53, 2 September 2009
Description
The segment structure (or form) is one of the most important musical parameters. It is furthermore special because musical structure -- especially in popular music genres -- is accessible to everybody: perceiving it requires no particular musical knowledge.
Data format
Input
Single channel, CD-quality audio (PCM, 16-bit, 44100 Hz).
Output
Three column text file of the format
<onset_time> <offset_time> <label>
<onset_time> <offset_time> <label>
...
This is the same as Chris Harte's chord labelling files (.lab), and so is the same format as the ground truth as well. Onset and offset times are given in seconds, and the labels are simply letters: 'A', 'B', ... with segments referring to the same structural element having the same label.
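A minimal sketch of reading and writing this three-column format in Python (the function names and file name are illustrative, not part of any MIREX tool):

```python
def write_lab(path, segments):
    """Write (onset, offset, label) triples, one segment per line."""
    with open(path, "w") as f:
        for onset, offset, label in segments:
            f.write(f"{onset:.3f} {offset:.3f} {label}\n")

def read_lab(path):
    """Parse a .lab file back into a list of (onset, offset, label)."""
    segments = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) != 3:
                continue  # skip empty or malformed lines
            onset, offset, label = parts
            segments.append((float(onset), float(offset), label))
    return segments

segs = [(0.0, 12.5, "A"), (12.5, 31.2, "B"), (31.2, 43.7, "A")]
write_lab("example.lab", segs)
print(read_lab("example.lab"))
```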
Ground Truth
Ground truth data on audio is available for more than 200 songs, so given a quality measure everyone agrees on, evaluation wouldn't be harder than on other MIREX tasks. At the last ISMIR conference Lukashevich proposed a measure for segmentation evaluation.
Potential Participants
Matthias Mauch, Queen Mary, University of London --Matthias 08:49, 30 June 2009 (UTC)
Maarten Grachten, Johannes Kepler University, Linz, Austria -- Maarten
Geoffroy Peeters, IRCAM, Paris, France (depending on the kind of annotations)
Jouni Paulus, Tampere University of Technology, Finland
Stephan Huebler, Technical University of Dresden, Germany -- Stephan
Jordan Smith, McGill University, Montreal, Canada
Issues and Discussion
Thanks for the initiative! I might be interested in participating. Are you referring to segmentation of audio, or symbolic data? What set of annotated data did you refer to? [Maarten Grachten]
Yes, sorry, forgot to specify that. I'm mainly interested in audio, so I changed that above. --Matthias 11:04, 30 June 2009 (UTC)
The more the merrier: I could as well throw in the algo I implemented 2 years ago for my thesis [1]. I'm also curious about the annotated data mentioned. Thanks for your effort! --Ewald 17:33, 1 July 2009 (UTC)
Regarding ground truth: at Queen Mary we have the complete Beatles segmentations (with starts at bar beginnings), plus tens of other songs by Carole King, Queen, and Zweieck. We could leave the latter three untouched (i.e. I would not train my own algorithm on them), or publish them soon, so everyone can train their method on them. --Matthias 16:07, 7 August 2009 (UTC)
Defining the segment: In my opinion a segment would be a state with similar acoustical content (like in Lukashevich). I just want to make clear what the algo should do. --Stephan 10:04, 10 August 2009 (UTC)
Some notes: The proposed output with the Wavesurfer-like format is probably the best for this first go at the task. For the evaluation metric: I'd propose using both the F-measure for frame pairs (as per Levy&Sandler) and the over/under segmentation measure by Lukashevich because they provide slightly different information. Both of these assume a "state" based description of the structure, so the hierarchical differences will not be handled very gracefully (hierarchical differences do exist if different persons annotate the same piece, and a better metric should perhaps be developed at some point). Still, for the sake of simplicity they would be adequate for the task. The question of the data is a bit more interesting. We used three different data sets in a recent publication: a large in-house set that can't be distributed even for MIREX, 174 songs by The Beatles from UPF and from TUT, and RWC Pop. The last two of these are publicly available, so basically anybody could train the system with them (if there is something to train). --Paulus 13:21, 10 August 2009 (UTC)
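As an illustration of the frame-pair F-measure mentioned above, here is a Python sketch: both segmentations are sampled on a common frame grid, and agreement is scored over pairs of frames that share a label. The helper names and the 0.1 s hop are assumptions, not part of the proposal.

```python
from collections import Counter

def frame_labels(segments, hop=0.1):
    """Sample a segmentation of (onset, offset, label) on a regular frame grid."""
    end = max(off for _, off, _ in segments)
    labels = []
    t = 0.0
    while t < end:
        for on, off, lab in segments:
            if on <= t < off:
                labels.append(lab)
                break
        else:
            labels.append(None)  # gap in the annotation, if any
        t += hop
    return labels

def pairwise_f(ref, est, hop=0.1):
    """Pairwise frame-clustering precision/recall/F (after Levy & Sandler)."""
    a = frame_labels(ref, hop)
    b = frame_labels(est, hop)
    n = min(len(a), len(b))
    a, b = a[:n], b[:n]

    def pairs(counts):
        # number of frame pairs sharing a label: sum of C(count, 2)
        return sum(v * (v - 1) // 2 for v in counts.values())

    both = pairs(Counter(zip(a, b)))   # pairs agreeing in both segmentations
    in_ref = pairs(Counter(a))
    in_est = pairs(Counter(b))
    prec = both / in_est if in_est else 0.0
    rec = both / in_ref if in_ref else 0.0
    f = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f
```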
Some comments:
1) Using acoustical similarities would be the best (therefore we must be careful with some test-sets merging acoustical-similarity descriptions with timeline-based descriptions such as "intro" or "outro"; how do we deal with this timeline-based description?). A deep analysis of the content of each test-set will be necessary in order to do that. We could share this work among those interested.
2) concerning evaluation, I would be in favor of having
- A) a measure of the segmentation precision for a set of precision windows (using Recall, Precision, F-measure curves versus Precision Window)
- B) a labeling/segmentation measure: for this the Normalized Conditional Entropy [Lukashevich2008] is OK, or the more demanding "modeling error" obtained by aligning (without re-use) the annotated and estimated labels [Peeters2007]
3) Defining the number of labels would help a lot (some test-sets use a very restricted vocabulary, others a very large one); or giving the possibility to output an estimated hierarchical structure --Peeters 17:28, 17 August 2009 (UTC)
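Point 2-A above (precision/recall/F-measure curves over a set of acceptance windows) could be sketched as follows. The greedy one-to-one matching of boundaries is an assumption; the proposal does not fix a matching scheme.

```python
def boundary_prf(ref_bounds, est_bounds, window):
    """Boundary precision/recall/F for one acceptance window (in seconds).
    Each estimated boundary may match at most one reference boundary."""
    matched = 0
    used = set()
    for rb in ref_bounds:
        # greedily take the closest unused estimate within the window
        best = None
        for i, eb in enumerate(est_bounds):
            if i in used or abs(eb - rb) > window:
                continue
            if best is None or abs(eb - rb) < abs(est_bounds[best] - rb):
                best = i
        if best is not None:
            used.add(best)
            matched += 1
    prec = matched / len(est_bounds) if est_bounds else 0.0
    rec = matched / len(ref_bounds) if ref_bounds else 0.0
    f = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f

# Curves over several acceptance windows, as suggested above (times made up):
ref = [0.0, 12.5, 31.2, 43.7]
est = [0.2, 13.4, 30.0, 40.0]
for w in (0.5, 1.0, 3.0):
    print(w, boundary_prf(ref, est, w))
```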
Prompted by Ehmann's email, I'm wondering about how to really factor hierarchical levels into the evaluation. We would want to have the evaluation not penalize a misestimation of structural scale (rather than of actual structural organization), but unless I'm mistaken, the only ground truth we have is at a single structural scale. For instance, if a song's ground truth was ABABCABCC, we may hope that an estimated analysis of DDCDCC (which is correct except that, say, it didn't subdivide a verse D into its 2 important sub-units AB) would be correct. Unless we have ground truth annotated at several hierarchical levels, we can't do this comparison.
But: we could allow the algorithms to produce for each song not a single structural estimation, but a hierarchical tree of related estimates. So for this example, an algorithm may answer "AABABB ~ (AB)(AB)C(AB)CC ~ (AA'B)(AA'B)(CD)(AA'B)(CD)(CD) ~ ..." and so forth.
This potentially addresses the issue of not knowing ahead of time how many different labels to expect: naturally, at larger scales, fewer labels may be necessary. And hopefully, whichever of one's guesses winds up being at the "correct" scale will have also correctly guessed the number of labels.
One obvious drawback to this: this kind of output is dramatically different (I think) from most people's current algorithms, and to redesign these in less than a month is perhaps unfeasible. The alternative is to get multi-scale annotations, but I don't think we have that either. Which leaves the 3rd alternative: forgetting hierarchical concerns altogether and just getting something that works well enough.
It seems like this is a very deep and far-reaching issue, one that probably lies at the heart of the structural segmentation task. Maybe we need a few more follow-ups to Lukashevich before this gets ironed out at MIREX! --Jordan 23:12, 17 August 2009 (UTC)
An attempt to revive the discussion. If we want to run the task this year (or at all), some compromises have to be made. I think we can all agree that the hierarchical nature of music piece structure is a problem that will need to be addressed at some point. However, I am not at all sure it should be addressed now. It would be more important to get the ball rolling, run the task, and then revise it in the following years to be more accurate. I'd see the task defined by the following points (and please, do comment on these):
- The provided ground truth is taken as it is, and the algorithms should replicate the way it was produced. That means no hierarchical evaluation attempts, etc. People provide different segmentations and groupings of the same song, but let's just take an engineering approach and mimic an "average" music listener.
- No overlapping segments. The end of one segment marks the start of the next, and the segments cover the entire duration of the piece. (Segments labelled "Silence" in the ground truth need special handling to exclude them from the actual evaluation.)
- Two subtasks: segmentation and segmentation&grouping. In plain segmentation only the locations of the borders matter. Evaluation can be made with precision/recall using one or multiple acceptance windows (as Geoffroy proposed). In the second, the piece is segmented and the segments are grouped (all verses in one group, all choruses in another, etc.). We can use all three evaluation measures, as none of them depends on the absolute labels: clustering F-measure (Levy), under/oversegmentation (Lukashevich), modelling error (Peeters), and maybe something else?
- The label assigned to a segment serves only to indicate the grouping, and the evaluation should handle this.
- For each song, only the acoustic data is provided, with no additional information. It is the job of the algorithm to do the analysis.
--Paulus 14:53, 2 September 2009 (UTC)
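For reference, the under/oversegmentation measure named in the subtask list (normalized conditional entropy, after Lukashevich 2008) can be sketched on aligned frame-label sequences. The handling of the degenerate single-label case is an assumption of this sketch:

```python
import math
from collections import Counter

def cond_entropy(x, y):
    """H(Y|X) in bits for two aligned label sequences."""
    n = len(x)
    joint = Counter(zip(x, y))
    marg = Counter(x)
    h = 0.0
    for (xi, yi), c in joint.items():
        h -= (c / n) * math.log2(c / marg[xi])
    return h

def _score(h, n_labels):
    # degenerate case: a single label leaves no entropy to normalize by
    if n_labels <= 1:
        return 1.0
    return 1.0 - h / math.log2(n_labels)

def over_under_segmentation(ref, est):
    """Oversegmentation/undersegmentation scores (S_o, S_u); 1.0 is best.
    ref and est are frame-label sequences of equal length."""
    s_o = _score(cond_entropy(ref, est), len(set(est)))
    s_u = _score(cond_entropy(est, ref), len(set(ref)))
    return s_o, s_u

ref = list("AAAABBBBAAAA")
est = list("AABBCCDDEEFF")  # an oversegmented estimate
print(over_under_segmentation(ref, est))
```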