2019:Structural Segmentation
== Description ==
<br />
The aim of the MIREX structural segmentation evaluation is to identify the key structural sections in musical audio. The segment structure (or form) is one of the most important musical parameters. It is also special in that musical structure -- especially in popular music genres (e.g. verse, chorus) -- is accessible to everybody: perceiving it requires no particular musical knowledge. This task was first run in 2009.
<br />
== Data ==

=== Collections ===
* The MIREX 2009 collection: 297 pieces, most of them derived from the work of the Beatles.

* The MIREX 2010 RWC collection: 100 pieces of popular music. There are two ground truths: the first is the one originally included with the RWC dataset; the second is explained at http://hal.inria.fr/docs/00/47/34/79/PDF/PI-1948.pdf. The second set of annotations contains no labels for the segments, but rather provides an annotation of segment boundaries only.

* The MIREX 2012 dataset: over 1,000 annotated pieces covering a range of musical styles. The majority of the pieces have been annotated by two independent annotators.
<br />
=== Audio Formats ===

* CD-quality (PCM, 16-bit, 44100 Hz)
* single channel (mono)
<br />
== Submission Format ==

Submissions to this task must conform to the format detailed below. Each submission should be packaged and contain at least two files: the algorithm itself and a README containing contact information and detailing, in full, the use of the algorithm.

=== Input Data ===
Participating algorithms must read audio in the following format:

* Sample rate: 44.1 kHz
* Sample size: 16 bit
* Number of channels: 1 (mono)
* Encoding: WAV
<br />
=== Output Data ===

The structural segmentation algorithms must return the segmentation in an ASCII text file for each input .wav audio file. The specification of this output file is given below.
<br />
=== Output File Format (Structural Segmentation) ===

The structural segmentation output file format is a tab-delimited ASCII text format, the same as Chris Harte's chord-labelling files (.lab), and hence also the same format as the ground truth. Onset and offset times are given in seconds, and the labels are simply letters: 'A', 'B', ..., with segments referring to the same structural element sharing the same label.

Three-column text file of the format:

 <onset_time(sec)>\t<offset_time(sec)>\t<label>\n
 <onset_time(sec)>\t<offset_time(sec)>\t<label>\n
 ...

where \t denotes a tab and \n denotes the end of the line. The < and > characters are not included. An example output file would look something like:

 0.000 5.223 A
 5.223 15.101 B
 15.101 20.334 A
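The three-column format above can be read with a short parser; the following is a minimal sketch (the helper name <code>read_lab</code> is illustrative, not part of the task specification), splitting on whitespace so that both tab- and space-separated files are accepted:

```python
def read_lab(path):
    """Parse a segmentation .lab file into (onset, offset, label) tuples."""
    segments = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue  # skip blank lines
            # split on any whitespace: tolerates tabs or spaces as separators
            onset, offset, label = line.split()
            segments.append((float(onset), float(offset), label))
    return segments
```

A round trip through this parser yields, for the example file above, <code>[(0.0, 5.223, 'A'), (5.223, 15.101, 'B'), (15.101, 20.334, 'A')]</code>.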
<br />
=== Algorithm Calling Format ===

The submitted algorithm must take as arguments a SINGLE .wav file on which to perform the structural segmentation, as well as the full path and filename of the output file. The ability to specify the output path and filename is essential. Denoting the input .wav file path and name as %input and the output file path and name as %output, a program called foobar could be called from the command line as follows:

 foobar %input %output
 foobar -i %input -o %output

Moreover, if your submission takes additional parameters, foobar could be called like:

 foobar .1 %input %output
 foobar -param1 .1 -i %input -o %output

If your submission is in MATLAB, it should be submitted as a function. Once again, the function must take string inputs for the full paths and names of the input and output files. Parameters may also be specified as input arguments of the function. For example:

 foobar('%input','%output')
 foobar(.1,'%input','%output')
<br />
=== README File ===

A README file accompanying each submission should contain explicit instructions on how to run the program (as well as contact information, etc.). In particular, each command line to run should be specified, using %input for the input sound file and %output for the resulting text file.

For instance, to test the program foobar with a specific value for the parameter param1, the README file would contain:

 foobar -param1 .1 -i %input -o %output

For a submission using MATLAB, the README file could contain:

 matlab -r "foobar(.1,'%input','%output');quit;"
<br />
== Evaluation Procedures ==
At ISMIR 2008, [http://ismir2008.ismir.net/papers/ISMIR2008_219.pdf Lukashevich] proposed a measure for segmentation evaluation. Because of the complexity of the structural segmentation task definition, several different evaluation measures are employed to address its different aspects. It should be noted that none of the evaluation measures cares about the actual section labels: the labels only denote the clustering. This means that it does not matter whether the systems produce true labels such as "chorus" and "verse", or arbitrary labels such as "A" and "B".
<br />
=== Boundary retrieval ===
'''Hit rate''' Found segment boundaries are accepted as correct if they lie within 0.5 s ([http://ismir2007.ismir.net/proceedings/ISMIR2007_p051_turnbull.pdf Turnbull et al. ISMIR 2007]) or 3 s ([http://dx.doi.org/10.1109/TASL.2007.910781 Levy & Sandler TASLP 2008]) of a boundary in the ground truth. Based on the matched hits, the ''boundary retrieval recall rate'', ''boundary retrieval precision rate'', and ''boundary retrieval F-measure'' are calculated.
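The hit-rate measure can be sketched as follows. This is not the official MIREX evaluator; it assumes a greedy one-to-one matching of estimated to reference boundaries (each reference boundary may be claimed by at most one estimate), which is one reasonable reading of "matched hits":

```python
def boundary_hit_rate(est, ref, window=0.5):
    """Boundary retrieval precision, recall, and F-measure.

    est, ref: lists of boundary times in seconds.
    An estimated boundary is a hit if it lies within `window` seconds
    of a still-unmatched reference boundary (greedy one-to-one matching
    -- an assumption, not mandated by the task description).
    """
    matched = set()
    hits = 0
    for b in est:
        best, best_d = None, window
        for i, r in enumerate(ref):
            d = abs(b - r)
            if i not in matched and d <= best_d:
                best, best_d = i, d
        if best is not None:
            matched.add(best)
            hits += 1
    precision = hits / len(est) if est else 0.0
    recall = hits / len(ref) if ref else 0.0
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f
```

For example, <code>boundary_hit_rate([0.0, 3.0], [0.0, 10.0])</code> matches only the boundary at 0.0 and returns precision, recall, and F-measure of 0.5 each.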
<br />
'''Median deviation''' Two median deviation measures between the boundaries in the result and in the ground truth are calculated: ''median true-to-guess'' is the median time from each boundary in the ground truth to the closest boundary in the result, and ''median guess-to-true'' is similarly the median time from each boundary in the result to the closest boundary in the ground truth. ([http://ismir2007.ismir.net/proceedings/ISMIR2007_p051_turnbull.pdf Turnbull et al. ISMIR 2007])
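The two median deviations follow directly from that definition; a minimal sketch (assuming both boundary lists are non-empty):

```python
import statistics

def median_deviations(est, ref):
    """Median true-to-guess and guess-to-true boundary deviations (seconds).

    est, ref: non-empty lists of boundary times in seconds.
    """
    # for each ground-truth boundary, distance to the closest estimated one
    true_to_guess = statistics.median(min(abs(r - e) for e in est) for r in ref)
    # for each estimated boundary, distance to the closest ground-truth one
    guess_to_true = statistics.median(min(abs(e - r) for r in ref) for e in est)
    return true_to_guess, guess_to_true
```

Note the two values differ in general: a system that outputs very many boundaries trivially achieves a small true-to-guess deviation but a large guess-to-true deviation, which is why both are reported.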
<br />
=== Frame clustering ===
Both the result and the ground truth are handled in short frames (e.g., beat-length or fixed 100 ms). All frame pairs in a structure description are considered. The pairs in which both frames are assigned to the same cluster (i.e., have the same label) form the sets <math>P_E</math> (for the system result) and <math>P_A</math> (for the ground truth). The ''pairwise precision rate'' is then <math>P = \frac{|P_E \cap P_A|}{|P_E|}</math>, the ''pairwise recall rate'' <math>R = \frac{|P_E \cap P_A|}{|P_A|}</math>, and the ''pairwise F-measure'' <math>F=\frac{2 P R}{P + R}</math>. ([http://dx.doi.org/10.1109/TASL.2007.910781 Levy & Sandler TASLP 2008])
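The pairwise measure can be sketched directly from the set definitions above (an O(n&sup2;) construction that is fine for illustration, though a real evaluator would count label co-occurrences instead):

```python
from itertools import combinations

def pairwise_f(est_labels, ref_labels):
    """Pairwise precision, recall, and F-measure over frame labels.

    est_labels, ref_labels: sequences of cluster labels, one per frame,
    covering the same frames.
    """
    n = len(ref_labels)
    assert len(est_labels) == n, "both descriptions must cover the same frames"
    # P_E and P_A: frame pairs sharing a label in the result / ground truth
    p_e = {(i, j) for i, j in combinations(range(n), 2) if est_labels[i] == est_labels[j]}
    p_a = {(i, j) for i, j in combinations(range(n), 2) if ref_labels[i] == ref_labels[j]}
    inter = len(p_e & p_a)
    precision = inter / len(p_e) if p_e else 0.0
    recall = inter / len(p_a) if p_a else 0.0
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f
```

For instance, with <code>est_labels = ['A','A','B','B']</code> and <code>ref_labels = ['A','A','A','B']</code>, the sets are P_E = {(0,1), (2,3)} and P_A = {(0,1), (0,2), (1,2)}, giving P = 1/2, R = 1/3, F = 0.4.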
<br />
=== Normalised conditional entropies ===
Over- and under-segmentation based evaluation measures were proposed in [http://ismir2008.ismir.net/papers/ISMIR2008_219.pdf Lukashevich ISMIR 2008].
Structure descriptions are represented as frame sequences with associated cluster information (as in the frame clustering measure). A confusion matrix between the labels in the ground truth and in the result is calculated. The matrix C is of size <math>|L_A| \times |L_E|</math>, i.e., the number of unique labels in the ground truth times the number of unique labels in the result. From the confusion matrix, the joint distribution is calculated by normalising the values with the total number of frames F:
<br />
<math>p_{i,j} = C_{i,j} / F</math>

Similarly, the two marginals are calculated:

<math>p_i^a = \sum_{j=1}^{|L_E|} C_{i,j}/F</math>, and

<math>p_j^e = \sum_{i=1}^{|L_A|} C_{i,j}/F</math>

Conditional distributions:

<math>p_{i,j}^{a|e} = C_{i,j} / \sum_{i=1}^{|L_A|} C_{i,j}</math>, and

<math>p_{i,j}^{e|a} = C_{i,j} / \sum_{j=1}^{|L_E|} C_{i,j}</math>
<br />
The conditional entropies are then

<math>H(E|A) = - \sum_{i=1}^{|L_A|} p_i^a \sum_{j=1}^{|L_E|} p_{i,j}^{e|a} \log_2(p_{i,j}^{e|a})</math>, and

<math>H(A|E) = - \sum_{j=1}^{|L_E|} p_j^e \sum_{i=1}^{|L_A|} p_{i,j}^{a|e} \log_2(p_{i,j}^{a|e})</math>

The final evaluation measures are the oversegmentation score

<math>S_O = 1 - \frac{H(E|A)}{\log_2(|L_E|)}</math>, and the undersegmentation score

<math>S_U = 1 - \frac{H(A|E)}{\log_2(|L_A|)}</math>
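The full chain -- confusion matrix, conditional entropies, normalised scores -- can be sketched as below. This is not the official evaluator; in particular, the treatment of the degenerate single-label case (where the normaliser <math>\log_2 1 = 0</math>) as a perfect score of 1 is an assumption:

```python
import math
from collections import Counter

def over_under_segmentation(est_labels, ref_labels):
    """Oversegmentation S_O and undersegmentation S_U scores
    from normalised conditional entropies (after Lukashevich, ISMIR 2008).

    est_labels, ref_labels: per-frame cluster labels of equal length.
    """
    n = len(ref_labels)
    c = Counter(zip(ref_labels, est_labels))  # confusion counts C_{i,j}
    ref_ids = sorted(set(ref_labels))         # L_A
    est_ids = sorted(set(est_labels))         # L_E

    def cond_entropy(outer_ids, inner_ids, cell):
        # sum over outer labels: p(outer) * H(inner | outer)
        h = 0.0
        for o in outer_ids:
            row = [cell(o, i) for i in inner_ids]
            tot = sum(row)
            h += (tot / n) * -sum((x / tot) * math.log2(x / tot) for x in row if x)
        return h

    h_ea = cond_entropy(ref_ids, est_ids, lambda a, e: c[(a, e)])  # H(E|A)
    h_ae = cond_entropy(est_ids, ref_ids, lambda e, a: c[(a, e)])  # H(A|E)

    # single-label descriptions give log2(1) = 0; score 1 by convention here
    s_o = 1 - h_ea / math.log2(len(est_ids)) if len(est_ids) > 1 else 1.0
    s_u = 1 - h_ae / math.log2(len(ref_ids)) if len(ref_ids) > 1 else 1.0
    return s_o, s_u
```

Intuitively, a heavily oversegmented result (many estimated labels per ground-truth label) drives H(E|A) up and S_O down, while collapsing everything into one label leaves S_O perfect but S_U at its minimum, which is why both scores are needed.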
<br />
== Relevant Development Collections ==
* Jouni Paulus's [http://www.cs.tut.fi/sgn/arg/paulus/structure.html structure analysis page] links to a corpus of 177 Beatles songs ([http://www.cs.tut.fi/sgn/arg/paulus/beatles_sections_TUT.zip zip file]). The Beatles annotations are not part of the TUTstructure07 dataset; that dataset contains 557 songs, a list of which is available [http://www.cs.tut.fi/sgn/arg/paulus/TUTstructure07_files.html here].

* Ewald Peiszer's [http://www.ifs.tuwien.ac.at/mir/audiosegmentation.html thesis page] links to a portion of the corpus he used: 43 non-Beatles pop songs, including 10 J-pop songs ([http://www.ifs.tuwien.ac.at/mir/audiosegmentation/dl/ep_groundtruth_excl_Paulus.zip zip file]).

These public corpora give a combined 220 songs.
<br />
== Time and hardware limits ==
Due to the potentially high number of participants in this and other audio tasks, hard limits on the runtime of submissions will be imposed.

A hard limit of 24 hours will be imposed on analysis times. Submissions exceeding this limit may not receive a result.
<br />
== Potential Participants ==

name / email

== Discussion ==