MIREX Wiki - User contributions [en] (user: S.boeck; feed retrieved 2024-03-28T18:44:56Z; MediaWiki 1.31.1)
Feed URL: https://www.music-ir.org/mirex/w/api.php?action=feedcontributions&user=S.boeck&feedformat=atom

'''2016:Audio Downbeat Estimation Results''', 2016-07-28T06:20:43Z, S.boeck: added * to DBDR1/2 on RWC classical
https://www.music-ir.org/mirex/w/index.php?title=2016:Audio_Downbeat_Estimation_Results&diff=11758
<hr />
<div>= Submitted Algorithms =<br />
<br />
{|class="wikitable" style="text-align: left;"<br />
|+ Algorithms submitted to the Automatic Downbeat Estimation task<br />
! width="80" | Submission code <br />
! width="200" | Submission name <br />
! width="80" style="text-align: center;" | Abstract <br />
! width="440" | Contributors<br />
|-<br />
! DBDR1<br />
| DB1_no_beatles || style="text-align: center;" | [https://www.music-ir.org/mirex/abstracts/2016/DBDR1.pdf PDF] || Simon Durand, Juan Bello, Bertrand David, Gael Richard<br />
|-<br />
! DBDR2<br />
| DB2_no_ballroom || style="text-align: center;" | [https://www.music-ir.org/mirex/abstracts/2016/DBDR2.pdf PDF] || Simon Durand, Juan Bello, Bertrand David, Gael Richard<br />
|-<br />
! KB1<br />
| beats_2013 || style="text-align: center;" | [https://www.music-ir.org/mirex/abstracts/2016/KBDW1.pdf PDF] || Florian Krebs, Sebastian Böck<br />
|-<br />
! KB2<br />
| beats_2015 || style="text-align: center;" | [https://www.music-ir.org/mirex/abstracts/2016/FK3.pdf PDF] || Florian Krebs, Sebastian Böck<br />
|-<br />
! BK4<br />
| joint_tracker || style="text-align: center;" | [https://www.music-ir.org/mirex/abstracts/2016/BK4.pdf PDF] || Sebastian Böck, Florian Krebs <br />
|-<br />
! DSR1<br />
| downbeater || style="text-align: center;" | [https://www.music-ir.org/mirex/abstracts/2016/DSR1.pdf PDF] || Matthew Davies, Adam Stark, Andrew Robertson<br />
|-<br />
! CD4<br />
| qm-barbeattracker || style="text-align: center;" | [https://www.music-ir.org/mirex/abstracts/2016/CD4.pdf PDF] || Matthew Davies, Chris Cannam<br />
|-<br />
|}<br />
<br />
= Results =<br />
<br />
{| class="wikitable sortable" border="1"<br />
|+ Results ballroom dataset<br />
|-<br />
! Algorithm<br />
! F-Measure<br />
! Precision<br />
! Recall<br />
|- <br />
| DBDR1*<br />
| 0.838 <br />
| 0.874 <br />
| 0.846 <br />
|- <br />
| DBDR2<br />
| 0.783 <br />
| 0.808 <br />
| 0.804 <br />
|- <br />
| BK4*<br />
| 0.908 <br />
| 0.906 <br />
| 0.917 <br />
|- <br />
| CD4<br />
| 0.412 <br />
| 0.416 <br />
| 0.419 <br />
|- <br />
| DSR1<br />
| 0.463 <br />
| 0.476 <br />
| 0.468 <br />
|- <br />
| KB1*<br />
| 0.898 <br />
| 0.888 <br />
| 0.917 <br />
|- <br />
| KB2*<br />
| 0.860 <br />
| 0.853 <br />
| 0.890 <br />
|}<br />
<br />
{| class="wikitable sortable" border="1"<br />
|+ Results beatles dataset<br />
|-<br />
! Algorithm<br />
! F-Measure<br />
! Precision<br />
! Recall<br />
|- <br />
| DBDR1<br />
| 0.849 <br />
| 0.861 <br />
| 0.868 <br />
|- <br />
| DBDR2*<br />
| 0.872 <br />
| 0.861 <br />
| 0.909 <br />
|- <br />
| BK4*<br />
| 0.865 <br />
| 0.872 <br />
| 0.876 <br />
|- <br />
| CD4<br />
| 0.604 <br />
| 0.586 <br />
| 0.642 <br />
|- <br />
| DSR1<br />
| 0.665 <br />
| 0.646 <br />
| 0.708 <br />
|- <br />
| KB1<br />
| 0.803 <br />
| 0.776 <br />
| 0.859 <br />
|- <br />
| KB2*<br />
| 0.818 <br />
| 0.799 <br />
| 0.870<br />
|}<br />
<br />
<br />
{| class="wikitable sortable" border="1"<br />
|+ Results carnatic dataset<br />
|-<br />
! Algorithm<br />
! F-Measure<br />
! Precision<br />
! Recall<br />
|- <br />
| DBDR1<br />
| 0.201 <br />
| 0.199 <br />
| 0.240 <br />
|- <br />
| DBDR2<br />
| 0.231 <br />
| 0.194 <br />
| 0.330 <br />
|- <br />
| BK4*<br />
| 0.369 <br />
| 0.290 <br />
| 0.566 <br />
|- <br />
| CD4<br />
| 0.186 <br />
| 0.154 <br />
| 0.258 <br />
|- <br />
| DSR1<br />
| 0.184 <br />
| 0.155 <br />
| 0.251 <br />
|- <br />
| KB1<br />
| 0.269 <br />
| 0.221 <br />
| 0.380 <br />
|- <br />
| KB2*<br />
| 0.330 <br />
| 0.263 <br />
| 0.487<br />
|}<br />
<br />
<br />
{| class="wikitable sortable" border="1"<br />
|+ Results turkish dataset<br />
|-<br />
! Algorithm<br />
! F-Measure<br />
! Precision<br />
! Recall<br />
|- <br />
| DBDR1<br />
| 0.306 <br />
| 0.292 <br />
| 0.379 <br />
|- <br />
| DBDR2<br />
| 0.415 <br />
| 0.360 <br />
| 0.554 <br />
|- <br />
| BK4*<br />
| 0.537 <br />
| 0.468 <br />
| 0.729 <br />
|- <br />
| CD4<br />
| 0.218 <br />
| 0.186 <br />
| 0.291 <br />
|- <br />
| DSR1<br />
| 0.317 <br />
| 0.281 <br />
| 0.411 <br />
|- <br />
| KB1<br />
| 0.352 <br />
| 0.301 <br />
| 0.498 <br />
|- <br />
| KB2*<br />
| 0.336 <br />
| 0.269 <br />
| 0.513<br />
|}<br />
<br />
<br />
{| class="wikitable sortable" border="1"<br />
|+ Results cretan dataset<br />
|-<br />
! Algorithm<br />
! F-Measure<br />
! Precision<br />
! Recall<br />
|- <br />
| DBDR1<br />
| 0.426 <br />
| 0.715 <br />
| 0.308 <br />
|- <br />
| DBDR2<br />
| 0.418 <br />
| 0.637 <br />
| 0.311 <br />
|- <br />
| BK4*<br />
| 0.635 <br />
| 0.951 <br />
| 0.476 <br />
|- <br />
| CD4<br />
| 0.250 <br />
| 0.377 <br />
| 0.188 <br />
|- <br />
| DSR1<br />
| 0.265 <br />
| 0.398 <br />
| 0.199 <br />
|- <br />
| KB1<br />
| 0.433 <br />
| 0.641 <br />
| 0.328 <br />
|- <br />
| KB2*<br />
| 0.443 <br />
| 0.661 <br />
| 0.334<br />
|}<br />
<br />
<br />
{| class="wikitable sortable" border="1"<br />
|+ Results hjdb dataset<br />
|-<br />
! Algorithm<br />
! F-Measure<br />
! Precision<br />
! Recall<br />
|- <br />
| DBDR1<br />
| 0.578 <br />
| 0.613 <br />
| 0.561 <br />
|- <br />
| DBDR2<br />
| 0.629 <br />
| 0.628 <br />
| 0.638 <br />
|- <br />
| BK4*<br />
| 0.970 <br />
| 0.970 <br />
| 0.970 <br />
|- <br />
| CD4<br />
| 0.334 <br />
| 0.341 <br />
| 0.329 <br />
|- <br />
| DSR1<br />
| 0.208 <br />
| 0.232 <br />
| 0.196 <br />
|- <br />
| KB1<br />
| 0.690 <br />
| 0.693 <br />
| 0.688 <br />
|- <br />
| KB2*<br />
| 0.851 <br />
| 0.854 <br />
| 0.848<br />
|}<br />
<br />
{| class="wikitable sortable" border="1"<br />
|+ Results rwc_classical dataset<br />
|-<br />
! Algorithm<br />
! F-Measure<br />
! Precision<br />
! Recall<br />
|- <br />
| DBDR1*<br />
| 0.527 <br />
| 0.570 <br />
| 0.529 <br />
|- <br />
| DBDR2*<br />
| 0.532 <br />
| 0.539 <br />
| 0.574 <br />
|- <br />
| BK4*<br />
| 0.599 <br />
| 0.659 <br />
| 0.598 <br />
|- <br />
| CD4<br />
| 0.174 <br />
| 0.189 <br />
| 0.185 <br />
|- <br />
| DSR1<br />
| 0.251 <br />
| 0.260 <br />
| 0.279 <br />
|- <br />
| KB1<br />
| 0.436 <br />
| 0.475 <br />
| 0.447 <br />
|- <br />
| KB2*<br />
| 0.428 <br />
| 0.459 <br />
| 0.444<br />
|}<br />
<br />
{| class="wikitable sortable" border="1"<br />
|+ Results gtzan dataset<br />
|-<br />
! Algorithm<br />
! F-Measure<br />
! Precision<br />
! Recall<br />
|- <br />
| DBDR1<br />
| 0.615 <br />
| 0.651 <br />
| 0.631 <br />
|- <br />
| DBDR2<br />
| 0.619 <br />
| 0.628 <br />
| 0.666 <br />
|- <br />
| BK4<br />
| 0.638 <br />
| 0.636 <br />
| 0.669 <br />
|- <br />
| CD4<br />
| 0.460 <br />
| 0.461 <br />
| 0.482 <br />
|- <br />
| DSR1<br />
| 0.397 <br />
| 0.397 <br />
| 0.423 <br />
|- <br />
| KB1<br />
| 0.630 <br />
| 0.647 <br />
| 0.634 <br />
|- <br />
| KB2<br />
| 0.647 <br />
| 0.665 <br />
| 0.653<br />
|}<br />
<br />
<nowiki>*</nowiki>) Rows marked with an asterisk should be interpreted with care: in those cases the training and test sets overlapped, which can lead to overestimated metrics.<br />
<br />
= Runtime =<br />
All submissions finished the computations in less than 24 hours.</div>

'''2016:Task Captains''', 2016-02-19T08:16:21Z, S.boeck
https://www.music-ir.org/mirex/w/index.php?title=2016:Task_Captains&diff=11667
<hr />
<div>As at ISMIR 2015, we are prepared to improve the distribution of tasks for the upcoming MIREX 2016. To do so, we need leaders to help us organize and run each task.<br />
<br />
To volunteer to lead one or more tasks, please add your name in the "Captains" column.<br />
<br />
What does it mean to lead a task?<br />
* Update wiki pages as needed<br />
* Communicate with submitters and troubleshoot submissions<br />
* Run and evaluate submissions<br />
* Publish final results<br />
<br />
Due to the proprietary nature of much of the data, the submission system, evaluation framework, and most of the datasets will continue to be hosted by IMIRSEL. However, we are prepared to provide access to task organizers to manage and run submissions on the IMIRSEL systems.<br />
<br />
<br />
{| class="wikitable" style="margin-left: 20px"<br />
!ID !! Task !! Captain(s)<br />
|-<br />
|abt<br />
|[[2016:Audio Beat Tracking]]<br />
|Sebastian Böck, Florian Krebs<br />
|-<br />
|ace<br />
|[[2016:Audio Chord Estimation]]<br />
|<br />
|-<br />
|act<br />
|[[2016:Audio Classification (Train/Test) Tasks]]<br />
|IMIRSEL<br />
|-<br />
|acs<br />
|[[2016:Audio Cover Song Identification]]<br />
|<br />
|-<br />
|ade<br />
|[[2016:Audio Downbeat Estimation]]<br />
|Florian Krebs, Sebastian Böck<br />
|-<br />
|akd<br />
|[[2016:Audio Key Detection]]<br />
|<br />
|-<br />
|ame<br />
|[[2016:Audio Melody Extraction]]<br />
|<br />
|-<br />
|ams<br />
|[[2016:Audio Music Similarity and Retrieval]]<br />
|IMIRSEL<br />
|-<br />
|aod<br />
|[[2016:Audio Onset Detection]]<br />
|Sebastian Böck<br />
|-<br />
|ate<br />
|[[2016:Audio Tempo Estimation]]<br />
|<br />
|-<br />
|atg<br />
|[[2016:Audio Tag Classification]]<br />
|<br />
|-<br />
|mf0<br />
|[[2016:Multiple Fundamental Frequency Estimation & Tracking]]<br />
|<br />
|-<br />
|qbsh<br />
|[[2016:Query by Singing/Humming]]<br />
|<br />
|-<br />
|scofo<br />
|[[2016:Real-time Audio to Score Alignment (a.k.a Score Following)]]<br />
|<br />
|-<br />
|sms<br />
|[[2016:Symbolic Melodic Similarity]]<br />
|<br />
|-<br />
|struct<br />
|[[2016:Structural Segmentation]]<br />
|IMIRSEL<br />
|-<br />
|drts<br />
|[[2016:Discovery of Repeated Themes & Sections]]<br />
|<br />
|-<br />
|sli<br />
|[[2016:Set List Identification ]]<br />
|<br />
|-<br />
|mscd<br />
|[[2016:Music/Speech Classification and Detection]]<br />
|<br />
|-<br />
|aod<br />
|[[2016:Audio Offset Detection ]]<br />
|<br />
|-<br />
|afp<br />
|[[2016:Audio_Fingerprinting]]<br />
|<br />
|-<br />
|svs<br />
|[[2016:Singing Voice Separation]]<br />
|<br />
|}</div>

'''2016:Task Captains''', 2016-02-19T08:06:47Z, S.boeck: added myself
https://www.music-ir.org/mirex/w/index.php?title=2016:Task_Captains&diff=11666
<hr />
<div>As at ISMIR 2015, we are prepared to improve the distribution of tasks for the upcoming MIREX 2016. To do so, we need leaders to help us organize and run each task.<br />
<br />
To volunteer to lead one or more tasks, please add your name in the "Captains" column.<br />
<br />
What does it mean to lead a task?<br />
* Update wiki pages as needed<br />
* Communicate with submitters and troubleshoot submissions<br />
* Run and evaluate submissions<br />
* Publish final results<br />
<br />
Due to the proprietary nature of much of the data, the submission system, evaluation framework, and most of the datasets will continue to be hosted by IMIRSEL. However, we are prepared to provide access to task organizers to manage and run submissions on the IMIRSEL systems.<br />
<br />
<br />
{| class="wikitable" style="margin-left: 20px"<br />
!ID !! Task !! Captain(s)<br />
|-<br />
|abt<br />
|[[2016:Audio Beat Tracking]]<br />
|Sebastian Böck, Florian Krebs<br />
|-<br />
|ace<br />
|[[2016:Audio Chord Estimation]]<br />
|<br />
|-<br />
|act<br />
|[[2016:Audio Classification (Train/Test) Tasks]]<br />
|IMIRSEL<br />
|-<br />
|acs<br />
|[[2016:Audio Cover Song Identification]]<br />
|<br />
|-<br />
|ade<br />
|[[2016:Audio Downbeat Estimation]]<br />
|Sebastian Böck, Florian Krebs<br />
|-<br />
|akd<br />
|[[2016:Audio Key Detection]]<br />
|<br />
|-<br />
|ame<br />
|[[2016:Audio Melody Extraction]]<br />
|<br />
|-<br />
|ams<br />
|[[2016:Audio Music Similarity and Retrieval]]<br />
|IMIRSEL<br />
|-<br />
|aod<br />
|[[2016:Audio Onset Detection]]<br />
|Sebastian Böck<br />
|-<br />
|ate<br />
|[[2016:Audio Tempo Estimation]]<br />
|<br />
|-<br />
|atg<br />
|[[2016:Audio Tag Classification]]<br />
|<br />
|-<br />
|mf0<br />
|[[2016:Multiple Fundamental Frequency Estimation & Tracking]]<br />
|<br />
|-<br />
|qbsh<br />
|[[2016:Query by Singing/Humming]]<br />
|<br />
|-<br />
|scofo<br />
|[[2016:Real-time Audio to Score Alignment (a.k.a Score Following)]]<br />
|<br />
|-<br />
|sms<br />
|[[2016:Symbolic Melodic Similarity]]<br />
|<br />
|-<br />
|struct<br />
|[[2016:Structural Segmentation]]<br />
|IMIRSEL<br />
|-<br />
|drts<br />
|[[2016:Discovery of Repeated Themes & Sections]]<br />
|<br />
|-<br />
|sli<br />
|[[2016:Set List Identification ]]<br />
|<br />
|-<br />
|mscd<br />
|[[2016:Music/Speech Classification and Detection]]<br />
|<br />
|-<br />
|aod<br />
|[[2016:Audio Offset Detection ]]<br />
|<br />
|-<br />
|afp<br />
|[[2016:Audio_Fingerprinting]]<br />
|<br />
|-<br />
|svs<br />
|[[2016:Singing Voice Separation]]<br />
|<br />
|}</div>

'''2015:Task Captains''', 2015-03-27T09:05:47Z, S.boeck: Added myself
https://www.music-ir.org/mirex/w/index.php?title=2015:Task_Captains&diff=10866
<hr />
<div>As at ISMIR 2014, we are prepared to improve the distribution of tasks for the upcoming MIREX 2015. To do so, we need leaders to help us organize and run each task.<br />
<br />
To volunteer to lead one or more tasks, please add your name in the "Captains" column.<br />
<br />
What does it mean to lead a task?<br />
* Update wiki pages as needed<br />
* Communicate with submitters and troubleshoot submissions<br />
* Run and evaluate submissions<br />
* Publish final results<br />
<br />
Due to the proprietary nature of much of the data, the submission system, evaluation framework, and most of the datasets will continue to be hosted by IMIRSEL. However, we are prepared to provide access to task organizers to manage and run submissions on the IMIRSEL systems.<br />
<br />
<br />
{| class="wikitable" style="margin-left: 20px"<br />
!ID !! Task !! Captain(s)<br />
|-<br />
|abt<br />
|[[2015:Audio Beat Tracking]]<br />
|Sebastian Böck<br />
|-<br />
|ace<br />
|[[2015:Audio Chord Estimation]]<br />
|<br />
|-<br />
|act<br />
|[[2015:Audio Classification (Train/Test) Tasks]]<br />
|<br />
|-<br />
|acs<br />
|[[2015:Audio Cover Song Identification]]<br />
|<br />
|-<br />
|ade<br />
|[[2015:Audio Downbeat Estimation]]<br />
|Florian Krebs, Sebastian Böck<br />
|-<br />
|akd<br />
|[[2015:Audio Key Detection]]<br />
|<br />
|-<br />
|ame<br />
|[[2015:Audio Melody Extraction]]<br />
|<br />
|-<br />
|ams<br />
|[[2015:Audio Music Similarity and Retrieval]]<br />
|<br />
|-<br />
|aod<br />
|[[2015:Audio Onset Detection]]<br />
|Sebastian Böck<br />
|-<br />
|ate<br />
|[[2015:Audio Tempo Estimation]]<br />
|<br />
|-<br />
|atg<br />
|[[2015:Audio Tag Classification]]<br />
|<br />
|-<br />
|mf0<br />
|[[2015:Multiple Fundamental Frequency Estimation & Tracking]]<br />
|<br />
|-<br />
|qbsh<br />
|[[2015:Query by Singing/Humming]]<br />
|<br />
|-<br />
|qbt<br />
|[[2015:Query by Tapping]]<br />
|<br />
|-<br />
|scofo<br />
|[[2015:Real-time Audio to Score Alignment (a.k.a Score Following)]]<br />
|<br />
|-<br />
|sms<br />
|[[2015:Symbolic Melodic Similarity]]<br />
|<br />
|-<br />
|struct<br />
|[[2015:Structural Segmentation]]<br />
|<br />
|-<br />
|drts<br />
|[[2015:Discovery of Repeated Themes & Sections]]<br />
|<br />
|-<br />
|afp<br />
|[[2015:Audio_Fingerprinting]]<br />
|<br />
|-<br />
|svs<br />
|[[2015:Singing_Voice_Separation]]<br />
|<br />
|-<br />
|kgc<br />
|[[2015:Audio K-POP Genre Classification]]<br />
|<br />
|-<br />
|kmc<br />
|[[2015:Audio K-POP Mood Classification]]<br />
|<br />
|}</div>

'''2014:Audio Tempo Estimation''', 2014-08-18T15:15:57Z, S.boeck: /* Potential Participants */
https://www.music-ir.org/mirex/w/index.php?title=2014:Audio_Tempo_Estimation&diff=10365
<hr />
<div>== Description ==<br />
This task compares current methods for the extraction of tempo from musical audio. We distinguish between notated tempo and perceptual tempo and will test for the extraction of perceptual tempo. <br />
<br />
If the notated tempo is available (e.g., from the score), it is straightforward to attach a tempo annotation to an excerpt and run a contest in which algorithms predict the notated tempo. For excerpts without an "official" tempo annotation, we can instead annotate the *perceived* tempo. This is not a straightforward task and needs to be done carefully. If you ask a group of listeners (including skilled musicians) to annotate the tempo of music excerpts, they can give different answers (tapping at different metrical levels), especially when they are unfamiliar with the piece. For some excerpts the perceived pulse is unambiguous and everyone taps at the same metrical level, but for others the tempo is quite ambiguous and listeners split completely across levels.<br />
<br />
The annotation of perceptual tempo can take several forms: a probability density function over tempo; a series of tempi ranked by their respective perceptual salience; etc. These measures of perceptual tempo can be used as a ground truth on which to test algorithms for tempo extraction. The dominant perceived tempo is sometimes the same as the notated tempo, but not always: a piece of music can "feel" faster or slower than its notated tempo when the dominant perceived pulse lies a metrical level above or below the notated tempo.<br />
<br />
There are several reasons to examine the perceptual tempo, either in place of or in addition to the notated tempo. For many applications of automatic tempo extractors, the perceived tempo of the music is more relevant than the notated tempo. An automatic playlist generator or music navigator, for instance, might allow listeners to select or filter music by its (automatically extracted) tempo. In this case, the "feel", or perceptual tempo may be more relevant than the notated tempo. An automatic DJ apparatus might also perform better with a representation of perceived tempo rather than notated tempo.<br />
<br />
A more pragmatic reason for using perceptual tempo rather than notated tempo as a ground truth for our contest is that we simply do not have the notated tempo of our test set. If we notate it by having a panel of expert listeners tap along and label the excerpts, we are by default dealing with the perceived tempo. The handling of this data as ground truth must be done with care.<br />
<br />
<br />
== Data ==<br />
=== Collections ===<br />
MIREX 2006 Tempo dataset collected by Martin F. McKinney (Philips) and Dirk Moelants (IPEM, Ghent University). Composed of 160 30-second clips in WAV format with annotated tempos. <br />
<br />
<br />
=== Audio Formats ===<br />
The data are monophonic sound files with associated tempo annotations and information about the annotation robustness.<br />
<br />
* CD-quality (PCM, 16-bit, 44100 Hz)<br />
* single channel (mono)<br />
* 30 second clips<br />
<br />
<br />
== Submission Format ==<br />
Submissions to this task will have to conform to a specified format detailed below. Submissions should be packaged and contain at least two files: the algorithm itself and a README containing contact information and detailing, in full, the use of the algorithm.<br />
<br />
<br />
=== Input data ===<br />
Individual audio files in WAV format (30-second clips drawn from the 140 unseen tracks in the dataset). The audio recordings were selected to provide a stable tempo value, a wide distribution of tempi, and a large variety of instrumentation and musical styles. About 20% of the files contain non-binary meters, and a small number of examples contain changing meters.<br />
<br />
<br />
=== Output Data ===<br />
Submitted programs should output two tempi (a slower tempo, T1, and a faster tempo, T2) as well as the strength of T1 relative to T2 (0-1). The relative strength ST2 (not output) is simply 1 - ST1. The tempo estimates from each algorithm should be written to a text file in the following format:<br />
<br />
T1<tab>T2<tab>ST1<br />
<br />
E.g.<br />
60 180 0.7<br />
<br />
<br />
=== Algorithm Calling Format ===<br />
<br />
The submitted algorithm must take as arguments a SINGLE .wav file on which to perform tempo estimation, as well as the full output path and filename of the output file. The ability to specify the output path and file name is essential. Denoting the input .wav file path and name as ''%input'' and the output file path and name as ''%output'', a program called foobar could be called from the command line as follows:<br />
<br />
foobar %input %output<br />
or<br />
foobar -i %input -o %output<br />
<br />
Moreover, if your submission takes additional parameters, foobar could be called like:<br />
<br />
foobar .1 %input %output<br />
foobar -param1 .1 -i %input -o %output <br />
<br />
If your submission is in MATLAB, it should be submitted as a function. Once again, the function must contain String inputs for the full path and names of the input and output files. Parameters could also be specified as input arguments of the function. For example: <br />
<br />
foobar('%input','%output')<br />
foobar(.1,'%input','%output')<br />
<br />
<br />
=== README File ===<br />
<br />
A README file accompanying each submission should contain explicit instructions on how to run the program (as well as contact information, etc.). In particular, each command line to be run should be specified, using %input for the input sound file and %output for the resulting text file.<br />
<br />
<br />
== Evaluation Procedures ==<br />
<br />
This section focuses on the mechanics of the method while we discuss the data (music excerpts and perceptual data) in the next section. There are two general steps to the method: 1) collection of perceptual tempo annotations; and 2) evaluation of tempo extraction algorithms.<br />
<br />
=== Perceptual tempo data collection ===<br />
<br />
The following procedure is described in more detail in McKinney and Moelants (2004) and Moelants and McKinney (2004). Listeners were asked to tap to the beat of a series of musical excerpts. Responses were collected and their perceived tempo was calculated. For each excerpt, a distribution of perceived tempo was generated. A relatively simple form of perceived tempo was proposed for this contest: The two highest peaks in the perceived tempo distribution for each excerpt were taken, along with their respective heights (normalized to sum to 1.0) as the two tempo candidates for that particular excerpt. The height of a peak in the distribution is assumed to represent the perceptual salience of that tempo. <br />
<br />
==== References ====<br />
* McKinney, M.F. and Moelants, D. (2004), Deviations from the resonance theory of tempo induction, Conference on Interdisciplinary Musicology, Graz. URL: http://www-gewi.uni-graz.at/staff/parncutt/cim04/CIM04_paper_pdf/McKinney_Moelants_CIM04_proceedings_t.pdf<br />
* Moelants, D. and McKinney, M.F. (2004), Tempo perception and musical content: What makes a piece slow, fast, or temporally ambiguous? International Conference on Music Perception & Cognition, Evanston, IL. URL: http://icmpc8.umn.edu/proceedings/ICMPC8/PDF/AUTHOR/MP040237.PDF <br />
<br />
=== Evaluation of tempo extraction algorithms ===<br />
Algorithms will process musical excerpts and return the following data: Two tempi in BPM (T1 and T2, where T1 is the slower of the two tempi). For a given algorithm, the performance, P, for each audio excerpt will be given by the following equation:<br />
<br />
P = ST1 * TT1 + (1 - ST1) * TT2<br />
<br />
where ST1 is the relative perceptual strength of T1 (given by the ground-truth data; varies from 0 to 1.0), TT1 is the ability of the algorithm to identify T1 to within 8%, and TT2 is the ability of the algorithm to identify T2 to within 8%. No credit will be given for tempi other than T1 and T2.<br />
<br />
The algorithm with the best average P-score will achieve the highest rank in the task. <br />
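To make the scoring concrete, here is a hedged Python sketch of the per-excerpt P-score. It is not the official evaluation code, and it assumes that either of the algorithm's two estimates may count toward each ground-truth tempo; the official evaluation may differ on such details.<br />

```python
def p_score(est_t1, est_t2, gt_t1, gt_t2, gt_st1, tol=0.08):
    """Per-excerpt performance P = ST1 * TT1 + (1 - ST1) * TT2.

    TT1 (TT2) is 1.0 if ground-truth tempo T1 (T2) is matched to
    within `tol` (8%) by one of the two estimates, else 0.0.
    Illustrative sketch only; assumes either estimate may match
    each ground-truth tempo.
    """
    def matched(gt):
        return any(abs(est - gt) <= tol * gt for est in (est_t1, est_t2))
    tt1 = 1.0 if matched(gt_t1) else 0.0
    tt2 = 1.0 if matched(gt_t2) else 0.0
    return gt_st1 * tt1 + (1.0 - gt_st1) * tt2
```

For the example output above (estimates of 60 and 180 BPM against ground truth T1 = 60, T2 = 180, ST1 = 0.7), the sketch returns 1.0; an algorithm matching only T1 would score 0.7.<br />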
<br />
<br />
== Relevant Test Collections ==<br />
We will use a collection of 160 musical excerpts for the evaluation procedure. 40 of the excerpts were taken from one of McKinney and Moelants' previous experiments (see the McKinney/Moelants ICMPC paper above).<br />
<br />
Excerpts were selected to provide:<br />
<br />
* stable tempo within each excerpt<br />
* a good distribution of tempi across excerpts<br />
* a large variety of instrumentation and beat strengths (with and without percussion)<br />
* a variation of musical styles, including many non-western styles<br />
* the presence of non-binary meters (about 20% have a ternary element and there are a few examples with odd or changing meter). <br />
<br />
We will provide 20 excerpts with ground truth data for participants to try/tune their algorithms before submission. The remaining 140 excerpts will be novel to all participants.<br />
<br />
<br />
===Practice Data===<br />
You can find it here:<br />
<br />
https://www.music-ir.org/evaluation/MIREX/data/2006/beat/<br />
<br />
User: beattrack Password: b34trx<br />
<br />
https://www.music-ir.org/evaluation/MIREX/data/2006/tempo/<br />
<br />
User: tempo Password: t3mp0<br />
<br />
The data have been uploaded in both .tgz and .zip formats.<br />
<br />
<br />
== Time and hardware limits ==<br />
Due to the potentially high number of participants in this and other audio tasks, hard limits on the runtime of submissions will be imposed.<br />
<br />
A hard limit of 8 hours will be imposed on analysis times. Submissions exceeding this limit may not receive a result.<br />
<br />
<br />
== Potential Participants ==<br />
name / email<br />
<br />
Michelle Daniels / michelledaniels (at) ucsd.edu<br />
<br />
Sebastian Böck / sebastian.boeck (at) jku.at</div>

'''2014:Task Captains''', 2014-04-10T09:38:38Z, S.boeck: added myself as onset detection captain
https://www.music-ir.org/mirex/w/index.php?title=2014:Task_Captains&diff=9996
<hr />
<div>As at ISMIR 2013, we are prepared to improve the distribution of tasks for the upcoming MIREX 2014. To do so, we need leaders to help us organize and run each task.<br />
<br />
To volunteer to lead one or more tasks, please add your name in the "Captains" column.<br />
<br />
What does it mean to lead a task?<br />
* Update wiki pages as needed<br />
* Communicate with submitters and troubleshoot submissions<br />
* Run and evaluate submissions<br />
* Publish final results<br />
<br />
Due to the proprietary nature of much of the data, the submission system, evaluation framework, and most of the datasets will continue to be hosted by IMIRSEL. However, we are prepared to provide access to task organizers to manage and run submissions on the IMIRSEL systems.<br />
<br />
<br />
{| class="wikitable" style="margin-left: 20px"<br />
!ID !! Task !! Captain(s)<br />
|-<br />
|abt<br />
|[[2014:Audio Beat Tracking]]<br />
|<br />
|-<br />
|ace<br />
|[[2014:Audio Chord Estimation]]<br />
|<br />
|-<br />
|act<br />
|[[2014:Audio Classification (Train/Test) Tasks]]<br />
|<br />
|-<br />
|acs<br />
|[[2014:Audio Cover Song Identification]]<br />
|<br />
|-<br />
|akd<br />
|[[2014:Audio Key Detection]]<br />
|<br />
|-<br />
|ame<br />
|[[2014:Audio Melody Extraction]]<br />
|<br />
|-<br />
|ams<br />
|[[2014:Audio Music Similarity and Retrieval]]<br />
|<br />
|-<br />
|aod<br />
|[[2014:Audio Onset Detection]]<br />
|Sebastian Böck<br />
|-<br />
|ate<br />
|[[2014:Audio Tempo Estimation]]<br />
|<br />
|-<br />
|atg<br />
|[[2014:Audio Tag Classification]]<br />
|<br />
|-<br />
|mf0<br />
|[[2014:Multiple Fundamental Frequency Estimation & Tracking]]<br />
|<br />
|-<br />
|qbsh<br />
|[[2014:Query by Singing/Humming]]<br />
|<br />
|-<br />
|qbt<br />
|[[2014:Query by Tapping]]<br />
|<br />
|-<br />
|scofo<br />
|[[2014:Real-time Audio to Score Alignment (a.k.a Score Following)]]<br />
|<br />
|-<br />
|sms<br />
|[[2014:Symbolic Melodic Similarity]]<br />
|<br />
|-<br />
|struct<br />
|[[2014:Structural Segmentation]]<br />
|<br />
|-<br />
|drts<br />
|[[2014:Discovery of Repeated Themes & Sections]]<br />
|<br />
|-<br />
|kgc<br />
|[[2014:Audio K-POP Genre Classification]]<br />
|IMIRSEL (Kahyun Choi, Peter Organisciak)<br />
|-<br />
|kmc<br />
|[[2014:Audio K-POP Mood Classification]]<br />
|IMIRSEL (Kahyun Choi, Peter Organisciak)<br />
|}</div>

'''2014:Task Captains''', 2014-04-07T19:38:57Z, S.boeck
https://www.music-ir.org/mirex/w/index.php?title=2014:Task_Captains&diff=9987
<hr />
<div>As at ISMIR 2013, we are prepared to improve the distribution of tasks for the upcoming MIREX 2014. To do so, we need leaders to help us organize and run each task.<br />
<br />
To volunteer to lead one or more tasks, please add your name in the "Captains" column.<br />
<br />
What does it mean to lead a task?<br />
* Update wiki pages as needed<br />
* Communicate with submitters and troubleshoot submissions<br />
* Run and evaluate submissions<br />
* Publish final results<br />
<br />
Due to the proprietary nature of much of the data, the submission system, evaluation framework, and most of the datasets will continue to be hosted by IMIRSEL. However, we are prepared to provide access to task organizers to manage and run submissions on the IMIRSEL systems.<br />
<br />
<br />
{| class="wikitable" style="margin-left: 20px"<br />
!ID !! Task !! Captain(s)<br />
|-<br />
|abt<br />
|[[2014:Audio Beat Tracking]]<br />
|<br />
|-<br />
|ace<br />
|[[2014:Audio Chord Estimation]]<br />
|<br />
|-<br />
|act<br />
|[[2014:Audio Classification (Train/Test) Tasks]]<br />
|<br />
|-<br />
|acs<br />
|[[2014:Audio Cover Song Identification]]<br />
|<br />
|-<br />
|akd<br />
|[[2014:Audio Key Detection]]<br />
|<br />
|-<br />
|ame<br />
|[[2014:Audio Melody Extraction]]<br />
|KETI<br />
|-<br />
|ams<br />
|[[2014:Audio Music Similarity and Retrieval]]<br />
|<br />
|-<br />
|aod<br />
|[[2014:Audio Onset Detection]]<br />
|Sebastian Böck<br />
|-<br />
|ate<br />
|[[2014:Audio Tempo Estimation]]<br />
|<br />
|-<br />
|atg<br />
|[[2014:Audio Tag Classification]]<br />
|<br />
|-<br />
|mf0<br />
|[[2014:Multiple Fundamental Frequency Estimation & Tracking]]<br />
|<br />
|-<br />
|qbsh<br />
|[[2014:Query by Singing/Humming]]<br />
|KETI<br />
|-<br />
|qbt<br />
|[[2014:Query by Tapping]]<br />
|CCRMA<br />
|-<br />
|scofo<br />
|[[2014:Real-time Audio to Score Alignment (a.k.a Score Following)]]<br />
|<br />
|-<br />
|sms<br />
|[[2014:Symbolic Melodic Similarity]]<br />
|<br />
|-<br />
|struct<br />
|[[2014:Structural Segmentation]]<br />
|<br />
|-<br />
|drts<br />
|[[2014:Discovery of Repeated Themes & Sections]]<br />
|Tom Collins<br />
|-<br />
|kgc<br />
|[[2014:Audio K-POP Genre Classification]]<br />
|IMIRSEL (Kahyun Choi, Peter Organisciak)<br />
|-<br />
|kmc<br />
|[[2014:Audio K-POP Mood Classification]]<br />
|IMIRSEL (Kahyun Choi, Peter Organisciak)<br />
|}</div>

'''2013:Task Captains''', 2013-06-14T02:42:30Z, S.boeck: added myself as a possible task leader for onset detection
https://www.music-ir.org/mirex/w/index.php?title=2013:Task_Captains&diff=9407
<hr />
<div>In response to discussions at ISMIR 2012, we are prepared to improve the distribution of tasks for the upcoming MIREX 2013. To do so, we need leaders to help us organize and run each task.<br />
<br />
To volunteer to lead one or more tasks, please add your name in the "Captains" column.<br />
<br />
What does it mean to lead a task?<br />
* Update wiki pages as needed<br />
* Communicate with submitters and troubleshoot submissions<br />
* Run and evaluate submissions<br />
* Publish final results<br />
<br />
Due to the proprietary nature of much of the data, the submission system, evaluation framework, and most of the datasets will continue to be hosted by IMIRSEL. However, we are prepared to provide access to task organizers to manage and run submissions on the IMIRSEL systems.<br />
<br />
<br />
{| class="wikitable" style="margin-left: 20px"<br />
!ID !! Task !! Captain(s)<br />
|-<br />
|abt<br />
|[[2013:Audio Beat Tracking]]<br />
|<br />
|-<br />
|ace<br />
|[[2013:Audio Chord Estimation]]<br />
|<br />
|-<br />
|act<br />
|[[2013:Audio Classification (Train/Test) Tasks]]<br />
|IMIRSEL<br />
|-<br />
|acs<br />
|[[2013:Audio Cover Song Identification]]<br />
|<br />
|-<br />
|akd<br />
|[[2013:Audio Key Detection]]<br />
|<br />
|-<br />
|ame<br />
|[[2013:Audio Melody Extraction]]<br />
|KETI<br />
|-<br />
|ams<br />
|[[2013:Audio Music Similarity and Retrieval]]<br />
|IMIRSEL<br />
|-<br />
|aod<br />
|[[2013:Audio Onset Detection]]<br />
|Sebastian Böck<br />
|-<br />
|ate<br />
|[[2013:Audio Tempo Estimation]]<br />
|<br />
|-<br />
|atg<br />
|[[2013:Audio Tag Classification]]<br />
|<br />
|-<br />
|mf0<br />
|[[2013:Multiple Fundamental Frequency Estimation & Tracking]]<br />
|Mert Bay<br />
|-<br />
|qbsh<br />
|[[2013:Query by Singing/Humming]]<br />
|KETI<br />
|-<br />
|qbt<br />
|[[2013:Query by Tapping]]<br />
|KETI<br />
|-<br />
|scofo<br />
|[[2013:Real-time Audio to Score Alignment (a.k.a Score Following)]]<br />
|<br />
|-<br />
|sms<br />
|[[2013:Symbolic Melodic Similarity]]<br />
|IMIRSEL<br />
|-<br />
|struct<br />
|[[2013:Structural Segmentation]]<br />
|<br />
|-<br />
|drts<br />
|[[2013:Discovery of Repeated Themes & Sections]]<br />
|Tom Collins?<br />
|}</div>S.boeck https://www.music-ir.org/mirex/w/index.php?title=2011:MIREX_2011_Poster_List&diff=8544 2011:MIREX 2011 Poster List 2011-10-27T15:24:13Z <p>S.boeck: /* Add your author names here, once for each poster along with "title of some sort" and (Task(s) covered) */</p>
<hr />
<div>==MIREX 2011 Poster Session Planning List==<br />
The MIREX 2011 Poster Session will be held Thursday, 27 October: 12:00-13:50. We will be holding the MIREX plenary meeting 11:00-12:00 on the same day.<br />
<br />
Our hosts in Miami need to know the number of posters so they can set up the room. Please add your name and the task(s) dealt with in your poster. <br />
<br />
We had many groups/individuals submit across tasks. You can choose to create one ISMIR poster bringing all your data together, or split your data up across, say, two or three posters. If you have questions, please contact me at jdownie@illinois.edu or the MIREX mailing list about task poster options.<br />
<br />
===Poster Guidelines===<br />
As in the past, MIREX will follow the ISMIR poster guidelines. All poster sessions, excluding the late-breaking/demo session on Friday the 28th, will be held in the Regency Ballroom. The dimensions of a poster board are 36in by 48in (or 48in by 36in if used in landscape mode). A presenter will be responsible for removing his/her poster at the conclusion of its session. <br />
<br />
===Possible Printing Option===<br />
A FedEx/Kinko's shop (4441 Collins Avenue) is near the conference hotel (4833 Collins Avenue), about four blocks away. The turnaround time is about four hours for walk-in orders. A Google map showing the hotel and the shop can be found [http://maps.google.com/maps/ms?f=q&source=s_q&hl=en&geocode=&aq=&vpsrc=6&gl=us&ie=UTF8&hq=fedex+kinkos+miami+beach+collins&hnear=&t=m&msa=0&msid=207227323913999833158.0004af7ffbb36f333fa3b&sll=25.733107,-80.197449&sspn=0.376694,0.473785&ll=25.8172,-80.123162&spn=0.023527,0.029612&z=15 here].<br />
<br />
==Add your author names here, once for each poster along with "title of some sort" and (Task(s) covered)==<br />
# IMIRSEL: ''MIREX 2011 Overview, Part I'' (Train Test Tasks)<br />
# IMIRSEL: ''MIREX 2011 Overview, Part II'' (All Other Tasks)<br />
# Franz de Leon, Kirk Martinez: WAIS @ MIREX 2011 (AMS and Genre Classification Tasks)<br />
# Brian McFee and Gert Lanckriet: ''Audio similarity via metric learning'' (AMS)<br />
# J. Urbano, J. Lloréns, J. Morato and S. Sánchez-Cuadrado: ''MIREX 2011 Symbolic Melodic Similarity: Sequence Alignment with Geometric Representations''<br />
# E. Benetos and S. Dixon: ''Multiple-F0 estimation and note tracking using a convolutive probabilistic model'' (multiF0 and tracking)<br />
# C. de la Bandera, L.J. Tardón, I. Barbancho and S. Sammartino: ''Audio Key detection system based on probability density functions'' (Audio Key Detection)<br />
# S. Sammartino, L.J. Tardón, I. Barbancho and C. de la Bandera: ''Audio Music Similarity - timbre, rhythm and tone'' (Audio Music Similarity)<br />
# K. Seyerlehner, M. Schedl, P. Knees, R. Sonnleitner: ''A Refined Block-Level Feature Set for Classification, Similarity, and Tag Prediction'' (Audio Music Similarity, Genre Classification, Tag Prediction)<br />
# T. Pohle, D. Schnitzer: ''Audio Music Similarity Mirex Submission: PS1'' (Audio Music Similarity)<br />
# B. Martin, P. Hanna, M. Robine, J. Allali, P. Ferraro: ''Music Structure Analysis by Iterative Detection of Harmonic Pattern Repeats'' (Structural Segmentation)<br />
# J. Salamon, J. Zapata and E. Gómez: ''Melody Extraction and Tempo Estimation: MIREX 2011'' (Audio Melody Extraction, Audio Tempo Estimation)<br />
# Katherine Ellis, Emanuele Coviello and Gert Lanckriet. ''Semantic Annotation and Retrieval of Music Using a Bag of Systems Representation.'' (Audio Tag Classification)<br />
# Yu-Ren Chien, Hsin-Min Wang, and Shyh-Kang Jeng: ''An Acoustic-Phonetic Approach to Vocal Melody Extraction'' (Audio Melody Extraction)<br />
# Emanuele Coviello, Luke Barrington, Antoni B. Chan and Gert Lanckriet: ''Time series models for semantic music annotation'' (Audio Tag Classification)<br />
# Y. Ni, M. Mcvicar, R. Santos-Rodriguez and T. De Bie, ''Harmony Progression Analyzer for Chord Estimation'' (Audio Chord Estimation)<br />
# D. Tardieu, C. Charbuillet, F. Cornu, G. Peeters, ''GMM Supervector and ARV model for music tagging and similarity'' (Audio Music Similarity, Classification, Tag Prediction)<br />
# G. Sargent, S. A. Raczynski, F. Bimbot, E. Vincent, S. Sagayama, "A music structure inference algorithm based on symbolic data analysis" (Structural Segmentation)<br />
# D. J. Jang, C. J. Song, S. Shin, J. S. Lee, S. J. Park, S. J. Jang, S. P. Lee and K. H. Seo, "Query by singing/humming system based on the combination of DTW distances for MIREX 2011" (QbH)<br />
# S. J. Park, S.-P. Lee, G. P. Nam, T. T. Luong, K. R. Park, K. Kim, M. Y. Kim, J. Y. Yoon and H. Park, "Multiple classifiers-based Query by Singing/Humming system" (QbH)<br />
# Jea-Yul Yoon, Chai-Jong Song, Seok-Pil Lee and Hochong Park, "Extracting Predominant Melody of Polyphonic Music based on Harmonic Structure" (Audio Melody Extraction)<br />
# Philippe Hamel, "Pooled Features Classifier" (Audio Tag Classification, Train/Test tasks)<br />
# Teppo E. Ahonen, Kjell Lemström, Simo Linkola: ''Compressing Quantized Tonal Centroid Vectors for Cover Song Identification'', (Cover Song Identification)<br />
# Sebastian Böck, Bernhard Niedermayer: ''Onset Detection with Recurrent Neural Networks'', (Onset Detection)<br />
<br />
==Below are some examples from MIREX 2009==<br />
<br />
# Matt Hoffman: ''Using CBA to Automatically Tag Songs'' (Audio tag classification/retrieval)<br />
# Suman Ravuri, Dan Ellis: ''The Hydra System of Cover Song Classification'' (Cover Song Identification)<br />
# Joan Serra, Massimiliano Zanin, Ralph G Andrzejak: ''Cover song retrieval by recurrence quantification and unsupervised set detection'' (Cover Song Identification)<br />
# MTG Team: "Music Type Groupers (MTG): Generic Music Classification Algorithms" (Audio Genre Classification, Mood Classification, Artist Identification, Classical Composer Identification)<br />
# R. Jang: "Poster #2" (placeholder to get the auto-counter to increment)<br />
# R. Jang: "Poster #3" (placeholder to get the auto-counter to increment)</div>S.boeck https://www.music-ir.org/mirex/w/index.php?title=2010:Audio_Similarity_2010_Graders&diff=7337 2010:Audio Similarity 2010 Graders 2010-07-18T10:11:46Z <p>S.boeck: /* Sign Up Area */</p>
<hr />
<div>=AMS 2010 Graders=<br />
<br />
Welcome to the AMS grader sign-up page. Please give us your name and email contact information. If you obscure your email, please make it relatively obvious to us how to parse the address.<br />
<br />
<b>Template:</b> Name. Location. <Email> <br />
<br />
<b>Sample:</b> J. Stephen Downie. Illinois, USA. <jdownie@illinois.edu><br />
<br />
=Special Comments=<br />
We are under extraordinary time constraints this year because ISMIR 2010 <br />
is being held starting 9 August and we need to have all the final <br />
results calculated and posted by our 2 August target date (fingers crossed).<br />
<br />
We hope to open the Evalutron 6000 (E6K) v.2 grading system by 20 <br />
July. To meet our 2 August goal, we must have all the AMS and SMS <br />
similarity grades entered into the E6K by 27 July. YES, THIS GIVES US <br />
ONLY ONE WEEK! So, if you are kind enough to sign up to be a grader, <br />
please understand that we really need you to complete your assigned grading <br />
by 27 July.<br />
<br />
If you are an SMS or AMS participant, we ask that you do what you can to <br />
encourage adults over 18 years of age to be graders.<br />
<br />
We are looking for <b>50</b> graders for AMS this year. If we make our quota of 50 graders, each grader will be responsible for two query lists. If we fall short and get around 34 graders, we will ask each grader to grade 3 queries. Even in this worst-case scenario, we expect the grading process to take between 2.5 and 3 hours (or less) for each grader.<br />
<br />
<br />
This year the query lists seem to be moderate in length, so tackling two query lists should not be too onerous. For safety's sake, we would like to see, say, an extra five or so names on the sign-up sheet below. The "extra names" on the sign-up sheet will be considered "back-up" graders. We will assign grading tasks in the order of the names as they appear below.<br />
<br />
=Sign Up Area=<br />
#Martin Ariel Hartmann. Buenos Aires, Argentina. <martin.hartmann@jyu.fi><br />
#Sally Jo Cunningham. Hamilton, New Zealand. <sallyjo@cs.waikato.ac.nz><br />
#Matt Hoffman. New York, USA. <mdhoffma@cs.princeton.edu><br />
#Thierry Bertin-Mahieux. New York, USA. <tb2332@columbia.edu><br />
#Dominik Schnitzer, Vienna, Austria <dominik.schnitzer (dumdidum) ofai.at><br />
#Ming Li, Beijing, China, <liming . ioa [AT] gmail dot com><br />
#Charlie Inskip, London UK <c.inskip@city.ac.uk><br />
#Ana M. Barbancho, Malaga, Spain < abp [AT] ic dot uma dot es><br />
#Arthur Flexer, Vienna, Austria <arthur.flexer [AT] ofai.at><br />
#Simone Sammartino, Málaga, Spain <ssammartino@ic.uma.es><br />
#Cristina de la Bandera Cascales, Málaga, Spain <cdelabandera@ic.uma.es><br />
#Peter Knees, Linz, Austria <peter.knees (AT) jku.at><br />
#Steve Tjoa. Maryland, USA. <kiemyang at umd edu><br />
#Michael Mandel, Montreal, QC <mandelm at iro umontreal ca><br />
#Perfecto Herrera, Barcelona, Spain <perfecto dot herrera [AT] upf dot es><br />
#Aggelos Gkiokas, Athens, Greece <agkiokas@ilsp.gr><br />
#Dmitry Bogdanov, Barcelona, Spain <dmitry dot bogdanov [AT] upf dot es><br />
#Ewa Lukasik, Poznan, Poland <ewa.lukasik@cs.put.poznan.pl><br />
#Tim Pohle, Linz, Austria <tim dot pohle at jku.at ><br />
#Tom Collins. Milton Keynes, UK. <t dot e dot collins at open dot ac dot uk><br />
#Julián Urbano, Madrid, Spain < jurbano at inf dot uc3m dot es ><br />
#Andrew Hankinson, Montreal, QC <andrew.hankinson@mail.mcgill.ca><br />
#Miranda Callahan, Mountain View, CA <mirandac at google dot com><br />
#Elizabeth German, Illinois, USA <egerman@illinois.edu><br />
#Till Crueger, Bonn, Germany <Till.Crueger@gmx.net><br />
#Randy Louque. Hamilton, New Zealand. <rclouque@yahoo.com><br />
#Megan Winget. Austin, Texas, USA <meganATischool.utexas.edu><br />
#Laurent Pugin, Geneva, Switzerland <lxpugin@gmail.com><br />
#Franz de Leon. Southampton, UK <fadl1d09@ecs.soton.ac.uk><br />
#Maria Theresa de Leon, Southampton, UK <mtdl1c09@ecs.soton.ac.uk><br />
#Sebastian Böck, Munich, Germany <sb @ minimoog.org></div>S.boeck