MIREX Wiki: 2014:Real-time Audio to Score Alignment (a.k.a Score Following), revised 2014-06-09 by Fjrodrig (edit to the "Potential Participants" section)
<hr />
<div>''Real-time Audio to Score Alignment'', also known as ''Score Following''<br />
<br />
== Description ==<br />
Score Following is the real-time alignment of an incoming music signal to its music score. The music signal can be symbolic (MIDI) or audio, but we will concentrate here on audio following, unless candidates want their symbolic followers to be evaluated and can propose reference data. <br />
<br />
This page describes a proposal for evaluation of score following systems.<br />
<br />
<br />
Submissions will be required to estimate alignment precision according to the indexed times. In order for your system to participate, please specify the type of alignment (monophonic, polyphonic), the type of training, and the real-time performance; given enough submissions, evaluation will also be separated into two domains for symbolic and audio systems. Note that we also accept systems that do not run in real time in practice, as long as their algorithm is on-line, i.e. it makes no use of global knowledge of the input.<br />
<br />
<br />
=== Task specific mailing list ===<br />
In the past we have used a specific mailing list for the discussion of this task and related tasks. This year, however, we are asking that all discussions take place on the MIREX [https://mail.lis.illinois.edu/mailman/listinfo/evalfest "EvalFest" list]. If you have a question or comment, simply include the task name in the subject heading.<br />
<br />
== Data == <br />
46 recordings and their corresponding MIDI representations of the score will be used in the evaluation. These 46 excerpts were extracted from 4 distinct musical pieces.<br />
Recordings are in 44.1 kHz, 16-bit WAV format. The reference scores are in MIDI format.<br />
<br />
Zhiyao Duan and Prof. Bryan Pardo contributed another polyphonic dataset. This dataset consists of 10 pieces of four-part J.S. Bach chorales. The audio files were performed by a quartet of instruments: violin, clarinet, saxophone and bassoon. The ground-truth alignment between audio and MIDI was generated by human annotation.<br />
<br />
Andreas Arzt contributed a heavily polyphonic dataset consisting of 3 piano performances of the Prelude in G minor op. 23-5 by Sergei Rachmaninoff. The 3 performances (by Ashkenazy, Gavrilov and Shelley) differ heavily in their style of interpretation. The ground truth data was compiled by extensive manual correction of off-line alignments. ''Due to an oversight this data was not used for the evaluation runs.''<br />
<br />
== Evaluation procedures ==<br />
<br />
The evaluation procedure consists of running score followers on a database of audio aligned to scores, where each item contains the score and the performance audio (used for the system call) and a reference alignment (used for evaluation). <br />
See http://ismir2007.ismir.net/proceedings/ISMIR2007_p315_cont.pdf for details.<br />
<br />
See the details of 2006 proposal on the [[2006:Score_Following_Proposal|MIREX 2006 Wiki]]<br />
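The evaluation idea above can be sketched in a few lines. This is only an illustration of the general approach, not the official MIREX evaluation code: the file layout follows the result format described below, and the metric names and the error threshold are assumptions.

```python
# Hypothetical sketch of the alignment evaluation described above.
# Result and reference files are assumed to use the 4-column format
# defined in the I/O section: estimated onset (ms), detection time (ms),
# score onset (ms), MIDI note number. The score onset acts as the
# unique identifier linking result notes to reference notes.

def load_alignment(path):
    """Parse a result/reference file into {score_onset_ms: performance_onset_ms}."""
    notes = {}
    with open(path) as f:
        for line in f:
            fields = line.split()
            if len(fields) < 4:
                continue  # skip blank or malformed lines
            est_onset, _det_time, score_time = map(float, fields[:3])
            notes[score_time] = est_onset
    return notes

def evaluate(result_path, reference_path, threshold_ms=2000.0):
    """Match result notes to reference notes by score onset and report
    the fraction aligned within `threshold_ms` (an assumed threshold),
    plus the mean absolute alignment error over matched notes."""
    result = load_alignment(result_path)
    reference = load_alignment(reference_path)
    errors = [abs(result[t] - ref_onset)
              for t, ref_onset in reference.items() if t in result]
    aligned = sum(1 for e in errors if e <= threshold_ms)
    return {
        "precision": aligned / len(reference) if reference else 0.0,
        "mean_abs_error_ms": sum(errors) / len(errors) if errors else None,
    }
```

Notes missing from the result file simply count against precision, which matches the intent of evaluating followers that may lose track of the performance.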
<br />
<br />
=== I/O Format ===<br />
Each system should conform to the following format:<br />
<br />
''doScofo.sh "/path/to/audiofile.wav" "/path/to/midi_score_file.mid" "/path/to/result/filename.txt"''<br />
<br />
The stdout and stderr will be logged.<br />
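For illustration, a call of this form with stdout/stderr logging could be wrapped as follows. The script name ''doScofo.sh'' and argument order come from the calling format above; the log-file names and the driver itself are assumptions, not part of the actual MIREX harness.

```python
# Hypothetical driver that runs a submission with the calling format above
# and captures its stdout and stderr to log files.
import subprocess

def run_follower(audio, score, result, log_prefix="scofo"):
    """Invoke the submission's entry-point script and log its output."""
    proc = subprocess.run(
        ["./doScofo.sh", audio, score, result],
        capture_output=True, text=True,
    )
    # Log file naming is an assumption for this sketch.
    with open(f"{log_prefix}.stdout.log", "w") as out:
        out.write(proc.stdout)
    with open(f"{log_prefix}.stderr.log", "w") as err:
        err.write(proc.stderr)
    return proc.returncode
```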
<br />
"/path/to/result/filenam.txt" should be have one line per detected note with the following 4 columns<br />
<br />
1. estimated note onset time in performance audio file (ms)<br />
2. detection time relative to performance audio file (ms)<br />
3. note start time in score (ms)<br />
4. MIDI note number in score (int) <br />
<br />
Example :<br />
''1800 1800 0 75''<br />
''2021 2022 187.5 73''<br />
''... ... ... ...''<br />
<br />
Remarks: The third column, the detected note's start time in the score, serves as the unique identifier of a note (or chord, for polyphonic scores) that links it to the ground-truth onset of that note in the reference alignment files. The fourth column, the MIDI note number, is provided only for convenience, to help you find your way around the result files if you know the melody in MIDI.<br />
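As a concrete illustration of the output format, a follower could write its detections like this. The helper function is hypothetical; the data values are the ones from the example above.

```python
# Hypothetical helper that writes detections in the 4-column result format:
# estimated onset (ms), detection time (ms), score onset (ms), MIDI note number.
def write_result_file(path, detections):
    """detections: iterable of (est_onset_ms, det_time_ms, score_ms, midi_note)."""
    with open(path, "w") as f:
        for est, det, score, note in detections:
            # %g-style formatting drops trailing zeros (1800, 187.5, ...)
            f.write(f"{est:g} {det:g} {score:g} {note}\n")

write_result_file("result.txt", [
    (1800, 1800, 0, 75),      # note detected exactly at its estimated onset
    (2021, 2022, 187.5, 73),  # detection reported 1 ms after the onset estimate
])
```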
<br />
<br />
=== Packaging submissions ===<br />
All submissions should be statically linked to all libraries (the presence of <br />
dynamically linked libraries cannot be guaranteed).<br />
<br />
All submissions should include a README file with the following <br />
information:<br />
<br />
* Command line calling format for all executables and an example formatted set of commands<br />
* Number of threads/cores used or whether this should be specified on the command line<br />
* Expected memory footprint<br />
* Expected runtime<br />
* Any required environments (and versions), e.g. python, java, bash, matlab.<br />
<br />
== Time and hardware limits ==<br />
Due to the potentially high number of participants in this and other audio tasks,<br />
hard limits on the runtime of submissions are specified. <br />
<br />
A hard limit of 12 hours will be imposed on the total runtime of algorithms. Submissions that exceed this runtime may not receive a result.<br />
<br />
== Potential Participants ==<br />
Francisco J. Rodriguez-Serrano, Pedro Vera-Candeas / fjrodrig@ujaen.es<br />
<br />
name / email</div>
MIREX Wiki: 2010:MIREX 2010 Poster List, revised 2010-07-31 by Fjrodrig (edit to the author-names section)
<hr />
<div>==MIREX 2010 Poster Session Planning List==<br />
The MIREX 2010 Poster Session will be held Wednesday, 11 August: 16:00 - 17:45. We will be holding the MIREX plenary meeting 13:00-14:00 as a working lunch on the same day.<br />
<br />
Our hosts in Utrecht need to know the number of posters so they can set up the room. Please add your name and the task(s) dealt with in your poster. <br />
<br />
We had many groups/individuals submit across tasks. You can choose to create one ISMIR poster bringing all your data together or can split up your data across, say, two or three posters. If you have questions, please contact me at jdownie@illinois.edu or the MIREX mailing list about task poster options.<br />
<br />
As a reminder, the MIREX posters need to follow the [http://ismir2010.ismir.net/information-for-authors/information-for-presenters/ ISMIR 2010 poster guidelines] (i.e., A0, portrait orientation).<br />
<br />
==Add your author names here, once for each poster along with "title of some sort" and (Task(s) covered)==<br />
# IMIRSEL: ''MIREX 2010 Overview, Part I'' (Train Test Tasks)<br />
# IMIRSEL: ''MIREX 2010 Overview, Part II'' (All Other Tasks)<br />
# Andreas Arzt and Gerhard Widmer: "Real-time Music Tracking using Tempo-aware On-line Dynamic Time Warping" (Real-time Audio to Score Alignment (a.k.a Score Following))<br />
# Pasi Saari and Olivier Lartillot: "SubEnsemble - Classification framework based on the Ensemble Approach and Feature Selection" (Train Test Tasks)<br />
# Gabriel Sargent, Frédéric Bimbot and Emmanuel Vincent: "Structural segmentation of songs using multi-criteria generalized likelihood ratio and regularity constraints" (Structural Segmentation Task)<br />
# Emmanouil Benetos and Simon Dixon: "Multiple fundamental frequency estimation using spectral structure and temporal evolution rules" (Multiple Fundamental Frequency Estimation & Tracking Task)<br />
# J. Urbano, J. Lloréns, J. Morato and S. Sánchez-Cuadrado: ''Local Alignment with Geometric Representations'' (Symbolic Melodic Similarity)<br />
# F.J. Rodriguez-Serrano, P. Vera-Candeas, P. Cabanas-Molero, J.J. Carabias-Orti, N. Ruiz-Reyes: ''AM Sinusoidal Modeling for Onset Detection'' (Audio Onset Detection)<br />
# R.Mata-Campos, F.J.Rodriguez-Serrano, P.Vera-Candeas, J.J.Carabias-Orti, F.J.Canadas-Quesada: ''Beat Tracking improved by AM Sinusoidal Modeled Onsets'' (Audio Beat Tracking)<br />
# F.J. Rodriguez-Serrano, P. Vera-Candeas, J.J. Carabias-Orti, P. Cabanas-Molero, N. Ruiz-Reyes: ''Real time audio to score alignment based on NLS multipitch estimation'' (Real-time Audio to Score Alignment (a.k.a Score Following))<br />
# F.J. Cañadas-Quesada, F. Rodríguez-Serrano, P. Vera-Candeas, N. Ruiz-Reyes and J. Carabias-Orti: ''Multiple Fundamental Frequency Estimation & Tracking in Polyphonic Music for MIREX 2010'' (Multiple Fundamental Frequency Estimation & Tracking)<br />
<br />
==Below are some examples from MIREX 2009==<br />
<br />
# Matt Hoffman: ''Using CBA to Automatically Tag Songs'' (Audio tag classification/retrieval)<br />
# Suman Ravuri, Dan Ellis: ''The Hydra System of Cover Song Classification'' (Cover Song Identification)<br />
# Joan Serra, Massimiliano Zanin, Ralph G Andrzejak: ''Cover song retrieval by recurrence quantification and unsupervised set detection'' (Cover Song Identification)<br />
# MTG Team: "Music Type Groupers (MTG): Generic Music Classification Algorithms" (Audio Genre Classification, Mood Classification, Artist Identification, Classical Composer Identification)<br />
# R. Jang: "Poster #2" (placeholder to get the auto-counter to increment)<br />
# R. Jang: "Poster #3" (placeholder to get the auto-counter to increment)</div>