2025:RenCon

From MIREX Wiki
RenCon 2025: Expressive Performance Rendering Competition

Welcome to the official MIREX wiki page for '''RenCon 2025''', a new task on expressive musical rendering from symbolic scores. This task is part of the MIREX 2025 evaluation campaign and will culminate in a live contest at '''ISMIR 2025''' in Daejeon, Korea. For updates, templates, and examples, please visit our official site: https://ren-con2025.vercel.app

Here is a short summary of the important information:
  
== Overview ==

Welcome to the official site for '''RenCon 2025''', an international challenge in which researchers and developers submit systems capable of rendering expressive musical performances from symbolic scores. This year, we are delighted to host it alongside '''ISMIR 2025''', under the '''MIREX Tasks protocol'''.

The RenCon competition has a rich history, with contests recorded in 2002, 2003, 2004, 2005, 2008 and 2011 ([https://www.researchgate.net/publication/228715822_The_Second_Rencon_Performance_Contest_Panel_Discussion_and_the_Future 1], [https://www.nime.org/proceedings/2004/nime2004_120.pdf 2], [https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=e611d7c4f9df63d99d5760ca88a7107dca945e05 3], [https://www.researchgate.net/publication/228715816_Rencon_Performance_Rendering_Contest_for_Automated_Music_Systems 4], [http://smc.afim-asso.org/smc11/papers/smc2011_120.pdf 5]), held jointly with conferences such as SMC, NIME, ICMPC and IJCAI. Even before the term "AI" was in wide use, RenCon served as a platform for researchers to showcase their work in expressive performance rendering.
However, little of its past can be traced today beyond an old [http://renconmusic.org site]. We hope to revive this tradition with RenCon 2025, coinciding with the renewed global focus on performance during this year's Chopin Piano Competition.

== Task Description ==

Expressive Performance Rendering challenges participants to develop systems capable of rendering expressive musical performances from symbolic scores in MusicXML format.

This year, we limit the task to be '''solo-piano''' specific. We accept systems that generate symbolic (MIDI) or audio (WAV) renderings, and the output should contain human-like expressive deviations from the MusicXML score.

Similar to the AI Song Contest, the evaluation of expressive rendering is subjective and requires human judges to assess the quality of the generated performances. We therefore propose a two-phase competition structure, described in the next section, relying on audience voting to determine the winner.
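
To make the input/output contract concrete, here is a toy sketch (not a competitive system): it parses a MusicXML score and writes a MIDI rendering whose velocities follow a crude phrase arch. It assumes the third-party music21 package, and the file names are placeholders.

<syntaxhighlight lang="python">
# Toy illustration of the task's I/O contract: MusicXML in, MIDI out.
# A real system would model timing, articulation, and pedaling as well;
# here we only shape velocities with a single global dynamics arch.
import math
from music21 import converter

score = converter.parse("score.musicxml")  # placeholder input file
notes = list(score.flatten().notes)
for i, n in enumerate(notes):
    # Soft-loud-soft arch as a stand-in for learned dynamics.
    arch = math.sin(math.pi * i / max(len(notes) - 1, 1))
    n.volume.velocity = int(48 + 48 * arch)

score.write("midi", fp="rendering.mid")  # placeholder output file
</syntaxhighlight>
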
== Competition Structure ==

RenCon 2025 is structured in two phases:

* '''Phase 1 – Preliminary Round (Online)'''
Participants submit performances of assigned and free-choice pieces. Each submission includes symbolic or audio renderings and a technical report. The absence of real-time constraints allows for broader participation and diversity in evaluation.
The submission period is open from '''May 30, 2025''' to '''Aug 20, 2025''', following the MIREX submission guidelines.
After the submission deadline, the preliminary round page will be finalized with the list of participants and their submissions, and the online evaluation will take place.

* '''Phase 2 – Live Contest at ISMIR (Daejeon, Korea)'''
Top systems from the preliminary round will be invited to render a surprise piece live at ISMIR, using their system in real time.
The live contest is open to all ISMIR attendees, as well as the general public if the venue allows. The audience will listen to the live performances and vote for their favorite system. The winner will be announced at the end of the conference.
  
 
== Timeline ==
* '''May 30, 2025''': Preliminary round submission opens; online judge sign-up opens
* '''Aug 20, 2025''': Preliminary round submission closes
* '''Aug 25, 2025''': Preliminary round page finalized and online evaluation begins
* '''Sept 10, 2025''': Online evaluation ends, results announced, and top systems invited to the live contest
* '''Sept 2*, 2025 (TBD)''': Live contest at ISMIR 2025 in Daejeon, Korea
  
 
== Submission Requirements ==
The following items are required for submission:

* Code and checkpoint of the system.
* Symbolic (MIDI) or audio (WAV) renderings of the designated pieces:
** Required pieces (choose 2 out of 4); click to download the MusicXML files:
*** [https://ren-con2025.vercel.app/static/pieces/CAPRICCIO_en_sol_mineur_HWV_483_-_Handel.mxl Handel: Capriccio in G minor, HWV 483]
*** [https://ren-con2025.vercel.app/static/pieces/32_Variations_in_C_minor_WoO_80_First_5.mxl Beethoven: 32 Variations in C minor, WoO 80 - Theme and the first 5 variations]
*** [https://ren-con2025.vercel.app/static/pieces/12_Romances_Op.21__Sergei_Rachmaninoff_Zdes_khorosho_-_Arrangement_for_solo_piano.mxl Rachmaninoff: Здесь хорошо (How Fair This Spot), Op. 21, No. 7 (transcribed for solo piano)]
*** [https://ren-con2025.vercel.app/static/pieces/With_Dog-teams.mxl Amy Beach: Eskimos, Op. 64, No. 4 - With Dog-Teams]
** One free-choice piece (rendering must be shorter than 1 minute 30 seconds)
** A total of 3 pieces, with a maximum of 5 minutes of rendered material overall.
* Technical report: please use the template available at [https://ren-con2025.vercel.app/static/RenCon%20Submission%20Report%20Template.zip Submission Report Template (ZIP)]

Final submissions must be made through the [[2025_Submission_Guidelines|MIREX submission system]], with a maximum zip file size of 5 GB.
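
As a convenience, here is a minimal packaging sketch that zips a submission folder and enforces the 5 GB limit. The directory layout and file names are hypothetical; only the size limit comes from the submission guidelines.

<syntaxhighlight lang="python">
# Zip a submission folder (code, checkpoints, renderings, report) and
# verify it stays under the 5 GB MIREX upload limit.
import zipfile
from pathlib import Path

MAX_BYTES = 5 * 1024**3  # 5 GB limit from the submission guidelines

def pack_submission(src_dir: str, out_zip: str) -> None:
    src = Path(src_dir)
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in sorted(src.rglob("*")):
            if f.is_file():
                zf.write(f, f.relative_to(src))
    size = Path(out_zip).stat().st_size
    if size > MAX_BYTES:
        raise ValueError(f"{out_zip} is {size / 1024**3:.2f} GB, over the 5 GB limit")

pack_submission("rencon_submission", "rencon_submission.zip")  # hypothetical folder
</syntaxhighlight>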

== Data Format of Submission ==

'''Symbolic submissions''': MIDI format, with all sound events on program number 1 (solo piano). Track and channel are unrestricted.

'''Audio submissions''': WAV format, 44.1 kHz, 16-bit PCM.
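
For a quick self-check against these formats, here is a minimal validation sketch. It assumes the third-party mido package for MIDI parsing (the wave module is from the Python standard library), and the file names are placeholders. Note that mido reports program numbers 0-indexed, so GM program 1 (piano) appears as program=0.

<syntaxhighlight lang="python">
# Check output files against the stated submission formats.
import wave
import mido

def check_midi(path: str) -> None:
    # All program changes should select the piano program
    # (GM program 1, i.e. program=0 in mido's 0-indexed view).
    for track in mido.MidiFile(path).tracks:
        for msg in track:
            if msg.type == "program_change" and msg.program != 0:
                raise ValueError(f"{path}: non-piano program {msg.program}")

def check_wav(path: str) -> None:
    # Audio must be 44.1 kHz, 16-bit PCM.
    with wave.open(path, "rb") as w:
        if w.getframerate() != 44100 or w.getsampwidth() != 2:
            raise ValueError(f"{path}: expected 44.1 kHz, 16-bit PCM")

check_midi("rendering.mid")  # placeholder file names
check_wav("rendering.wav")
</syntaxhighlight>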

== Training Datasets ==

Participants are welcome to train their systems on any dataset, including publicly available corpora, proprietary collections, or internally curated material. There are no restrictions on dataset origin, but we ask for full transparency.

Some suggested datasets for training and validation include:
* '''[https://github.com/fosfrancesco/asap-dataset ASAP]''': A large dataset of classical piano performances sourced from MAESTRO, including corresponding MIDI and audio. [https://github.com/CPJKU/asap-dataset (n)ASAP] provides score-performance alignment.
* '''[https://github.com/tangjjbetsy/ATEPP ATEPP]''': A large dataset of transcribed MIDI expressive piano performances, organized by virtuoso performer. However, only around half of the dataset includes MusicXML scores.
* '''[https://github.com/CPJKU/vienna4x22 VIENNA 4x22]''': A small-scale dataset of 4 pieces with 22 different interpretations each, including audio, MIDI, and fine-grained alignment.
* '''[https://github.com/huispaty/batik_plays_mozart Batik-plays-Mozart]''': A finely aligned performance MIDI dataset of Mozart works played by Roland Batik.
  
Please clearly describe the datasets used for training and validation in your technical report. Important details to include are:
* Dataset name or source
* Size and number of pieces
* Instrumentation and expressive characteristics
* Data format (MIDI, audio, etc.)
* Any preprocessing, cleaning, or augmentation steps applied

This helps the jury and the research community understand the representational capacity and limitations of each submission.
  
 
== Post-Processing ==

To ensure fair evaluation, all post-processing applied to the preliminary round output must be documented in the submission report. Depending on your system type, please include the following:

* '''Symbolic Output Systems''': If your model generates symbolic MIDI output and you submit the sonified audio track, describe how the audio was derived. Include soundfont names, software synthesizers used (e.g., FluidSynth, Logic Pro), or player piano models; a minimal sonification sketch appears at the end of this section.
** If you would like to submit the MIDI output directly and have us (the organizer team) handle sonification, please contact [mailto:huan.zhang@qmul.ac.uk huan.zhang@qmul.ac.uk] during your submission. We would likely arrange a Disklavier recording at the Vienna office of the Institute of Computational Perception (CPJKU).
* '''Audio Output Systems''': If your model outputs audio directly, describe any enhancement steps applied to the model's output, such as EQ, reverb, compression, or noise reduction.
* '''Controllability or Interventions''': Clarify whether the output is influenced by human choices, such as selected tempo, dynamics range, segmentation, or annotated phrasing.
* '''MIDI Cleanup''': If symbolic outputs were manually edited (quantization, pedaling, etc.) before submission, this should be documented.

Submissions should aim for minimal human intervention. Manual correction is allowed only if it is well documented and justified in the report.
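
As an example of the kind of sonification to document, here is a minimal sketch that renders MIDI to WAV with the FluidSynth command-line tool, driven from Python. The soundfont path and file names are placeholders; whichever synthesizer and soundfont you actually use is what belongs in your report.

<syntaxhighlight lang="python">
# Render a MIDI file to a 44.1 kHz WAV file using the FluidSynth CLI.
import subprocess

subprocess.run(
    [
        "fluidsynth", "-ni",
        "/path/to/piano.sf2",     # soundfont (placeholder path)
        "performance.mid",        # your system's MIDI output
        "-F", "performance.wav",  # render to file instead of the audio device
        "-r", "44100",            # sample rate required by the task
    ],
    check=True,
)
</syntaxhighlight>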
  
 
== Organizers ==
* Huan Zhang (Task Captain, Queen Mary University of London)
* Junyan Jiang (New York University)
* Simon Dixon (Queen Mary University of London)
* Gus Xia (MBZUAI)
* Akira Maezawa (Yamaha)

Contact: [mailto:huan.zhang@qmul.ac.uk huan.zhang@qmul.ac.uk]
