RenCon 2025: Expressive Performance Rendering Competition
Welcome to the official MIREX wiki page for RenCon 2025, a new task on expressive musical rendering from symbolic scores. This task is part of the MIREX 2025 evaluation campaign and will culminate with a live contest at ISMIR 2025 in Daejeon, Korea. For updates, templates, and examples, please visit our official site: https://ren-con2025.vercel.app
Here is a short summary of the important information:
Overview
Welcome to the official site for RenCon 2025, an international challenge where researchers and developers submit systems capable of rendering expressive musical performances from symbolic scores. This year, we are delighted to host it alongside ISMIR 2025 under the MIREX Tasks protocol.
The RenCon competition has a rich history, with recorded contests in 2002, 2003, 2004, 2005, 2008, and 2011 (1, 2, 3, 4, 5), held jointly with conferences such as SMC, NIME, ICMPC, and IJCAI. Even before the term "AI" was widely used, RenCon was a platform for researchers to showcase their work in expressive performance rendering. However, little of its past can be traced today beyond an old website. We hope to revive this tradition with RenCon 2025, coinciding with the renewed global focus on performance during this year's Chopin Piano Competition.
Task Description
Expressive Performance Rendering is a task that challenges participants to develop systems capable of rendering expressive musical performances from symbolic scores in MusicXML format.
This year, the task is limited to solo piano. We accept systems that generate symbolic (MIDI) or audio (WAV) renderings; the output should exhibit human-like expressive deviations from the MusicXML score.
As with the AI Song Contest, the evaluation of expressive rendering is subjective and requires human judges to assess the quality of the generated performances. We therefore propose a two-phase competition structure, described in the next section, that relies on audience voting to determine the winner.
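To make the expected input and output concrete, here is a minimal sketch (not an official baseline, and not part of the task requirements): it parses a MusicXML score with music21 and writes a solo-piano MIDI rendering with small random timing and dynamics deviations using pretty_midi. The file names, fixed tempo, and random deviation model are illustrative assumptions only.

import random
import pretty_midi
from music21 import converter

SECONDS_PER_QUARTER = 0.5                      # assumed base tempo of 120 BPM
score = converter.parse("score.musicxml")      # illustrative input path

pm = pretty_midi.PrettyMIDI()
piano = pretty_midi.Instrument(program=0)      # General MIDI acoustic grand piano

for element in score.flatten().notes:          # Note and Chord objects in score order
    onset = float(element.offset) * SECONDS_PER_QUARTER
    duration = float(element.quarterLength) * SECONDS_PER_QUARTER
    for pitch in element.pitches:              # a Note has one pitch, a Chord several
        start = max(0.0, onset + random.uniform(-0.02, 0.02))   # micro-timing deviation
        velocity = max(1, min(127, int(random.gauss(70, 10))))  # dynamic shaping
        piano.notes.append(pretty_midi.Note(velocity=velocity, pitch=pitch.midi,
                                            start=start, end=start + duration))

pm.instruments.append(piano)
pm.write("rendering.mid")

A real system would of course replace the random deviations with learned tempo, articulation, and dynamics decisions.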
Competition Structure
RenCon 2025 is structured in two phases:
- Phase 1 – Preliminary Round (Online)
Submit performances of assigned and free-choice pieces, including symbolic or audio renderings and a technical report. The absence of real-time constraints allows for broader participation and more diverse evaluation. The submission period runs from May 30, 2025 to Aug 20, 2025, following the MIREX submission guidelines. After the submission deadline, the preliminary round page will be finalized with the list of participants and their submissions, and the online evaluation will take place.
- Phase 2 – Live Contest at ISMIR (Daejeon, Korea)
Top systems from the preliminary round will be invited to render a surprise piece live at ISMIR, using their system in real time. The live contest is open to all ISMIR attendees, as well as the general public if the venue allows. The audience will be able to listen to the live performances and vote for their favorite system. The winner will be announced at the end of the conference.
Timeline
- May 30, 2025: Preliminary round submission opens, online judge sign-up opens
- Aug 20, 2025: Preliminary round submission closes
- Aug 25, 2025: Preliminary round page finalized and online evaluation begins
- Sept 10, 2025: Online evaluation ends, results announced, and top systems invited to live contest
- Sept 2*, 2025 (TBD): Live contest at ISMIR 2025 in Daejeon, Korea
Submission Requirements
The following items are required for submission:
- Code and checkpoint of the system.
- Symbolic (MIDI) or audio (wav) renderings of designated pieces:
- Required pieces (choose 2 out of 4); the MusicXML files are available for download on the official site.
- One free-choice piece (rendering must be less than 1 minute 30 seconds)
- A total of 3 pieces, with a maximum of 5 minutes of rendering in total.
- Technical report: Please use the template available at Submission Report Template (ZIP)
Final submissions must be made through the MIREX submission system, with a maximum zip file size of 5 GB.
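An illustrative way to double-check the total duration limit before uploading (a sketch assuming pretty_midi and soundfile are installed; the file names below are placeholders):

import pretty_midi
import soundfile as sf

def duration_seconds(path):
    # MIDI duration from the last note-off, audio duration from the file header.
    if path.lower().endswith(".mid"):
        return pretty_midi.PrettyMIDI(path).get_end_time()
    return sf.info(path).duration

files = ["required_piece_1.mid", "required_piece_2.mid", "free_choice.wav"]
total = sum(duration_seconds(f) for f in files)
assert total <= 5 * 60, f"total rendering time {total:.1f}s exceeds the 5-minute limit"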
Data Format of Submission
Symbolic submissions: MIDI format, with all sound events on program number 1 (solo piano). Track and channel are unrestricted.
Audio submissions: WAV format, 44.1 kHz, 16-bit PCM
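A small pre-submission check along these lines may help catch format issues (a sketch, not an official validator; it assumes "program number 1" refers to 1-based General MIDI numbering, i.e. program 0 in zero-indexed libraries such as pretty_midi):

import pretty_midi
import soundfile as sf

def check_midi(path):
    pm = pretty_midi.PrettyMIDI(path)
    for inst in pm.instruments:
        assert not inst.is_drum, "drum tracks are not expected for solo piano"
        assert inst.program == 0, f"unexpected GM program {inst.program} (0 = acoustic grand)"

def check_wav(path):
    info = sf.info(path)
    assert info.samplerate == 44100, f"expected 44.1 kHz, got {info.samplerate} Hz"
    assert info.subtype == "PCM_16", f"expected 16-bit PCM, got {info.subtype}"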
Training Datasets
Participants are welcome to train their systems on any dataset, including publicly available corpora, proprietary collections, or internally curated material. There are no restrictions on dataset origin, but we ask for full transparency.
Some suggested datasets for training and validation include:
- ASAP: A large dataset of classical piano performances sourced from MAESTRO, including corresponding MIDI and audio. (n)ASAP provides score-performance alignment.
- ATEPP: A large dataset of transcribed expressive piano performance MIDI, organized by virtuoso performer. However, only around half of the dataset has corresponding MusicXML scores.
- VIENNA 4x22: A small-scale dataset of 4 pieces with 22 different interpretations each, including audio, MIDI, and fine-grained alignments.
- Batik-plays-Mozart: Fine-aligned performance MIDI dataset of Mozart played by Roland Batik.
Please clearly describe the datasets used for training and validation in your technical report. Important details to include are:
- Dataset name or source
- Size and number of pieces
- Instrumentation and expressive characteristics
- Data format (MIDI, audio, etc.)
- Any preprocessing, cleaning, or augmentation steps applied
This helps the jury and the research community understand the representational capacity and limitations of each submission.
Post-Processing
To ensure fair evaluation, all post-processing applied to the preliminary round output must be documented in the submission report. Depending on your system type, please include the following:
- Symbolic Output Systems: If your model generates symbolic MIDI output and you submit the sonified audio track, describe how the audio was derived (see the sonification sketch at the end of this section). Include soundfont names, software synthesizers used (e.g., FluidSynth, Logic Pro), or player piano models.
- If you would like to submit the MIDI output directly and let us (the organizer team) handle sonification, please contact huan.zhang@qmul.ac.uk at submission time. We would most likely arrange a Disklavier recording at the Vienna office of the Institute of Computational Perception (CPJKU).
- Audio Output Systems: If your model outputs audio directly, describe any enhancement steps applied to the model's output, such as EQ, reverb, compression, or noise reduction.
- Controllability or Interventions: Clarify if the output is influenced by human-involved choices — such as selected tempo, dynamics range, segmentation, or annotated phrasing.
- MIDI Cleanup: If symbolic outputs were manually edited before submission (quantization, pedal edits, etc.), this should be documented.
Submissions should aim for minimal human intervention. Manual correction is allowed only if it is well-documented and justified in the report.
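For reference, a minimal sonification sketch of the kind mentioned above might call a local FluidSynth installation from Python. The soundfont and file names are placeholders, and this is not the organizers' official rendering pipeline:

import subprocess

# Render the submitted MIDI to a 44.1 kHz WAV file with an assumed local soundfont.
subprocess.run(
    ["fluidsynth", "-ni", "piano.sf2", "rendering.mid",
     "-F", "rendering.wav", "-r", "44100"],
    check=True,
)

If you use such a pipeline, the soundfont name and synthesizer settings are exactly the details that should appear in your report.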
Organizers
- Huan Zhang (Task Captain, Queen Mary University of London)
- Junyan Jiang (New York University)
- Simon Dixon (Queen Mary University of London)
- Gus Xia (MBZUAI)
- Akira Maezawa (Yamaha)
Contact: huan.zhang@qmul.ac.uk