2006:Audio Melody Extraction
Results are on the 2006:Audio Melody Extraction Results page.
To extract the melody line from polyphonic audio.
The aim of the MIREX audio melody extraction evaluation is to identify the melody pitch contour from polyphonic musical audio. The task consists of two parts: voicing detection (deciding whether a particular time frame contains a "melody pitch" or not) and pitch detection (deciding the most likely melody pitch for each time frame). We structure the submission so that these parts can be done independently, i.e. it is possible (via a negative pitch value) to guess a pitch even for frames judged unvoiced. Algorithms that do not discriminate between melodic and non-melodic parts are also welcome!
(The audio melody extraction evaluation will be essentially a re-run of last year's contest, i.e. the same test data is used.)
- 25 phrase excerpts of 10-40 sec from the following genres: Rock, R&B, Pop, Jazz, Solo classical piano
- CD-quality (PCM, 16-bit, 44100 Hz)
- single channel (mono)
- manually annotated reference data (10 ms time grid)
- In order to allow for generalization among potential approaches (i.e. frame size, hop size, etc), submitted algorithms should output pitch estimates, in Hz, at discrete instants in time
- each line of the output file thus contains a time stamp, a space or tab, and the corresponding frequency value
- the time grid of the reference file is 10 ms, yet the submission may use a different time grid as output (for example 5.8 ms)
- Instants identified as unvoiced (no dominant melody) can be reported either as 0 Hz or as a negative pitch value. If negative pitch values are given, the Raw Pitch Accuracy and Raw Chroma Accuracy statistics may be improved, since the pitch guess is still scored for those frames.
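The scoring implied by the points above can be sketched as follows. This is an illustrative reimplementation, not the official MIREX evaluation code: the function names, the nearest-neighbour resampling onto the 10 ms reference grid, and the half-semitone tolerance are assumptions for the sketch. Negative estimate values count as unvoiced for voicing detection, but their absolute value is still scored for pitch and chroma accuracy.

```python
import numpy as np

def resample_to_grid(times, freqs, grid):
    """Nearest-neighbour resampling of (time, frequency) estimates onto
    the reference time grid, e.g. from a 5.8 ms hop onto the 10 ms grid."""
    idx = np.clip(np.searchsorted(times, grid), 0, len(times) - 1)
    left = np.clip(idx - 1, 0, len(times) - 1)
    choose_left = np.abs(grid - times[left]) < np.abs(grid - times[idx])
    return freqs[np.where(choose_left, left, idx)]

def evaluate(ref_freq, est_freq, tol_semitones=0.5):
    """Frame-level raw pitch/chroma accuracy over voiced reference frames,
    plus voicing recall. Assumes both arrays share the same time grid."""
    ref_voiced = ref_freq > 0
    est_voiced = est_freq > 0
    est_pitch = np.abs(est_freq)  # negative values still carry a pitch guess
    valid = ref_voiced & (est_pitch > 0)
    st_err = np.zeros_like(ref_freq)
    st_err[valid] = 12 * np.log2(est_pitch[valid] / ref_freq[valid])
    pitch_ok = valid & (np.abs(st_err) <= tol_semitones)
    # chroma: fold octave errors away before applying the tolerance
    chroma_err = st_err - 12 * np.round(st_err / 12)
    chroma_ok = valid & (np.abs(chroma_err) <= tol_semitones)
    n_voiced = max(ref_voiced.sum(), 1)
    return {
        "raw_pitch": pitch_ok.sum() / n_voiced,
        "raw_chroma": chroma_ok.sum() / n_voiced,
        "voicing_recall": (ref_voiced & est_voiced).sum() / n_voiced,
    }
```

A frame reported as -440 Hz, for example, is treated as unvoiced by the voicing measure but can still earn raw pitch credit if the reference frame is a 440 Hz melody note.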
Relevant Test Collections
- For the ISMIR 2004 Audio Description Contest, the Music Technology Group of Pompeu Fabra University assembled a diverse set of audio segments and corresponding melody transcriptions, including audio excerpts from such genres as Rock, R&B, Pop, Jazz, Opera, and MIDI. (full test set with the reference transcriptions (28.6 MB))
- Graham's collection: the test set and further explanations are available on the pages http://www.ee.columbia.edu/~graham/mirex_melody/ and http://labrosa.ee.columbia.edu/projects/melody/
ATTENTION! The timing offsets (and time grids) of the test collections vary. Use Graham's collection to adjust the timing offset of your algorithm.
The following researchers have confirmed their interest in participating:
- Karin Dressler (Fraunhofer IDMT) - email@example.com - Likely
- Matti Ryynänen and Anssi Klapuri (Tampere University of Technology) - firstname.lastname@example.org - Likely
- Graham Poliner and Dan Ellis (Columbia University)
- Paul Brossier (Queen Mary, University of London)
Additional potential participants include:
- Emmanuel Vincent (Queen Mary, University of London)
- Rui Pedro Paiva (University of Coimbra)
- Matija Marolt (University of Ljubljana)
- Masataka Goto (AIST)
In the absence of better suggestions, I propose that we re-run the 2005 audio melody extraction evaluation in 2006, i.e. use the same test data at least. We may subsequently come up with some improved metrics (particularly with a view to significance testing), but basically the results will be directly comparable with last year's. --Dpwe 15:28, 23 March 2006 (CST)
published and "secret" test data:
I would find it very useful if melody extraction results were computed for two different data sets - one published (for example the ISMIR 2004 data set), one "secret" (the MIREX 2005 data set). This way the new results will be comparable with last year's results, and at the same time we gain better insight into the performance of the algorithms in very specific situations, because we have access to the music files. I think this knowledge would be very valuable for the further development of the algorithms.