2016:Singing Voice Separation
Description
The singing voice separation task solicits competing entries to blindly separate the singer's voice from pop music recordings. The entries are evaluated using standard metrics (see Evaluation below).
Task specific mailing list
All discussions take place on the MIREX "EvalFest" list. If you have a question or comment, simply include the task name in the subject heading.
Data
A collection of 100 clips of recorded pop music (vocals plus music) is used to evaluate the singing voice separation algorithms (these are the hidden parts of the iKala dataset). If your algorithm is a supervised one, you are welcome to use the public part of the iKala dataset for training. In addition, you can train with 30-second segments from the SiSEC MUS challenge.
Collection statistics:
- Size of collection: 100 clips
- Audio details: 16-bit, mono, 44.1kHz, WAV
- Duration of each clip: 30 seconds
For more information about the iKala dataset, see T.-S. Chan, T.-C. Yeh, Z.-C. Fan, H.-W. Chen, L. Su, Y.-H. Yang, and R. Jang, "Vocal activity informed singing voice separation with the iKala dataset," in Proc. IEEE Int. Conf. Acoust., Speech and Signal Process., 2015, pp. 718-722.
This is a comment by Marius Miron: how does the hidden dataset differ from the known one? Supervised approaches need to know this in order to decide which transformations to include in the training stage. Does it have different tempos, voices, timbres, genres, or audio amplitudes? What are the factors that your algorithm needs to be robust to?
Evaluation
For evaluation we use Vincent et al.'s (2012) Source to Distortion Ratio (SDR), Source to Interferences Ratio (SIR), and Sources to Artifacts Ratio (SAR), as implemented by bss_eval_sources.m in BSS Eval Version 3.0 (but with the permutation part removed, since deciding which output is the voice and which is the accompaniment is part of the singing voice separation challenge). All signals will be normalized to enable a fairer evaluation. More specifically, the function will be invoked as follows:
>> trueVoice = wavread('trueVoice.wav');
>> trueKaraoke = wavread('trueKaraoke.wav');
>> trueMixed = trueVoice + trueKaraoke;
>> [estimatedVoice, estimatedKaraoke] = wrapper_function_calling_your_separation_algorithm(trueMixed);
>> [SDR, SIR, SAR] = bss_eval_sources([estimatedVoice estimatedKaraoke]' / norm(estimatedVoice + estimatedKaraoke), [trueVoice trueKaraoke]' / norm(trueVoice + trueKaraoke));
>> [NSDR, NSIR, NSAR] = bss_eval_sources([trueMixed trueMixed]' / norm(trueMixed + trueMixed), [trueVoice trueKaraoke]' / norm(trueVoice + trueKaraoke));
>> NSDR = SDR - NSDR;
>> NSIR = SIR - NSIR;
>> NSAR = SAR - NSAR;
The final scores will be determined by the mean over all 100 clips (note that GSIR and GSAR are not normalized):

$\mathrm{GNSDR} = \frac{1}{100} \sum_{k=1}^{100} \mathrm{NSDR}_k$,

$\mathrm{GSIR} = \frac{1}{100} \sum_{k=1}^{100} \mathrm{SIR}_k$,

$\mathrm{GSAR} = \frac{1}{100} \sum_{k=1}^{100} \mathrm{SAR}_k$.
In addition, the standard deviation, minimum, maximum, and median of the per-clip scores will also be reported.
Remark. You may assume that trueMixed is always in the range of [-1,1].
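To make the aggregation above concrete, here is a minimal MATLAB sketch (not the official evaluation script); it assumes the per-clip scores have already been collected into 100-element vectors named nsdr, sir, and sar, which are illustrative names only:

% Aggregation sketch (illustrative only; nsdr, sir, sar are hypothetical
% 100-element vectors holding the per-clip NSDR, SIR, and SAR values).
GNSDR = mean(nsdr);    % normalized score, averaged over all clips
GSIR  = mean(sir);     % not normalized
GSAR  = mean(sar);     % not normalized
summary = @(x) [std(x), min(x), max(x), median(x)];   % sd, min, max, median
nsdrSummary = summary(nsdr);
sirSummary  = summary(sir);
sarSummary  = summary(sar);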
Submission format
Participants are required to submit an entry that takes an input filename (a full native pathname ending in *.wav) and an output directory as arguments. The entry must write its separated outputs to *-voice.wav and *-music.wav under the output directory. For example:
function singing_voice_separation(infile, outdir)
[~, name, ext] = fileparts(infile);
your_algorithm(infile, fullfile(outdir, [name '-voice' ext]), fullfile(outdir, [name '-music' ext]));

function your_algorithm(infile, voiceoutfile, musicoutfile)
mixed = wavread(infile);
% Insert your algorithm here
wavwrite(voice, 44100, voiceoutfile);
wavwrite(music, 44100, musicoutfile);
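As an illustration of the expected behavior (the clip name and paths below are hypothetical), a call such as

>> singing_voice_separation('/path/to/clip001.wav', '/path/to/outdir');

would be expected to produce /path/to/outdir/clip001-voice.wav and /path/to/outdir/clip001-music.wav.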
If scratch space is required, please use the three-argument format instead:
function singing_voice_separation(infile, outdir, tmpdir)
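A minimal sketch of how the scratch directory might be used is shown below; the scratch file name and the extra argument passed to your_algorithm are assumptions made for illustration, not part of the required interface:

function singing_voice_separation(infile, outdir, tmpdir)
[~, name, ext] = fileparts(infile);
scratchfile = fullfile(tmpdir, [name '-features.mat']);   % intermediate data is written under tmpdir (hypothetical file name)
your_algorithm(infile, fullfile(outdir, [name '-voice' ext]), ...
               fullfile(outdir, [name '-music' ext]), scratchfile);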
Following the convention of other MIREX tasks, an extended abstract is also required (see the MIREX 2016 Submission Instructions below). For supervised submissions, please provide training details (e.g. the datasets used) in the extended abstract.
Packaging submissions
All submissions should be statically linked to all libraries (the presence of dynamically linked libraries cannot be guaranteed).
- Be sure to follow the Best Coding Practices for MIREX.
- Be sure to follow the MIREX 2016 Submission Instructions. In particular, under Very Important Things to Note, Clause 6 states that if you plan to submit more than one algorithm or algorithm variant to a given task, EACH algorithm or variant needs its own complete submission, including the README and binary bundle upload. Each package will be given its own unique identifier. Tell us in the README the priority of each algorithm in case we have to limit a task to only one or two algorithms/variants per submitter/team. [Note: our current limit is two entries per team.]
All submissions should include a README file containing the following information:
- Command line calling format for all executables and an example formatted set of commands
- Number of threads/cores used or whether this should be specified on the command line
- Expected memory footprint
- Expected runtime
- Approximately how much scratch disk space will the submission need to store any feature/cache files?
- Any required environments/architectures (and versions), e.g. python, java, bash, matlab.
- Any special notices regarding running your algorithm
Note that the information that you place in the README file is extremely important in ensuring that your submission is evaluated properly.
Time and hardware limits
Due to the potentially high number of participants in this and other audio tasks, hard limits on the runtime of submissions are specified.
A hard limit of 24 hours will be imposed on runs. Submissions that exceed this runtime may not receive a result.
Potential Participants
name / email