2018:Drum Transcription

From MIREX Wiki
Revision as of 11:22, 27 June 2018

Description

Drum transcription is the task of detecting the positions in time of drum instrument onsets in polyphonic music and labeling their drum class. This information is a prerequisite for several applications and can also be used for other high-level MIR tasks. Because several new approaches have recently been presented, we propose to reintroduce this task. We will mainly stick to the format used in the first edition in 2005, but new datasets will be used. Only the three main drum instruments of drum kits for western pop music are considered. These are: bass drum, snare drum, and hi-hat (in all variations like open, closed, pedal, etc.).

In addition to the classic three-instrument task, we will also run an eight-class drum instrument task this year. Separate training and evaluation data will be used for this task.


Data

For evaluation, five different datasets will be used. Two of the three datasets from the 2005 drum detection MIREX task will serve as a baseline:

  • CD set
  • KT set

Additionally, three new datasets will be used. They contain polyphonic music of different genres, as well as drum-only tracks and some tracks without drums:

  • RBMA set (35 full-length polyphonic tracks, electronically produced and recorded, manually annotated and double-checked)
  • MEDLEY set (23 full-length tracks, recorded, manually annotated and double-checked)
  • GEN set (synthesized MIDI drum tracks and loops without accompaniment)


Audio Format

The input for this task is a set of sound files adhering to the format and content requirements listed below; a quick format check is sketched after the list.

  • All audio is 44100 Hz, 16-bit, mono WAV PCM
  • All available sound files will be used in their entirety (these can be short excerpts of 30 s or full-length music tracks of up to 7 minutes)
  • Some sound files will be recorded polyphonic music with drums (either live performances or studio recordings)
  • Some sound files will be rendered audio of MIDI files
  • Some sound files may not contain any drums
  • Both drums mixed with music and solo drums will be part of the set
  • Tracks with only the three drum instruments (or fewer) as well as tracks with full drum kits (including instruments not expected to be transcribed) will be part of the set
  • Drum kit sounds will cover a broad range: from naturally recorded and live kits to sampled drums and electronic synthesizers
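
As a reference, here is a minimal sketch of such a format check using only Python's standard wave module (the file name is just an example):

import wave

def check_audio_format(path):
    # Verify the task's expected format: 44100 Hz, 16-bit, mono WAV PCM.
    with wave.open(path, "rb") as w:
        assert w.getframerate() == 44100, "expected 44100 Hz"
        assert w.getsampwidth() == 2, "expected 16-bit samples"
        assert w.getnchannels() == 1, "expected mono"

check_audio_format("audio_file_10.wav")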


Training Data

  • A representative random subset of the data will be made available to all participants in advance of the evaluation - please contact the task captains!
  • Training data can be used by the participants as they please.
  • Training data will not be used again during the evaluation.
  • Usage of additional training data is discouraged. If additional training data is used, please state so in the submission.

I/O format

The input will be a directory containing audio files in the audio format specified above. There might be other files in the directory, so make sure to filter for ‘*.wav’ files.

The output will also be a directory. The algorithm is expected to process every file and generate an individual *.txt output file for every wav file, with the same base name, e.g. input: audio_file_10.wav → output: audio_file_10.txt.
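
To make the expected I/O behavior concrete, here is a minimal submission skeleton; the transcribe() function is a hypothetical placeholder for the actual detector, and the command line flags match the calling format given further below:

import argparse
import glob
import os

def transcribe(wav_path):
    # Hypothetical placeholder: return a list of (onset_seconds, label_code)
    # pairs for one audio file. The actual detection is the submission's job.
    return []

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("-i", dest="inputfolder", required=True)
    parser.add_argument("-o", dest="outputfolder", required=True)
    args = parser.parse_args()
    # Only process *.wav files; the input folder may contain other files.
    for wav_path in sorted(glob.glob(os.path.join(args.inputfolder, "*.wav"))):
        base = os.path.splitext(os.path.basename(wav_path))[0]
        out_path = os.path.join(args.outputfolder, base + ".txt")
        with open(out_path, "w", encoding="utf-8") as f:
            for onset, label in transcribe(wav_path):
                f.write("%.3f\t%d\n" % (onset, label))

if __name__ == "__main__":
    main()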

For transcription three drum instrument types are considered:

BD	0	bass drum
SD	1	snare drum
HH	2	hi-hat (any hi-hat like open, half-open, closed, ...)


Drum types are strictly these types only (so: no ride cymbals in the HH, no toms in the BD, no claps or side sticks/rim shots in the SD, etc.). This involves the following remapping from other labels to these 3 base labels:


name               midi   label   code
bass drum          36     BD      0
snare drum         38     SD      1
closed hi-hat      42     HH      2
open hi-hat        46     HH      2
pedal hi-hat       44     HH      2
cowbell            56
ride bell          53
low floor tom      41
high floor tom     43
low tom            45
low-mid tom        47
high-mid tom       48
high tom           50
side stick         37
hand clap          39
ride cymbal        51
crash cymbal       49
splash cymbal      55
chinese cymbal     52
shaker, maracas    70
tambourine         54
claves, sticks     75

All annotations are remapped to these three labels in advance (no looking back to the broader labels afterwards).
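
For reference, the 3-class remapping can be written as a small Python dictionary from MIDI note numbers to class codes; instruments that are absent from it are simply dropped:

# MIDI note number -> 3-class code (0 = BD, 1 = SD, 2 = HH).
# Instruments not listed here (toms, cymbals, cowbell, ...) are not
# transcribed in the 3-class task and their onsets are dropped.
MIDI_TO_3CLASS = {
    36: 0,               # bass drum
    38: 1,               # snare drum
    42: 2, 46: 2, 44: 2  # closed / open / pedal hi-hat
}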

The annotation files, as well as the expected output of the algorithms, will have the following format: a text file (UTF-8 encoding) with no header or footer, where each line represents one instrument onset in the following format:

<TTT.TTT> \t <LL> \n

Where <TTT.TTT> is a floating point number with 3 decimals (ms accuracy), followed by a tab, and <LL> is the label of the drum instrument onset as defined above (either the number or the string), followed by a newline. If multiple onsets occur at exactly the same time, separate lines with the same timestamp are expected.

Example of the content of an output file:

[test_file_0.txt]
<start-of-file>
0.125	0
0.125	2
0.250	2
0.375	1
0.375	2
0.500	2
0.625	0
0.625	2
0.750	2
0.875	1
0.875	2
1.000	2
<end-of-file>

Annotation files for the public subset will have the same format.
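
A minimal reader for this format might look as follows; this is a convenience sketch, not part of the official evaluation code:

def read_onsets(path):
    # Parse an annotation or output file: one '<time>\t<label>' pair per line.
    onsets = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            time_str, label = line.split("\t")
            onsets.append((float(time_str), label))
    return onsets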


8 class labels

For the 8-class data, the labels are as follows:

BD	0	bass drum
SD	1	snare drum
TT	2	any tom-tom
HH	3	hi-hat (any hi-hat like open, half-open, closed, ...)
CY	4	cymbal (any crashed cymbal, e.g.: crash, splash, chinese)
RD	5	ride (not crashed)
CB	6	cowbell (and ride bell)
CL	7	clave / sticks


name               midi   label   code
bass drum          36     BD      0
snare drum         38     SD      1
closed hi-hat      42     HH      3
open hi-hat        46     HH      3
pedal hi-hat       44     HH      3
cowbell            56     CB      6
ride bell          53     CB      6
low floor tom      41     TT      2
high floor tom     43     TT      2
low tom            45     TT      2
low-mid tom        47     TT      2
high-mid tom       48     TT      2
high tom           50     TT      2
side stick         37
hand clap          39
ride cymbal        51     RD      5
crash cymbal       49     CY      4
splash cymbal      55     CY      4
chinese cymbal     52     CY      4
shaker, maracas    70
tambourine         54
claves, sticks     75     CL      7
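
Analogous to the 3-class case, the 8-class remapping can be expressed as a dictionary; the hypothetical remap_events() helper shows how unmapped instruments (side stick, hand clap, shaker/maracas, tambourine) are dropped:

# MIDI note number -> 8-class code; unmapped instruments are dropped.
MIDI_TO_8CLASS = {
    36: 0,                                     # bass drum (BD)
    38: 1,                                     # snare drum (SD)
    41: 2, 43: 2, 45: 2, 47: 2, 48: 2, 50: 2,  # tom-toms (TT)
    42: 3, 46: 3, 44: 3,                       # hi-hats (HH)
    49: 4, 55: 4, 52: 4,                       # crash/splash/chinese (CY)
    51: 5,                                     # ride cymbal (RD)
    56: 6, 53: 6,                              # cowbell / ride bell (CB)
    75: 7,                                     # claves / sticks (CL)
}

def remap_events(events):
    # events: (time, midi_note) pairs -> (time, class_code); unmapped notes dropped.
    return [(t, MIDI_TO_8CLASS[n]) for t, n in events if n in MIDI_TO_8CLASS]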

Evaluation

  • F-measure (harmonic mean of precision and recall, beta parameter 1, i.e. equal weight on precision and recall) is calculated for each of the three drum types (BD, SD, and HH), resulting in three F-measure scores.
  • Additionally, a total F-measure score over all onsets of all instrument classes will be calculated.
  • Calculation time: the time for the complete run, from the moment your algorithm starts until the moment it stops, will be reported.

Evaluation parameters:

  • The tolerance for onset-deviation errors in calculating the above F-measure is 30 ms (so a range of [-30 ms, +30 ms] around the true onset times); a sketch of this matching is given after this list.
  • Any parameter adaptation (e.g. for peak picking) must be done on public data, i.e. in advance.
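
As a sketch of how we understand the matching: each detection can match at most one unmatched ground-truth onset of the same class within the ±30 ms tolerance; the official evaluation code may break ties differently:

def f_measure(detections, ground_truth, tolerance=0.030):
    # Greedy onset matching for one instrument class.
    # detections, ground_truth: lists of onset times in seconds.
    unmatched = list(ground_truth)
    tp = 0
    for d in detections:
        # Match the detection to the closest unmatched true onset within tolerance.
        candidates = [t for t in unmatched if abs(t - d) <= tolerance]
        if candidates:
            unmatched.remove(min(candidates, key=lambda t: abs(t - d)))
            tp += 1
    precision = tp / len(detections) if detections else 0.0
    recall = tp / len(ground_truth) if ground_truth else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)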

Conditions:

  • The actual drum sounds (sound samples) used in any of the input audio are not public and not used for training.
  • Participants are encouraged to only use the provided training data for training and parameter optimization.

If this is not possible, it should be stated explicitly that additional data was used, and which. In this case it would be preferable to submit two versions: one trained with the public data only, and one trained using the additional data.


Packaging submissions

  • Participants only send in the application part of their algorithm, not the training part (if there is one)
  • Algorithms must adhere to the specifications on the MIREX web page


Command line calling format

Python:

python <your_script_name.py> -i <inputfolder> -o <outputfolder>

Matlab:

"<path_to_matlab>\matlab.exe" -nodisplay -nosplash -nodesktop -r "try, <your_script_name>(<inputfolder>, <outputfolder>), catch me, fprintf('%s / %s\n',me.identifier,me.message), end, exit"

Sonic Annotator:

Contact the task captains to define the format.

Time, Software and Hardware limits

Max runtime: TBA; we must be able to run it in the time given.

Software: Python preferred; Matlab or Sonic Annotator may also be used.

Submission closing date

August 11th 2018