2015:Set List Identification

From MIREX Wiki
Revision as of 09:12, 11 May 2015

Description

This task is new for 2015!

This task requires that an algorithm identify the set list (the sequence of songs) of a live concert. (See Set list)

Recently, more and more full-length live concert videos have become available on websites (e.g. YouTube). Most of them lack sufficient descriptive information, such as the set list and the start/end time of each song. In this task, we collect the audio of live concerts and studio songs, and apply music information retrieval techniques to answer this question: which songs were performed in the concert, and when does each song start and end?

For this first step of the task, we assume that the artist is known and that, in the live concert, the performers play only their studio songs. The ultimate goal is that, given a full-length live concert audio recording and a studio song database, we can still find the set list and the start/end time of each song.

There are two sub tasks in this task:

Sub task 1: Set list identification

  • To identify the set list (the song sequence only) of a live concert.

Given a live concert audio recording and a studio song dataset for a specific artist, and assuming all songs performed in the live concert are included in the studio song dataset, identify the set list of the live concert.

Sub task 2: Boundary identification

  • To identify the start/end time of each song in the set list

Given a live concert audio recording, its set list, and a studio song dataset for a specific artist, identify the start time and end time of each song in the live concert.

Data

To satisfy our assessment, we pre-process all audio, removing "out-of-artist" songs (songs not by the given artist) from the audio. (See the description)

We provide two sets for this task; participating algorithms will have to read audio in the following format.

  • Sample rate: 44.1 KHz
  • Sample size: 16 bit
  • Number of channels: 1 (mono)
  • Encoding: WAV
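As a sanity check, a submission can verify each input file against this format before processing. A minimal sketch using Python's standard `wave` module (the function name is illustrative, not part of the task):

```python
import wave

def check_task_format(path):
    """Return True if a WAV file matches the task's audio format:
    44.1 kHz sample rate, 16-bit samples, mono."""
    with wave.open(path, "rb") as w:
        return (w.getframerate() == 44100
                and w.getsampwidth() == 2   # 16 bit = 2 bytes per sample
                and w.getnchannels() == 1)
```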

Developing set

This set contains 3 artists and 7 live concerts. The following information will be released:

  • artist
  • live concert name and links
  • studio collection list
  • start/end time tags

Collection statistics:

  • 3 artists
  • 7 live concerts
  • 279 tracks

Testing set

This set contains 7 artists and 13 live concerts. No information will be released.

Collection statistics:

  • 7 artists
  • 13 live concerts
  • 873 tracks

Evaluation

Sub task 1

Sub task 2

  • average time boundary

The set list contains <math>N</math> songs.

Start time of ground truth: <math>sBD_{GT}</math>

End time of ground truth: <math>eBD_{GT}</math>

Start time of identification result: <math>sBD_{ID}</math>

End time of identification result: <math>eBD_{ID}</math>

<math> AVGsBD = \frac{|sBD_{GT} - sBD_{ID}|}{N} </math>,

<math> AVGeBD = \frac{|eBD_{GT} - eBD_{ID}|}{N} </math>,
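Reading the average time boundary metric as the mean absolute deviation between ground-truth and identified boundaries over the N songs of the set list, it can be sketched as follows (the example times are made up for illustration):

```python
def avg_boundary_deviation(gt_times, id_times):
    """Mean of |ground truth - identification result| over the
    N songs of the set list, with times given in seconds."""
    assert len(gt_times) == len(id_times)
    n = len(gt_times)
    return sum(abs(g, ) if False else abs(g - i) for g, i in zip(gt_times, id_times)) / n

# Hypothetical start times (seconds) for a two-song set list:
s_gt = [443.5, 843.0]   # ground-truth start boundaries
s_id = [445.0, 840.0]   # identified start boundaries
avg_sbd = avg_boundary_deviation(s_gt, s_id)   # (1.5 + 3.0) / 2 = 2.25
```

The same function applied to end times gives AVGeBD.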

Runtime performance

In addition, computation times for feature extraction and training/classification will be measured.

Submission Format

Submissions to this task will have to conform to a specified format, detailed below.

  • \n is the end-of-line character

Implementation details

We recommend the following folder structure for your submission:

/root_folder/... all the code you submitted
/root_folder/extract_feature/... all features you extracted
/root_folder/output/... the folder to save results

Sub task 1

Input file

The input studio song list file will be of the form:

/path/to/artist_1/studio/song/001.wav\n  1st
/path/to/artist_1/studio/song/002.wav\n  2nd
/path/to/artist_1/studio/song/003.wav\n  3rd
... 

The input live concert list file will be of the form:

/path/to/artist_1/live/concert/001.wav\n

Output file

The output is a list file (a song ID sequence). The song ID is the position of the song in the input list file, not the file name of the *.wav file.

3\n   <-- 003.wav is the first song of set list for your identification result
17\n
59\n
...
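Since a song ID is the 1-based line position in the studio song list (003.wav above is ID 3 because it is the third line), a submission that matches file paths internally can map them back to IDs; a sketch, with the helper name and paths purely illustrative:

```python
def paths_to_song_ids(studio_list_lines, matched_paths):
    """Map matched studio-song paths to 1-based song IDs, i.e.
    their line positions in the studio song list file."""
    position = {line.strip(): i + 1
                for i, line in enumerate(studio_list_lines)}
    return [position[p] for p in matched_paths]

studio_list = ["/path/to/artist_1/studio/song/001.wav\n",
               "/path/to/artist_1/studio/song/002.wav\n",
               "/path/to/artist_1/studio/song/003.wav\n"]
# If 003.wav was matched first in the concert, the output starts with ID 3:
ids = paths_to_song_ids(studio_list,
                        ["/path/to/artist_1/studio/song/003.wav"])  # -> [3]
```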

Sub task 2

Input file

The input is a list of song IDs (a song ID sequence); each song ID is the position of the song in the studio song list file.

Your system should read the *.wav files in that order and find the time boundaries of each song.

3\n
17\n
59\n
...

Output file

The output time boundary list file will be of the form:

  • round each time boundary to the nearest millisecond
  • \t is a tab character
Start time                           end time
hours.minutes.seconds.milliseconds \t hours.minutes.seconds.milliseconds\n  ...for input song ID 3
hours.minutes.seconds.milliseconds \t hours.minutes.seconds.milliseconds\n  ...for input song ID 17
hours.minutes.seconds.milliseconds \t hours.minutes.seconds.milliseconds\n  ...for input song ID 59
... 

Examples:

0.7.23.521    0.13.24.512
0.14.3.021    0.19.53.38
0.20.9.893    0.27.15.987
...
...
0.56.22.433    1.1.46.593
1.3.51.146    1.9.21.138
...
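The timestamp fields are dot-separated hours, minutes, seconds, and milliseconds (e.g. 0.7.23.521 is 0 h 7 m 23 s 521 ms). A sketch of formatting a boundary given in seconds; the examples above are ambiguous about zero-padding, so this version pads milliseconds to three digits:

```python
def format_boundary(seconds):
    """Format a time in seconds as hours.minutes.seconds.milliseconds,
    rounded to the nearest millisecond."""
    total_ms = round(seconds * 1000)
    hours, rest = divmod(total_ms, 3_600_000)
    minutes, rest = divmod(rest, 60_000)
    secs, ms = divmod(rest, 1000)
    return f"{hours}.{minutes}.{secs}.{ms:03d}"

# One output row is two such timestamps separated by a tab:
row = format_boundary(443.521) + "\t" + format_boundary(804.512)
```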

Packaging submissions

All submissions should be statically linked to all libraries (the presence of dynamically linked libraries cannot be guaranteed).

All submissions should include a README file including the following information:

  • Which task(s) you want to participate in (sub task 1, sub task 2, or both)
  • Command line calling format for all executables and an example formatted set of commands
  • Number of threads/cores used or whether this should be specified on the command line
  • Expected memory footprint
  • Expected runtime
  • Any required environments (and versions), e.g. python, java, bash, matlab.

Time and hardware limits

Due to the potentially high number of participants in this and other audio tasks, hard limits on the runtime of submissions are specified.

A hard limit of 72 hours will be imposed on runs (total feature extraction and querying times). Submissions that exceed this runtime may not receive a result.

Potential Participants

name / email