2018:Automatic Lyrics-to-Audio Alignment

Description

The task of automatic lyrics-to-audio alignment has as an end goal the synchronization between an audio recording of singing and its corresponding written lyrics. The beginning timestamps of lyrics units can be estimated at different granularities: phonemes, words, lyrics lines, or phrases. For this task, word-level alignment is required.


Subtask 1: Mandarin Chinese pop music songs

Training Datasets

MIR-1k Dataset

The original MIR-1k dataset can be downloaded here. It contains 1000 song clips in which the musical accompaniment and the clean singing voice are recorded in the left and right channels, respectively. The duration of each clip ranges from 4 to 13 seconds, and the total length of the dataset is 133 minutes. The original dataset also contains the corresponding lyrics in traditional Mandarin Chinese characters. We automatically converted the lyrics into (1) simplified Mandarin and (2) pinyin format. Here is the link. The pinyin-format lyrics have been manually corrected.
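
As an illustration, here is a minimal sketch of such a character-to-pinyin conversion using the third-party pypinyin package; this is an assumption for illustration only, not necessarily the tool used to build the dataset.

# Sketch: converting simplified Mandarin lyrics to pinyin (assumes pypinyin).
from pypinyin import lazy_pinyin

def lyrics_line_to_pinyin(line):
    # Returns the line as space-separated pinyin syllables without tone marks.
    return " ".join(lazy_pinyin(line))

print(lyrics_line_to_pinyin("你好世界"))  # -> "ni hao shi jie"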

Jingju a cappella singing Dataset

Jingju (also known as Peking or Beijing opera) is a form of Chinese opera which combines music, vocal performance, mime, dance and acrobatics. The language used in jingju is a combination of Beijing Mandarin and the Jiangsu, Anhui, and Hubei dialects. The jingju a cappella singing dataset has 3 parts, each containing annotations (annotation_txt files) at the phrase and syllable levels in simplified Mandarin characters and pinyin formats.

The pinyin annotation is manually verified.

MIREX 2018 Mandarin pop song dataset

The dataset contains 20 Mandarin Chinese pop music songs with annotations of beginning and ending timestamps of each phrase. The lyrics are in both simplified Mandarin and pinyin formats. The dataset has two parts:

  • Clean singing: 10 songs sung by amateur singers. Google Drive link; Baidu Pan link
  • Separated singing: 10 songs source-separated from the mixed recordings. (Link to be announced)

The pinyin lyrics are manually verified.

Evaluation Datasets

The dataset contains 10 Mandarin Chinese pop music songs collected at the same time as the MIREX 2018 Mandarin pop song training dataset. 5 songs are sung by amateur singers and another 5 songs are source-separated from the mixed recordings.

Phonetization

We provide the pinyin (https://github.com/ronggong/MIREX-2018-Automatic-Lyrics-to-Audio-Alignment/blob/master/lm_pinyin/lexicon.txt) and phoneme (https://github.com/ronggong/MIREX-2018-Automatic-Lyrics-to-Audio-Alignment/blob/master/lm_phone/lexicon.txt) lexicons which correspond to the lyrics annotations of the training datasets.
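
For illustration, here is a minimal sketch of loading such a lexicon into a Python dict, assuming one entry per line in the form "word phone1 phone2 ..." (verify against the actual lexicon.txt layout before relying on this):

# Sketch: parse a pronunciation lexicon, assuming whitespace-separated entries.
def load_lexicon(path):
    lexicon = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.split()
            if parts:
                lexicon[parts[0]] = parts[1:]  # word -> list of phones
    return lexicon

# Usage (hypothetical file name): load_lexicon("lexicon.txt").get("ni")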

Audio Format

All recordings used for Subtask 1 are in wav format.

  • MIR-1k dataset: 16kHz sampling rate, left channel - musical accompaniment, right channel - clean singing voice
  • Jingju a cappella singing dataset: 44.1kHz sampling rate, mono channel
  • MIREX 2018 Mandarin pop song dataset: 44.1kHz sampling rate, mono channel


Subtask 2: English pop music songs

Training Dataset

The DAMP dataset contains a large number (34,000+) of a cappella recordings from a wide variety of amateur singers, collected with the Sing! Karaoke mobile app in different recording conditions, but generally with good audio quality. A carefully curated subset, DAMPB, with 20 performances of each of 300 songs, was created by Kruspe (2016). Here is the list of recordings.

Evaluation Datasets

Hansen's Dataset

The dataset contains 9 pop music songs in English with annotations of both the beginning and ending timestamps of each word. The ending timestamps are provided for convenience (they are copies of the next word's beginning timestamp) and are not used in the evaluation. Non-vocal segments are assigned the special word BREATH*. Sentence-level annotations are also provided. The audio comes in two versions: the original with instrumental accompaniment, and an a cappella version with the singing voice only. An example song can be seen here.

You can read in detail about how the dataset was made here: Recognition of Phonemes in A-cappella Recordings using Temporal Patterns and Mel Frequency Cepstral Coefficients. The dataset has been kindly provided by Jens Kofod Hansen.

  • file duration up to 4:40 minutes (total time: 35:33 minutes)
  • 3590 words annotated in total

Mauch's Dataset

The dataset contains 20 pop music songs in English with annotations of the beginning timestamp of each word. Non-vocal sections are not explicitly annotated (they remain included in the last preceding word). We prefer to leave it this way, to enable comparison with previous work evaluated on this dataset. The audio has instrumental accompaniment. An example song can be seen here. Note that "_" is used instead of "'" in the annotations.

You can read in detail about how the dataset was used for the first time here: Integrating Additional Chord Information Into HMM-Based Lyrics-to-Audio Alignment. The dataset has been kindly provided by Sungkyun Chang.

  • file duration up to 5:40 (total time: 1:19:12 hours)
  • 5050 words annotated in total

Gracenote Dataset

The dataset contains 15 pop music song excerpts in English with annotations of the beginning timestamp of each word. 8 song excerpts have instrumental accompaniment. The other 7 song excerpts have two versions: with instrumental accompaniment and a cappella singing only.

  • file duration up to 1:11 (total time: 11:42 minutes)
  • 1181 words annotated in total

Phonetization

A popular choice for phonetization of the words is the CMU pronunciation dictionary. One can phonetize them with the online tool. A list of all words of both datasets which are outside the CMU vocabulary is given here.
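
As an alternative to the online tool, here is a minimal sketch of looking up CMU pronunciations programmatically via NLTK's copy of the dictionary (an assumption for illustration; requires nltk and the cmudict corpus):

# Sketch: CMU dictionary lookup with NLTK (run nltk.download('cmudict') once).
from nltk.corpus import cmudict

pron = cmudict.dict()        # maps word -> list of phone sequences
print(pron.get("music"))     # e.g. [['M', 'Y', 'UW1', 'Z', 'IH0', 'K']]

# Out-of-vocabulary words need manual entries or a grapheme-to-phoneme fallback.
oov = [w for w in ["music", "yeahhh"] if w not in pron]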

Audio Format

The data are sound wav/mp3 files, plus the associated word boundaries (in csv-like .txt/.tsv files).

  • CD-quality (PCM, 16-bit, 44100 Hz)
  • single channel (mono) for a cappella and two channels for original


Evaluation

The submitted algorithms for both subtasks will be evaluated at the word boundaries on the original mixed songs (a cappella singing + instrumental accompaniment). Evaluation metrics computed only on the a cappella singing will be reported as well, to give insight into the impact of instrumental accompaniment on each algorithm, but will not be considered for the ranking.

Average absolute error/deviation. Initially utilized in Mesaros and Virtanen (2008), the absolute error measures the time displacement between the actual timestamp and its estimate at the beginning and the end of each lyrical unit. The error is then averaged over all individual errors. Measuring the error in absolute terms has the drawback that an error of the same duration can be perceived differently depending on the tempo of the song. Here is a test of using this metric.
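
A minimal sketch of this metric, assuming ref_onsets and est_onsets are equal-length lists of word boundary times in seconds (this mirrors the definition above, not the exact eval.py implementation):

# Sketch: average absolute alignment error in seconds.
import numpy as np

def average_absolute_error(ref_onsets, est_onsets):
    return float(np.mean(np.abs(np.asarray(ref_onsets) - np.asarray(est_onsets))))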

Percentage of correct segments. The perceptual dependence on tempo is mitigated by measuring the ratio of the total length of correctly labeled segments to the total duration of the song. This metric is suggested by Fujihara et al. (2011), Figure 9. Here is a test of using this metric.
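
A minimal sketch of this metric, assuming ref and est are lists of (onset, offset) pairs for the same word sequence (again a reading of the definition, not the official implementation):

# Sketch: fraction of the song's duration that is labeled correctly.
def percentage_correct_segments(ref, est, total_duration):
    correct = 0.0
    for (r_on, r_off), (e_on, e_off) in zip(ref, est):
        correct += max(0.0, min(r_off, e_off) - max(r_on, e_on))  # overlap
    return correct / total_duration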

Percentage of correct estimates according to a tolerance window. This metric takes into consideration that onset displacements from the ground truth below a certain threshold can be tolerated by human listeners. We use 0.3 seconds as the tolerance window. The metric is suggested in Integrating Additional Chord Information Into HMM-Based Lyrics-to-Audio Alignment. Here is a test of using this metric.
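
A minimal sketch of this metric under the same assumptions as above:

# Sketch: fraction of word onsets within the 0.3 s tolerance window.
import numpy as np

def percentage_correct_onsets(ref_onsets, est_onsets, window=0.3):
    errors = np.abs(np.asarray(ref_onsets) - np.asarray(est_onsets))
    return float(np.mean(errors <= window))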

For more detailed definitions and formulas for the metrics, please check Section 2.2.1 of this thesis.

To obtain all three metrics for one detected output:

python eval.py <file path of the reference word boundaries> <file path of the detected word boundaries>

Note that evaluation scripts depend on mir_eval (https://github.com/craffel/mir_eval/).


Submission Format

Submissions to this task will have to conform to the format detailed below. Submissions should be packaged and contain at least two files: the algorithm itself and a README containing contact information and detailing, in full, the use of the algorithm.

Input Data

Participating algorithms will have to read audio in the following format:

  • Audio for the original songs in wav (stereo)
  • Lyrics in a .txt file, where words are separated by spaces and lyrics lines are separated by newlines (see the reading sketch below).
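
A minimal reading sketch, assuming the third-party soundfile package for the audio (any wav reader works) and hypothetical file names:

# Sketch: read the stereo wav and the lyrics file.
import soundfile as sf

audio, sample_rate = sf.read("song.wav")     # stereo: audio.shape == (n_samples, 2)

with open("lyrics.txt", encoding="utf-8") as f:
    lines = [line.split() for line in f if line.strip()]
words = [w for line in lines for w in line]  # flat word list to align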

Output File Format

The alignment output file format is a tab-delimited ASCII text format.

It is a three-column text file of the format:

<onset_time(sec)>\t<offset_time(sec)>\t<word>\n
<onset_time(sec)>\t<offset_time(sec)>\t<word>\n
...

where \t denotes a tab and \n denotes the end of the line. The < and > characters are not included. An example output file would look something like this:

0.000    5.223    word1
5.223    15.101   word2
15.101   20.334   word3

NOTE: the end-timestamp column is utilized only by the percentage-of-correct-segments metric. Therefore, omitting the second column is acceptable; this will degrade the performance of that metric only.
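
A minimal sketch of writing this format (the file name and helper are hypothetical):

# Sketch: write (onset, offset, word) tuples as tab-delimited text.
def write_alignment(path, alignment):
    with open(path, "w", encoding="utf-8") as f:
        for onset, offset, word in alignment:
            f.write("%.3f\t%.3f\t%s\n" % (onset, offset, word))

write_alignment("out.txt", [(0.000, 5.223, "word1"), (5.223, 15.101, "word2")])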

Command line calling format

The submitted algorithm must take as arguments a .wav file, a .txt file, and the full output path and filename of the output file. The ability to specify the output path and file name is essential. Denoting the input .wav file path and name as %input_audio, the lyrics .txt file as %input_txt, and the output file path and name as %output, a program called foobar could be called from the command line as follows:

foobar %input_audio %input_txt %output
foobar -i %input_audio -it %input_txt  -o %output


README File

A README file accompanying each submission should contain explicit instructions on how to run the program (as well as contact information, etc.). In particular, each command line to run should be specified, using %input_audio and %input_txt for the input files and %output for the resulting text file.

Packaging submissions

Please provide submissions as a binary or source code.

Time and hardware limits

Due to the potentially high number of participants in this and other audio tasks, hard limits on the runtime of submissions will be imposed. A hard limit of 24 hours will be imposed on analysis times. Submissions exceeding this limit may not receive a result.


Submission opening date

Submission closing date

Bibliography

Chang, S., & Lee, K. (2017). Lyrics-to-Audio Alignment by Unsupervised Discovery of Repetitive Patterns in Vowel Acoustics. arXiv preprint arXiv:1701.06078.

Dzhambazov, G. (2017). Knowledge-based probabilistic modeling for tracking lyrics in music audio signals. Ph.D. thesis.

Kruspe, A. (2016). Bootstrapping a System for Phoneme Recognition and Keyword Spotting in Unaccompanied Singing. In Proceedings of ISMIR 2016.

Mesaros, A. (2013). Singing voice identification and lyrics transcription for music information retrieval (invited paper). 2013 7th Conference on Speech Technology and Human-Computer Dialogue (SpeD), 1-10.

Fujihara, H., & Goto, M. (2012). Lyrics-to-audio alignment and its application. In Dagstuhl Follow-Ups (Vol. 3). Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik.

Mauch, M., Fujihara, H., & Goto, M. (2012). Integrating additional chord information into HMM-based lyrics-to-audio alignment. IEEE Transactions on Audio, Speech, and Language Processing, 20(1), 200-210.