Difference between revisions of "2018:Patterns for Prediction Results"

 
'''Figure 2.''' Establishment precision averaged over each piece/movement. Establishment precision answers the following question. On average, how similar is the most similar ground-truth pattern prototype to an algorithm-output pattern?
  
[[File:2017_Mono_F1_est.png|600px]]
 
 
'''Figure 3.''' Establishment F1 averaged over each piece/movement. Establishment F1 is an average of establishment precision and establishment recall.
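
The establishment metrics in Figures 1-3 can be read off a single matrix of pairwise similarities between ground-truth pattern prototypes and algorithm-output patterns. The following minimal sketch (illustrative only, not the official evaluation code) assumes such a matrix has already been computed with some similarity measure in <math>[0, 1]</math>, and takes the F1 to be the harmonic mean of precision and recall.

<pre>
# Illustrative sketch of the establishment metrics; not the official MIREX
# evaluation code. sim[i][j] is assumed to hold the similarity, in [0, 1],
# between ground-truth prototype i and algorithm-output pattern j.

def establishment_scores(sim):
    n_gt = len(sim)        # number of ground-truth pattern prototypes
    n_alg = len(sim[0])    # number of algorithm-output patterns

    # Recall: for each ground-truth prototype, how similar is the most
    # similar algorithm-output pattern? Average the row maxima.
    recall = sum(max(row) for row in sim) / n_gt

    # Precision: for each algorithm-output pattern, how similar is the most
    # similar ground-truth prototype? Average the column maxima.
    precision = sum(max(sim[i][j] for i in range(n_gt)) for j in range(n_alg)) / n_alg

    # F1: here taken as the harmonic mean of precision and recall.
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
</pre>

For instance, with <code>sim = [[1.0, 0.2], [0.5, 0.9]]</code> the sketch returns precision, recall and F1 of 0.95.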
 
 
[[File:2017_Mono_R_occ_75.png|600px]]
 
 
'''Figure 4.''' Occurrence recall (<math>c = .75</math>) averaged over each piece/movement. Occurrence recall answers the following question. On average, how similar is the most similar set of algorithm-output pattern occurrences to a discovered ground-truth occurrence set?
 
 
[[File:2017_Mono_P_occ_75.png|600px]]
 
 
'''Figure 5.''' Occurrence precision (<math>c = .75</math>) averaged over each piece/movement. Occurrence precision answers the following question. On average, how similar is the most similar discovered ground-truth occurrence set to a set of algorithm-output pattern occurrences?
 
 
[[File:2017_Mono_F1_occ75.png|600px]]
 
 
'''Figure 6.''' Occurrence F1 (<math>c = .75</math>) averaged over each piece/movement. Occurrence F1 is an average of occurrence precision and occurrence recall.
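
The threshold <math>c</math> controls which ground-truth patterns enter the occurrence metrics. The outline below is a hedged sketch, not the official evaluation code: it assumes a ground-truth pattern counts as ''discovered'' when some algorithm-output pattern establishes it with similarity at least <math>c</math>, and it leaves the two similarity helpers (<code>est_similarity</code> and <code>occ_set_similarity</code>, both hypothetical names) unspecified. Occurrence precision mirrors the same computation with the roles of ground truth and algorithm output exchanged.

<pre>
# Rough outline of occurrence recall at threshold c; not the official MIREX
# evaluation code. Each pattern is assumed to be a pair (prototype, occurrences);
# est_similarity and occ_set_similarity are assumed helpers returning values in [0, 1].

def occurrence_recall(gt_patterns, alg_patterns, est_similarity, occ_set_similarity, c=0.75):
    # Assumption: a ground-truth pattern is "discovered" if its best
    # establishment score against the algorithm output is at least c.
    discovered = [
        (proto, occs) for proto, occs in gt_patterns
        if max(est_similarity(proto, q_proto) for q_proto, _ in alg_patterns) >= c
    ]
    if not discovered:
        return 0.0
    # For each discovered ground-truth occurrence set, take the most similar
    # algorithm-output occurrence set, then average over discovered patterns.
    return sum(
        max(occ_set_similarity(occs, q_occs) for _, q_occs in alg_patterns)
        for _, occs in discovered
    ) / len(discovered)
</pre>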
 
 
[[File:2017_Mono_R3.png|600px]]
 
 
'''Figure 7.''' Three-layer recall averaged over each piece/movement. Rather than using <math>|P \cap Q|/\max\{|P|, |Q|\}</math> as a similarity measure (which is the default for establishment recall), three-layer recall uses <math>2|P \cap Q|/(|P| + |Q|)</math>, which is a kind of F1 measure.
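
Both set-similarity measures mentioned in this and the following captions treat a pattern as a set of notes, e.g. (onset, pitch) pairs. A minimal sketch of the two, with illustrative function names:

<pre>
# The two note-set similarity measures referred to in the captions.
# P and Q are patterns represented as sets of notes, e.g. (onset, pitch) pairs.

def cardinality_score(P, Q):
    # |P ∩ Q| / max(|P|, |Q|): the default measure for the establishment metrics.
    return len(P & Q) / max(len(P), len(Q))

def f1_style_score(P, Q):
    # 2|P ∩ Q| / (|P| + |Q|): the measure used by the three-layer metrics
    # (a kind of F1, also known as the Dice coefficient).
    return 2 * len(P & Q) / (len(P) + len(Q))

# Example: a four-note pattern and a six-note pattern sharing three notes.
P = {(0, 60), (1, 62), (2, 64), (3, 65)}
Q = {(1, 62), (2, 64), (3, 65), (4, 67), (5, 69), (6, 71)}
print(cardinality_score(P, Q))  # 3 / 6 = 0.5
print(f1_style_score(P, Q))     # 6 / 10 = 0.6
</pre>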
 
 
[[File:2017_Mono_P3.png|600px]]
 
 
'''Figure 8.''' Three-layer precision averaged over each piece/movement. Rather than using <math>|P \cap Q|/\max\{|P|, |Q|\}</math> as a similarity measure (which is the default for establishment precision), three-layer precision uses <math>2|P \cap Q|/(|P| + |Q|)</math>, which is a kind of F1 measure.
 
 
[[File:2017_Mono_TLF1.png|600px]]
 
 
'''Figure 9.''' Three-layer F1 (TLF) averaged over each piece/movement. TLF is an average of three-layer precision and three-layer recall.
 
 
[[File:Mono_Coverage.png|600px]]
 
 
'''Figure 10.''' Coverage of the discovered patterns of each piece/movement. Coverage measures the fraction of notes of a piece covered by discovered patterns.
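
Coverage is the simplest of the reported quantities: the proportion of a piece's notes that fall inside at least one discovered pattern occurrence. A minimal sketch with illustrative names (not the official evaluation code):

<pre>
# Coverage: fraction of the piece's notes that belong to at least one
# discovered pattern occurrence. Illustrative sketch only.

def coverage(piece_notes, pattern_occurrences):
    # piece_notes: set of all notes in the piece, e.g. (onset, pitch) pairs.
    # pattern_occurrences: iterable of note sets, one per pattern occurrence.
    covered = set()
    for occurrence in pattern_occurrences:
        covered |= occurrence
    return len(covered & piece_notes) / len(piece_notes)
</pre>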
 
 
[[File:2017_Mono_LC.png|600px]]
 
 
'''Figure 11.''' Lossless compression achieved by representing each piece/movement in terms of patterns discovered by a given algorithm. In addition to the patterns and their repetitions, the uncovered notes are also encoded, so that the complete piece can be reconstructed from the compressed representation.
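
One way to realise such a lossless encoding, sketched below purely for illustration, is to store each pattern prototype once, one translation vector per further occurrence, and the uncovered notes verbatim, and to report the ratio of the original note count to the size of this encoding. This is an assumption about the encoding; the exact scheme behind the figure may differ.

<pre>
# Illustrative compression-ratio sketch; the encoding actually used for this
# figure may differ. Assumption: each prototype is stored in full, each further
# occurrence costs one translation vector, and uncovered notes are stored
# verbatim, so the piece can be reconstructed exactly.

def compression_ratio(piece_notes, patterns):
    # piece_notes: set of all notes; patterns: list of occurrence lists,
    # where each occurrence is a set of notes.
    covered = set()
    encoded_size = 0
    for occurrences in patterns:
        prototype = occurrences[0]
        encoded_size += len(prototype)        # prototype stored in full
        encoded_size += len(occurrences) - 1  # one vector per extra occurrence
        for occ in occurrences:
            covered |= occ
    encoded_size += len(piece_notes - covered)  # residual notes stored verbatim
    return len(piece_notes) / encoded_size
</pre>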
 
  
 
===SymPoly===
  
 
'''Figure 13.''' Establishment precision averaged over each piece/movement. Establishment precision answers the following question. On average, how similar is the most similar ground-truth pattern prototype to an algorithm-output pattern?
 
[[File:2017_Poly_F1_est.png|600px]]
 
 
'''Figure 14.''' Establishment F1 averaged over each piece/movement. Establishment F1 is an average of establishment precision and establishment recall.
 
 
[[File:2017_Poly_R_occ_75.png|600px]]
 
 
'''Figure 15.''' Occurrence recall (<math>c = .75</math>) averaged over each piece/movement. Occurrence recall answers the following question. On average, how similar is the most similar set of algorithm-output pattern occurrences to a discovered ground-truth occurrence set?
 
 
[[File:2017_Poly_P_occ_75.png|600px]]
 
 
'''Figure 16.''' Occurrence precision (<math>c = .75</math>) averaged over each piece/movement. Occurrence precision answers the following question. On average, how similar is the most similar discovered ground-truth occurrence set to a set of algorithm-output pattern occurrences?
 
 
[[File:2017_Poly_F1_occ_75.png|600px]]
 
 
'''Figure 17.''' Occurrence F1 (<math>c = .75</math>) averaged over each piece/movement. Occurrence F1 is an average of occurrence precision and occurrence recall.
 
 
[[File:2017_Poly_R3.png|600px]]
 
 
'''Figure 18.''' Three-layer recall averaged over each piece/movement. Rather than using <math>|P \cap Q|/\max\{|P|, |Q|\}</math> as a similarity measure (which is the default for establishment recall), three-layer recall uses <math>2|P \cap Q|/(|P| + |Q|)</math>, which is a kind of F1 measure.
 
 
[[File:2017_Poly_P3.png|600px]]
 
 
'''Figure 19.''' Three-layer precision averaged over each piece/movement. Rather than using <math>|P \cap Q|/\max\{|P|, |Q|\}</math> as a similarity measure (which is the default for establishment precision), three-layer precision uses <math>2|P \cap Q|/(|P| + |Q|)</math>, which is a kind of F1 measure.
 
 
[[File:2017_Poly_TLF1.png|600px]]
 
 
'''Figure 20.''' Three-layer F1 (TLF) averaged over each piece/movement. TLF is an average of three-layer precision and three-layer recall.
 
 
[[File:2017_Poly_Coverage.png|600px]]
 
 
'''Figure 21.''' Coverage of the discovered patterns of each piece/movement. Coverage measures the fraction of notes of a piece covered by discovered patterns.
 
 
[[File:2017_Poly_LC.png|600px]]
 
 
'''Figure 22.''' Lossless compression achieved by representing each piece/movement in terms of patterns discovered by a given algorithm. In addition to the patterns and their repetitions, the uncovered notes are also encoded, so that the complete piece can be reconstructed from the compressed representation.
 
  
 
==Tables==

Revision as of 09:16, 18 September 2018

==Introduction==

THIS PAGE IS UNDER CONSTRUCTION!

The task: ...

===Contribution===

...

For a more detailed introduction to the task, please see [[2018:Patterns for Prediction]].

==Training and Test Datasets==

...


{| class="wikitable"
! Sub code !! Submission name !! Abstract !! Contributors
|-
! colspan="4" | Task Version symMono
|-
| EN1 || Algo name here || PDF || Eric Nichols
|-
| FC1 || Algo name here || PDF || Florian Colombo
|-
| MM || Markov model || N/A || Intended as 'baseline'
|-
! colspan="4" | Task Version symPoly
|-
| FC1 || Algo name here || PDF || Florian Colombo
|-
| MM || Markov model || N/A || Intended as 'baseline'
|}

'''Table 1.''' Algorithms submitted to Patterns for Prediction 2018.

==Results==

An intro spiel here...

(For mathematical definitions of the various metrics, please see [[2018:Patterns_for_Prediction#Evaluation_Procedure]].)

===SymMono===

Here are some results (cf. Figures 1-3), and some interpretation. Don't forget these as well (Figures 4-6), showing something.

Remarks on runtime appropriate here too.

===SymPoly===

And so on.

==Discussion==

...

Berit Janssen, Iris Ren, Tom Collins.

==Figures==

===SymMono===

[[File:2017_Mono_R_est.png|600px]]

'''Figure 1.''' Establishment recall averaged over each piece/movement. Establishment recall answers the following question. On average, how similar is the most similar algorithm-output pattern to a ground-truth pattern prototype?

[[File:2017_Mono_P_est.png|600px]]

'''Figure 2.''' Establishment precision averaged over each piece/movement. Establishment precision answers the following question. On average, how similar is the most similar ground-truth pattern prototype to an algorithm-output pattern?


===SymPoly===

[[File:2017_Poly_R_est.png|600px]]

'''Figure 12.''' Establishment recall averaged over each piece/movement. Establishment recall answers the following question. On average, how similar is the most similar algorithm-output pattern to a ground-truth pattern prototype?

[[File:2017_Poly_P_est.png|600px]]

'''Figure 13.''' Establishment precision averaged over each piece/movement. Establishment precision answers the following question. On average, how similar is the most similar ground-truth pattern prototype to an algorithm-output pattern?

==Tables==

===SymMono===

Click to download SymMono pattern retrieval results table

Click to download SymMono compression results table

===SymPoly===

Click to download SymPoly pattern retrieval results table

Click to download SymPoly compression results table