Conclusion and related work |
| Systems | PTB uas | PTB compl | CTB uas | CTB compl | s |
|         | 91.77   | 45.29     | 84.54   | 33.75     | 221 |
|         | 92.29   | 46.28     | 85.11   | 34.62     | 124 |
|         | 92.50   | 46.82     | 85.62   | 37.11     | 71  |
|         | 92.74   | 48.12     | 86.00   | 35.87     | 39  |
Conclusion and related work | ‘uas’ and ‘compl’ denote unlabeled score and complete match rate, respectively (all excluding punctuation).
Experiments | In particular, we achieve 86.33% uas on CTB, a 1.54% uas improvement over the greedy baseline parser.
Experiments | UAS: unlabeled attachment score, LAS: labeled attachment score.
Experiments |
| Approach | UAS | LAS | Time |
| Zhang and Clark (2008) | 92.1 | | |
Evaluation | For the analysis of the results, we then measured the effectiveness of the constraints using two derived variables: the Collective Funniness (CF) of a message is its mean funniness, while its Upper Agreement (UA(t)) is the fraction of funniness scores greater than or equal to a given threshold t.
Evaluation | To rank the generated messages, we take the product of Collective Funniness and Upper Agreement UA(3), and call it the overall Humor Effectiveness (HE).
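Evaluation | For concreteness, here is a minimal sketch of these derived variables; the function names and the example ratings are illustrative assumptions, not from the original system, and funniness scores are assumed to be numeric ratings on a small integer scale:

```python
def collective_funniness(scores):
    """Collective Funniness (CF): the mean funniness of a message."""
    return sum(scores) / len(scores)

def upper_agreement(scores, t):
    """Upper Agreement UA(t): fraction of scores >= threshold t."""
    return sum(1 for s in scores if s >= t) / len(scores)

def humor_effectiveness(scores):
    """Humor Effectiveness (HE): the product CF * UA(3)."""
    return collective_funniness(scores) * upper_agreement(scores, 3)

# Hypothetical ratings collected for one generated message:
ratings = [1, 3, 4, 2, 5, 3]
print(humor_effectiveness(ratings))  # CF = 3.0, UA(3) = 4/6, so HE = 2.0
```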
Evaluation | The Upper Agreement UA(4) increases from 0.18 to 0.36 and to 0.43, respectively.
Experimental Assessment |
| parser | iter | UAS | LAS | UEM |
| arc-standard | 23 | 90.02 | 87.69 | 38.33 |
| arc-eager | 12 | 90.18 | 87.83 | 40.02 |
| this work | 30 | 91.33 | 89.16 | 42.38 |
| arc-standard + easy-first | 21 | 90.49 | 88.22 | 39.61 |
| arc-standard + spine | 27 | 90.44 | 88.23 | 40.27 |
Experimental Assessment | Table 2: Accuracy on the test set, excluding punctuation, for unlabeled attachment score (UAS), labeled attachment score (LAS), and unlabeled exact match (UEM).
Experimental Assessment | Considering UAS, our parser provides an improvement of 1.15 over the arc-eager parser and an improvement of 1.31 over the arc-standard parser, corresponding to error reductions of roughly 12% and 13%, respectively.
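Experimental Assessment | As a check on these figures, the relative error reduction follows directly from the UAS numbers in Table 2, taking the error rate to be 100 minus UAS (a worked instance of the standard definition):

$$\text{error reduction} = \frac{\Delta\,\text{UAS}}{100 - \text{UAS}_{\text{baseline}}}, \qquad \frac{1.15}{100 - 90.18} \approx 11.7\%, \qquad \frac{1.31}{100 - 90.02} \approx 13.1\%.$$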