Conclusion and Future Work | In this paper, we explored only the Dice coefficient of n-grams and symmetrical sentence-level BLEU as similarity measures. |
Conclusion and Future Work | In the future, we will explore other consensus features and other similarity measures, which may take into consideration document-level, syntactic, or semantic information. |
Experiments and Results | Instead of using graph-based consensus confidence as features in the log-linear model, we perform structured label propagation (Struct-LP) to re-rank the n-best list directly; the similarity measure for both source sentences and translation candidates is symmetrical sentence-level BLEU (equation (10)). |
Features and Training | Tl(e, e') is the propagation probability in equation (8), with the similarity measure Sim(e, e') defined as the Dice coefficient over the set of all n-grams in e and the set of all n-grams in e'. |
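The Dice coefficient over n-gram sets can be sketched as follows. This is a minimal illustration, not the paper's implementation; the choice of maximum n-gram order (`max_n=4` here) is an assumption, as the text does not specify it.

```python
def ngrams(tokens, n):
    """Return the set of n-grams (as tuples) in a token sequence."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def dice_ngram_similarity(e, e_prime, max_n=4):
    """Dice coefficient over the union of 1..max_n n-gram sets of two
    tokenized sentences: 2 * |A & B| / (|A| + |B|).
    max_n=4 is an assumed default, not taken from the paper."""
    a = set().union(*(ngrams(e, n) for n in range(1, max_n + 1)))
    b = set().union(*(ngrams(e_prime, n) for n in range(1, max_n + 1)))
    if not a and not b:
        return 1.0  # two empty sentences are trivially identical
    return 2 * len(a & b) / (len(a) + len(b))
```

Identical sentences score 1.0 and sentences sharing no n-grams score 0.0, matching the usual range of a similarity measure.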
Features and Training | Ts(f, f'), defined in equation (3), takes symmetrical sentence-level BLEU as the similarity measure: |
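A symmetrical sentence-level BLEU can be sketched as the average of sentence-level BLEU computed in both directions. This is a hedged sketch, not the paper's equation (10): the add-one smoothing of n-gram precisions and the brevity-penalty form below are assumptions made to keep the sentence-level score well-defined.

```python
import math
from collections import Counter

def ngram_counts(tokens, n):
    """Multiset of n-grams (as tuples) in a token sequence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def sentence_bleu(hyp, ref, max_n=4):
    """Sentence-level BLEU with add-one smoothed n-gram precisions
    (smoothing choice is an assumption, not from the paper)."""
    log_prec = 0.0
    for n in range(1, max_n + 1):
        h, r = ngram_counts(hyp, n), ngram_counts(ref, n)
        match = sum(min(c, r[g]) for g, c in h.items())  # clipped matches
        total = sum(h.values())
        log_prec += math.log((match + 1) / (total + 1))
    # standard brevity penalty, guarded against an empty hypothesis
    bp = min(1.0, math.exp(1 - len(ref) / max(len(hyp), 1)))
    return bp * math.exp(log_prec / max_n)

def symmetrical_bleu(e, e_prime):
    """Symmetrize by averaging BLEU in both directions."""
    return 0.5 * (sentence_bleu(e, e_prime) + sentence_bleu(e_prime, e))
```

Averaging the two directions makes the measure symmetric, so it can serve as an undirected edge weight in the graph.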
Features and Training | In theory, we could use other similarity measures, such as edit distance or string kernels. |
Graph-based Structured Learning | wij defines the weight of the edge, which is a similarity measure between nodes i and j. |
Graph-based Structured Learning | Propagation probability Ts(f, f') is as defined in equation (3), and Tl(e, e') is defined given some similarity measure sim(e, e') between labels e and e'. |
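A common way to turn pairwise similarities into propagation probabilities is row normalization, and the following sketch illustrates that construction. It is an assumption that the paper's Tl(e, e') is normalized this way (over all other labels, excluding self-similarity); the function `sim` stands for any similarity measure, such as the Dice coefficient or symmetrical sentence-level BLEU discussed in the text.

```python
def propagation_probabilities(labels, sim):
    """Row-normalize pairwise similarities into propagation probabilities:
    T(e, e') = sim(e, e') / sum over e'' != e of sim(e, e'').
    The normalization over all other labels is an illustrative assumption."""
    T = {}
    for e in labels:
        z = sum(sim(e, x) for x in labels if x != e)
        T[e] = {x: (sim(e, x) / z if z > 0 else 0.0)
                for x in labels if x != e}
    return T
```

With this construction, each row of T sums to 1 (when at least one similarity is positive), so T can be used directly as a transition matrix in label propagation.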
Discussion and Future Work | 5.2 Fractional Similarity Measures |
Discussion and Future Work | In contrast, the linear-programming-based TESLA metric allows fractional similarity measures between 0 (completely unrelated) and 1 (exact synonyms). |
Discussion and Future Work | Supporting fractional similarity measures is nontrivial in the TESLA-CELAB framework. |