Index of papers in Proc. ACL 2009 that mention
  • MaxEnt
Huang, Fei
Alignment Link Confidence Measure
We combine the HMM alignment, the BM alignment and the MaxEnt alignment (ME) using the above link selection algorithm.
Alignment Link Confidence Measure
Figure 3 shows such an example, where alignment errors in the MaxEnt alignment are shown with dotted lines.
Improved MaxEnt Aligner with Confidence-based Link Filtering
In addition to the alignment combination, we also improve the performance of the MaxEnt aligner through confidence-based alignment link filtering.
Improved MaxEnt Aligner with Confidence-based Link Filtering
Here we select the MaxEnt aligner because it has
Introduction
In section 4 we show how to improve the quality of MaxEnt word alignment by removing low-confidence alignment links, which also leads to improved translation quality, as shown in section 5.
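As a rough illustration of the link-filtering idea in the snippet above, here is a minimal Python sketch; the function names, data structures, and threshold value are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of confidence-based link filtering (hypothetical API,
# not the paper's implementation).

def filter_alignment_links(links, link_confidence, threshold=0.5):
    """Keep only alignment links whose confidence meets the threshold.

    links           -- iterable of (source_index, target_index) pairs
    link_confidence -- function mapping a link to a confidence score
    threshold       -- links scoring below this value are removed (assumed value)
    """
    return [link for link in links if link_confidence(link) >= threshold]


# Toy usage: the confidence scores here are made up for illustration only.
example_links = [(0, 0), (1, 2), (3, 1)]
example_scores = {(0, 0): 0.92, (1, 2): 0.31, (3, 1): 0.77}
kept = filter_alignment_links(example_links, example_scores.get, threshold=0.5)
print(kept)  # [(0, 0), (3, 1)]
```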
Related Work
Regarding word alignment combination, in addition to the commonly used "intersection-union-refine" approach (Och and Ney, 2003), (Ayan and Dorr, 2006b) and (Ayan et al., 2005) combined alignment links from multiple word alignments based on a set of linguistic and alignment features within the MaxEnt framework or a neural net model.
Sentence Alignment Confidence Measure
HMM 54.72 -0.710
BM 62.53 -0.699
MaxEnt 69.26 -0.699
Sentence Alignment Confidence Measure
We randomly selected 512 Chinese-English (CE) sentence pairs and generated word alignment using the MaxEnt aligner (Ittycheriah and Roukos, 2005).
Sentence Alignment Confidence Measure
For each sentence pair in the CE test set, we calculate the confidence scores of the HMM alignment, the Block Model alignment and the MaxEnt alignment, then select the alignment with the highest confidence score.
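The selection step described above amounts to a per-sentence arg max over candidate alignments. A minimal sketch, assuming a dictionary of candidate alignments and a caller-supplied confidence function (the names and scores are hypothetical, not the paper's code):

```python
# Sketch of per-sentence alignment selection by confidence.

def select_best_alignment(candidate_alignments, confidence):
    """Given {aligner_name: alignment} for one sentence pair, return the
    (name, alignment) pair whose confidence score is highest."""
    return max(candidate_alignments.items(), key=lambda kv: confidence(kv[1]))


# Toy usage with made-up confidence scores attached to each alignment.
candidates = {
    "HMM":    {"links": [(0, 0), (1, 1)], "conf": 0.58},
    "BM":     {"links": [(0, 0), (1, 2)], "conf": 0.62},
    "MaxEnt": {"links": [(0, 0), (2, 1)], "conf": 0.74},
}
name, best = select_best_alignment(candidates, lambda a: a["conf"])
print(name, best["links"])  # MaxEnt [(0, 0), (2, 1)]
```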
Translation
We extract phrase translation tables from the baseline MaxEnt word alignment as well as the alignment with confidence-based link filtering, then translate the test set with each phrase translation table.
MaxEnt is mentioned in 12 sentences in this paper.
DeNero, John and Chiang, David and Knight, Kevin
Consensus Decoding Algorithms
The standard Viterbi decoding objective is to find e* = arg max_e λ · θ(f, e).
Consensus Decoding Algorithms
ê = arg max_e E_{P(e'|f)}[S(e; e')] = arg max_e Σ_{e'} P(e'|f) · S(e; e')
Consensus Decoding Algorithms
arg max_{e∈E} E_{P(e'|f)}[S(e; e')] = arg max_e Σ_{e'∈E} P(e'|f) · Σ_j ω_j(e) · φ_j(e')
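The two equations above describe consensus (MBR-style) decoding: choose the candidate that maximizes expected similarity to the other translations under the model posterior P(e'|f). A minimal n-best-list sketch in Python; the unigram-overlap similarity and the posterior values are stand-ins for illustration, not the paper's forest-based method or its similarity measure:

```python
# Minimal MBR-over-n-best sketch: choose argmax_e sum_{e'} P(e'|f) * S(e, e').
from collections import Counter


def unigram_overlap(e, e_prime):
    """Count of shared unigram tokens (multiset intersection) -- a toy S(e, e')."""
    ce, cp = Counter(e.split()), Counter(e_prime.split())
    return sum(min(ce[w], cp[w]) for w in ce)


def mbr_decode(nbest):
    """nbest: list of (translation, posterior P(e|f)) pairs."""
    def expected_similarity(e):
        return sum(p * unigram_overlap(e, e_prime) for e_prime, p in nbest)
    return max((e for e, _ in nbest), key=expected_similarity)


# Toy usage with made-up posteriors.
nbest = [("the cat sat", 0.5), ("a cat sat", 0.3), ("the cat sits", 0.2)]
print(mbr_decode(nbest))  # "the cat sat"
```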
MaxEnt is mentioned in 6 sentences in this paper.