This is done using a maximum entropy model (call it MAXENT).
Then, the remaining constituents are ordered using a second maximum entropy model (MAXENT2).
The maximum entropy models for both steps rely on the following features:
We use a maximum entropy classifier to predict translation errors by integrating a word posterior probability feature with linguistic features.
|Conclusions and Future Work|
In this paper, we have presented a maximum entropy based approach to automatically detect errors in translation hypotheses generated by SMT systems.
|Error Detection with a Maximum Entropy Model|
For classification, we employ the maximum entropy model (Berger et al., 1996) to predict whether a word w is correct or incorrect given its feature vector.
We integrate two sets of linguistic features into a maximum entropy (MaxEnt) model and develop a MaxEnt-based binary classifier to predict the category (correct or incorrect) for each word in a generated target sentence.
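To make the setup concrete, the following is a minimal sketch of such a MaxEnt-based binary word classifier, assuming scikit-learn's LogisticRegression as the maximum entropy learner (binary logistic regression is the two-class case of a MaxEnt model); the feature names and values (word posterior probability bin, POS tags, a lexicon flag) are purely illustrative and are not the exact feature set used in the cited work.

```python
# Sketch: word-level error detection as binary MaxEnt classification.
# Feature names/values below are illustrative, not the papers' actual features.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Each instance is one word of a translation hypothesis, described by a binned
# word posterior probability plus simple linguistic features.
train_features = [
    {"wpp_bin": "high", "pos": "NN", "prev_pos": "DT", "in_src_lexicon": True},
    {"wpp_bin": "low",  "pos": "VB", "prev_pos": "NN", "in_src_lexicon": False},
    {"wpp_bin": "mid",  "pos": "IN", "prev_pos": "VB", "in_src_lexicon": True},
]
train_labels = ["correct", "incorrect", "correct"]

vectorizer = DictVectorizer()
X = vectorizer.fit_transform(train_features)

# Binary logistic regression plays the role of the maximum entropy classifier.
clf = LogisticRegression(max_iter=1000)
clf.fit(X, train_labels)

# Predict correct/incorrect for each word of a new hypothesis.
test_features = [{"wpp_bin": "low", "pos": "NN", "prev_pos": "IN", "in_src_lexicon": False}]
print(clf.predict(vectorizer.transform(test_features)))
```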
We use a Maximum Entropy (Berger et al., 1996) classifier with a large number of boolean features, some of which are novel (e.g., the inclusion of words from WordNet definitions).
Maximum Entropy classifiers have been effective on a variety of NLP problems including preposition sense disambiguation (Ye and Baldwin, 2007), which is somewhat similar to noun compound interpretation.
The results for these runs using the Maximum Entropy classifier are presented in Table 4.
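As an illustration of how boolean features drawn from WordNet definitions might be extracted for a noun compound, the sketch below uses NLTK's WordNet interface; the feature scheme (one boolean per gloss word of the first noun sense, prefixed by modifier/head) is an assumption for illustration, not the exact feature set behind the reported runs.

```python
# Sketch: boolean WordNet-definition features for a noun compound (modifier, head).
from nltk.corpus import wordnet as wn  # assumes nltk with the WordNet corpus installed

def definition_word_features(noun):
    """One boolean feature per word in the noun's first-sense WordNet gloss."""
    synsets = wn.synsets(noun, pos=wn.NOUN)
    if not synsets:
        return {}
    gloss_words = set(synsets[0].definition().lower().split())
    return {f"def_word={w}": True for w in gloss_words}

def compound_features(modifier, head):
    feats = {}
    feats.update({f"mod_{k}": v for k, v in definition_word_features(modifier).items()})
    feats.update({f"head_{k}": v for k, v in definition_word_features(head).items()})
    return feats

# e.g., features for the compound "olive oil"
print(sorted(compound_features("olive", "oil")))
```

Such boolean feature dictionaries could then be fed to the same kind of MaxEnt classifier sketched earlier (e.g., via DictVectorizer and LogisticRegression).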