Index of papers in Proc. ACL 2012 that mention
  • maximum entropy
Meng, Xinfan and Wei, Furu and Liu, Xiaohua and Zhou, Ming and Xu, Ge and Wang, Houfeng
Experiment
Following the description in (Lu et al., 2011), we remove neutral sentences and keep only high-confidence positive and negative sentences, as predicted by a maximum entropy classifier trained on the labeled data.
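A minimal sketch of such a filtering step, assuming scikit-learn's LogisticRegression as the maximum entropy classifier and a hypothetical confidence threshold of 0.8 (neither the toolkit nor the threshold is specified in the excerpt):

    # Sketch: keep only high-confidence positive/negative sentences.
    # LogisticRegression is a maximum entropy classifier; the 0.8
    # threshold and bag-of-words features are illustrative assumptions.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    def filter_high_confidence(labeled_texts, labels, unlabeled_texts, threshold=0.8):
        vec = CountVectorizer()
        X = vec.fit_transform(labeled_texts)
        clf = LogisticRegression(max_iter=1000).fit(X, labels)
        probs = clf.predict_proba(vec.transform(unlabeled_texts))
        kept = []
        for text, p in zip(unlabeled_texts, probs):
            if p.max() >= threshold:  # drop low-confidence (near-neutral) sentences
                kept.append((text, clf.classes_[p.argmax()]))
        return kept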
Experiment
This model uses English labeled data and Chinese labeled data to obtain initial parameters for two maximum entropy classifiers (one for English documents, one for Chinese documents), and then conducts EM iterations to update the parameters, gradually improving the agreement of the two monolingual classifiers on the unlabeled parallel sentences.
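A simplified sketch of one way to realize such agreement-driven updates: hard self-training on parallel sentence pairs where the two classifiers currently agree. The actual model performs soft EM parameter updates rather than this hard loop, and the scikit-learn-style classifiers and shared positive/negative label set are assumptions:

    # Sketch of agreement-driven retraining over parallel data.
    # clf_en / clf_zh are fitted classifiers; X_*_lab / y_* are labeled
    # data; X_*_par are the two sides of the unlabeled parallel corpus.
    import numpy as np
    import scipy.sparse as sp

    def agreement_iterations(clf_en, clf_zh, X_en_lab, y_en, X_zh_lab, y_zh,
                             X_en_par, X_zh_par, n_iters=5):
        for _ in range(n_iters):
            pred_en = clf_en.predict(X_en_par)
            pred_zh = clf_zh.predict(X_zh_par)
            agree = pred_en == pred_zh  # parallel pairs the classifiers agree on
            clf_en.fit(sp.vstack([X_en_lab, X_en_par[agree]]),
                       np.concatenate([y_en, pred_en[agree]]))
            clf_zh.fit(sp.vstack([X_zh_lab, X_zh_par[agree]]),
                       np.concatenate([y_zh, pred_zh[agree]]))
        return clf_en, clf_zh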
Related Work
Pang et al. (2002) compare the performance of three commonly used machine learning models (Naive Bayes, Maximum Entropy, and SVM).
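For illustration, a hedged sketch of such a three-way comparison using common scikit-learn stand-ins (MultinomialNB for Naive Bayes, LogisticRegression for maximum entropy, LinearSVC for SVM); the data, features, and cross-validation setup are placeholders, not the original experiment:

    # Sketch: compare Naive Bayes, maximum entropy, and SVM classifiers.
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import cross_val_score

    def compare_models(X, y):
        models = {
            "Naive Bayes": MultinomialNB(),
            "Maximum Entropy": LogisticRegression(max_iter=1000),
            "SVM": LinearSVC(),
        }
        for name, model in models.items():
            scores = cross_val_score(model, X, y, cv=5)
            print(f"{name}: {scores.mean():.3f}")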
Related Work
They propose a method of training two classifiers based on maximum entropy formulation to maximize their prediction agreement on the parallel corpus.
maximum entropy is mentioned in 4 sentences in this paper.
Xiong, Deyi and Zhang, Min and Li, Haizhou
Argument Reordering Model
After all features are extracted, we use the maximum entropy toolkit in Section 3.3 to train the maximum entropy classifier as formulated in Eq.
Predicate Translation Model
The essential component of our model is a maximum entropy classifier pt(e|C(v)) that predicts the target translation e for a verbal predicate v given its surrounding context C(v).
Predicate Translation Model
This will increase the number of classes to be predicted by the maximum entropy classifier.
Predicate Translation Model
Using these events, we train one maximum entropy classifier per verbal predicate (16,121 verbs in total) via the off-the-shelf MaxEnt toolkit.
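A rough sketch of this per-predicate setup, with scikit-learn's LogisticRegression standing in for the MaxEnt toolkit; the event representation (verb, context-feature dict, observed translation) is an assumed format, not the paper's:

    # Sketch: train one maximum entropy classifier pt(e|C(v)) per verb.
    from collections import defaultdict
    from sklearn.feature_extraction import DictVectorizer
    from sklearn.linear_model import LogisticRegression

    def train_per_predicate(events):
        """events: iterable of (verb, context_feature_dict, target_translation)."""
        by_verb = defaultdict(lambda: ([], []))
        for verb, context, translation in events:
            feats, targets = by_verb[verb]
            feats.append(context)
            targets.append(translation)
        classifiers = {}
        for verb, (feats, targets) in by_verb.items():
            if len(set(targets)) < 2:  # need at least two translation classes
                continue
            vec = DictVectorizer()
            clf = LogisticRegression(max_iter=1000).fit(vec.fit_transform(feats), targets)
            classifiers[verb] = (vec, clf)  # predicts e given context features C(v)
        return classifiers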
maximum entropy is mentioned in 4 sentences in this paper.
Zweig, Geoffrey and Platt, John C. and Meek, Christopher and Burges, Christopher J.C. and Yessenalina, Ainur and Liu, Qiang
Sentence Completion via Language Modeling
3.2 Maximum Entropy Class-Based N-gram Language Model
Sentence Completion via Language Modeling
The key ideas are the modeling of word n-gram probabilities with a maximum entropy model, and the use of word-class information in the definition of the features.
Sentence Completion via Language Modeling
Both components are themselves maximum entropy n-gram models in which the probability of a word or class label l given history h is determined by (1/Z(h)) exp(Σ_k λ_k f_k(h, l)). The features f_k(h, l) used are the presence of various patterns in the concatenation hl, for example whether a particular suffix is present in hl.
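An illustrative sketch of this probability computation, P(l|h) = exp(Σ_k λ_k f_k(h, l)) / Z(h), with toy suffix and last-word pattern features; the feature set and weights here are placeholders, not the paper's trained model:

    # Sketch: maximum entropy n-gram probability with pattern features
    # over the concatenation of history h and label l.
    import math

    def maxent_prob(history, label, vocab, weights, suffixes=("ing", "ed", "s")):
        def features(h, l):
            hl = h + " " + l
            feats = [f"suffix:{s}" for s in suffixes if hl.endswith(s)]
            last = h.split()[-1] if h.split() else "<s>"
            feats.append(f"bigram:{last}_{l}")  # last history word + label
            return feats

        def score(l):
            return sum(weights.get(f, 0.0) for f in features(history, l))

        z = sum(math.exp(score(l)) for l in vocab)  # normalizer Z(h) over the vocabulary
        return math.exp(score(label)) / z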
maximum entropy is mentioned in 3 sentences in this paper.