Index of papers in Proc. ACL 2013 that mention
  • maximum entropy
Liu, Yang
Abstract
To resolve conflicts in shift-reduce parsing, we propose a maximum entropy model trained on the derivation graph of training data.
Introduction
3 A Maximum Entropy Based Shift-Reduce Parsing Model
Introduction
We propose a maximum entropy model to resolve the conflicts for “h+h”:
Introduction
1. relative frequencies in two directions; 2. lexical weights in two directions; 3. phrase penalty; 4. distance-based reordering model; 5. lexicalized reordering model; 6. n-gram language model; 7. word penalty; 8. ill-formed structure penalty; 9. dependency language model; 10. maximum entropy parsing model.
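These ten feature functions are combined in the usual log-linear fashion; as a hedged sketch (the notation below is assumed, not quoted from the paper), the decoder picks the derivation d of a source sentence f that maximizes a weighted feature sum:

    \hat{d} = \underset{d}{\arg\max} \; \sum_{m=1}^{10} \lambda_m h_m(d, f)

where h_m ranges over the features listed above (the maximum entropy parsing model being one of them) and each \lambda_m is a tuned weight.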
maximum entropy is mentioned in 10 sentences in this paper.
Zhai, Feifei and Zhang, Jiajun and Zhou, Yu and Zong, Chengqing
Abstract
Then we propose two novel methods to handle the two PAS ambiguities for SMT accordingly: 1) inside context integration; 2) a novel maximum entropy PAS disambiguation (MEPD) model.
Conclusion and Future Work
For the MEPD model, we design a maximum entropy model for each ambiguous source-side PAS.
Introduction
As to the role ambiguity, we design a novel maximum entropy PAS disambiguation (MEPD) model to combine various context features, such as context words of PAS.
Maximum Entropy PAS Disambiguation (MEPD) Model
In order to handle the role ambiguities, in this section, we concentrate on utilizing a maximum entropy model to incorporate the context information for PAS disambiguation.
Maximum Entropy PAS Disambiguation (MEPD) Model
The maximum entropy model is the classical way to handle this problem:
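The equation that follows this colon is not reproduced in the excerpt; as an assumed, standard form of a conditional maximum entropy model over a label y given context x:

    p(y \mid x) = \frac{\exp\left( \sum_i \lambda_i h_i(x, y) \right)}{\sum_{y'} \exp\left( \sum_i \lambda_i h_i(x, y') \right)}

where the h_i are feature functions over the context and candidate label, and the \lambda_i are the learned weights.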
Maximum Entropy PAS Disambiguation (MEPD) Model
We train a maximum entropy classifier for each Sp via the off-the-shelf MaxEnt toolkit.
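As a rough, hypothetical illustration of this per-predicate setup (not the authors' code: scikit-learn's LogisticRegression stands in for the MaxEnt toolkit, and the predicates, features, and labels below are invented):

    # Hypothetical sketch: one maximum entropy classifier per source-side predicate Sp.
    from sklearn.feature_extraction import DictVectorizer
    from sklearn.linear_model import LogisticRegression

    # Invented toy data: per predicate, (context-feature dict, target PAS label) pairs.
    training_data = {
        "predicate_1": [({"left_word": "w1", "right_pos": "NN"}, "PAS_A"),
                        ({"left_word": "w2", "right_pos": "VV"}, "PAS_B")],
    }

    classifiers = {}
    for predicate, examples in training_data.items():
        feats, labels = zip(*examples)
        vec = DictVectorizer()
        X = vec.fit_transform(feats)
        # Multinomial logistic regression is equivalent to a maximum entropy classifier.
        clf = LogisticRegression(max_iter=1000)
        clf.fit(X, list(labels))
        classifiers[predicate] = (vec, clf)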
PAS-based Translation Framework
Thus to overcome this problem, we design two novel methods to cope with the PAS ambiguities: inside-context integration and a maximum entropy PAS disambiguation (MEPD) model.
Related Work
(2010) designed maximum entropy (ME) classifiers to do better rule selection for the hierarchical phrase-based model and the tree-to-string model, respectively.
maximum entropy is mentioned in 9 sentences in this paper.
Metallinou, Angeliki and Bohus, Dan and Williams, Jason
Generative state tracking
In this work we will use maximum entropy models.
Generative state tracking
4.1 Maximum entropy models
Generative state tracking
The maximum entropy framework (Berger et al., 1996)
maximum entropy is mentioned in 7 sentences in this paper.
Goto, Isao and Utiyama, Masao and Sumita, Eiichiro and Tamura, Akihiro and Kurohashi, Sadao
Experiment
The L-BFGS method (Liu and Nocedal, 1989) was used to estimate the weight parameters of maximum entropy models.
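As background (a standard identity, not quoted from the paper), L-BFGS needs the gradient of the maximum entropy log-likelihood, which for each weight \lambda_i is the observed feature count minus the expected feature count under the model:

    \frac{\partial \mathcal{L}}{\partial \lambda_i} = \sum_{(x, y)} h_i(x, y) - \sum_{x} \sum_{y'} p_{\lambda}(y' \mid x)\, h_i(x, y')

with an extra -\lambda_i / \sigma^2 term when a Gaussian prior is used, as in the next excerpt.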
Experiment
The maximum entropy method with Gaussian prior smoothing was used to estimate the model parameters.
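A minimal runnable sketch of this kind of estimation, assuming a multiclass maximum entropy model, an L2 penalty playing the role of the Gaussian prior, and SciPy's L-BFGS-B optimizer in place of whatever toolkit the authors actually used (all data below is invented):

    import numpy as np
    from scipy.optimize import minimize

    # Invented toy data: N examples, D binary features, K classes.
    rng = np.random.default_rng(0)
    N, D, K = 200, 10, 3
    X = rng.integers(0, 2, size=(N, D)).astype(float)
    y = rng.integers(0, K, size=N)
    sigma2 = 1.0  # variance of the Gaussian prior (controls smoothing strength)

    def penalized_nll(w_flat):
        W = w_flat.reshape(D, K)
        scores = X @ W                                   # (N, K) linear scores
        scores -= scores.max(axis=1, keepdims=True)      # numerical stability
        log_z = np.log(np.exp(scores).sum(axis=1))       # log partition per example
        log_lik = scores[np.arange(N), y] - log_z        # log p(y | x)
        # The Gaussian prior over the weights appears as an L2 penalty.
        return -log_lik.sum() + (W ** 2).sum() / (2.0 * sigma2)

    result = minimize(penalized_nll, np.zeros(D * K), method="L-BFGS-B")
    W_hat = result.x.reshape(D, K)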
Proposed Method
In this work, we use the maximum entropy method (Berger et al., 1996) as a discriminative machine learning method.
Proposed Method
The reason for this is that a model based on the maximum entropy method can calculate probabilities.
maximum entropy is mentioned in 4 sentences in this paper.
Feng, Minwei and Peter, Jan-Thorsten and Ney, Hermann
Comparative Study
4.2 Maximum entropy reordering model
Comparative Study
Zens and Ney (2006) proposed a maximum entropy classifier to predict the orientation of the next phrase given the current phrase.
Introduction
The classifier can be trained with maximum likelihood, as in the Moses lexicalized reordering model (Koehn et al., 2007) and the hierarchical lexicalized reordering model (Galley and Manning, 2008), or under the maximum entropy framework (Zens and Ney, 2006).
maximum entropy is mentioned in 3 sentences in this paper.
Scheible, Christian and Schütze, Hinrich
Distant Supervision
To increase coverage, we train a Maximum Entropy (MaxEnt) classifier (Manning and Klein, 2003).
Features
Following previous sequence classification work with Maximum Entropy models (e.g., Ratnaparkhi, 1996), we use selected features of adjacent sentences.
Sentiment Relevance
We divide both the SR and P&L corpora into training (50%) and test sets (50%) and train a Maximum Entropy (MaxEnt) classifier (Manning and Klein, 2003) with bag-of-word features.
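A hedged sketch of that setup (scikit-learn's CountVectorizer and LogisticRegression stand in for the bag-of-word features and the Manning and Klein classifier; the documents and labels below are invented placeholders):

    # Hypothetical sketch: 50/50 split, bag-of-words features, maximum entropy classifier.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    documents = ["the plot is gripping", "the dvd arrived scratched",
                 "a moving performance", "shipping took three weeks"]
    labels = ["relevant", "nonrelevant", "relevant", "nonrelevant"]

    train_docs, test_docs, y_train, y_test = train_test_split(
        documents, labels, test_size=0.5, random_state=0, stratify=labels)

    vectorizer = CountVectorizer()
    X_train = vectorizer.fit_transform(train_docs)
    X_test = vectorizer.transform(test_docs)

    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print(clf.score(X_test, y_test))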
maximum entropy is mentioned in 3 sentences in this paper.
Xiang, Bing and Luo, Xiaoqiang and Zhou, Bowen
Conclusions and Future Work
In this paper, we presented a novel structured approach to EC prediction, which utilizes a maximum entropy model with various syntactic features and shows significantly higher accuracy than the state-of-the-art approaches.
Experimental Results
We parse our test set with a maximum entropy based statistical parser (Ratnaparkhi, 1997) first.
Related Work
A maximum entropy model is utilized to predict the tags, but different types of ECs are not distinguished.
maximum entropy is mentioned in 3 sentences in this paper.