Index of papers in Proc. ACL 2014 that mention
  • model parameters
Andrews, Nicholas and Eisner, Jason and Dredze, Mark
Detailed generative story
This is a conditional log-linear model parameterized by φ, where φ_k ~ N(0, σ²).
Overview and Related Work
For learning, we iteratively adjust our model’s parameters to better explain our samples.
Overview and Related Work
(2012) we use topics as the contexts, but learn mention topics jointly with other model parameters.
Parameter Estimation
E-step: Collect samples by MCMC simulation as in §5, given current model parameters θ and φ.
“model parameters” is mentioned in 4 sentences in this paper.
Gormley, Matthew R. and Mitchell, Margaret and Van Durme, Benjamin and Dredze, Mark
Approaches
Unsupervised Grammar Induction Our first method for grammar induction is fully unsupervised Viterbi EM training of the Dependency Model with Valence (DMV) (Klein and Manning, 2004), with uniform initialization of the model parameters.
Related Work
(2010a) show that Viterbi (hard) EM training of the DMV with simple uniform initialization of the model parameters yields higher-accuracy models than standard soft EM.
Related Work
In Viterbi EM, the E-step finds the maximum-likelihood corpus parse given the current model parameters.
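The hard-EM pattern these excerpts describe — commit to the single best latent assignment in the E-step, then re-estimate parameters from those hard counts — can be sketched on a toy problem. This is not the DMV itself; it is a minimal illustration using a hypothetical two-coin Bernoulli mixture, and the near-uniform initialization is an assumption (an exactly uniform start would tie every Viterbi assignment):

```python
def viterbi_em(sequences, iters=20):
    """Hard (Viterbi) EM for a toy 2-component Bernoulli mixture.

    E-step: hard-assign each sequence to its single most likely
    component under the current parameters (no posterior averaging).
    M-step: maximum-likelihood re-estimate from those hard counts.
    """
    theta = [0.45, 0.55]  # near-uniform init; exact ties break toward index 0
    for _ in range(iters):
        groups = [[], []]
        for seq in sequences:
            h, n = sum(seq), len(seq)
            lik = [p**h * (1 - p)**(n - h) for p in theta]
            groups[lik.index(max(lik))].append((h, n))  # Viterbi E-step
        for k in (0, 1):  # M-step: relative-frequency estimate per component
            heads = sum(h for h, _ in groups[k])
            total = sum(n for _, n in groups[k])
            if total:
                theta[k] = heads / total
    return sorted(theta)

# Flip sequences from a fair-ish coin and a heavily biased one
data = [[1, 0, 1, 0, 1, 0] for _ in range(5)] + [[1, 1, 1, 1, 1, 0] for _ in range(5)]
print(viterbi_em(data))
```

Unlike soft EM, each sequence contributes to exactly one component's counts, which is what makes the parse (here, the assignment) "Viterbi".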
“model parameters” is mentioned in 3 sentences in this paper.
Lei, Tao and Xin, Yu and Zhang, Yuan and Barzilay, Regina and Jaakkola, Tommi
Experimental Setup
We should note that since our model parameter A is represented and learned in the low-rank form, we only have to store and maintain the low-rank projections Uφ_h, Vφ_m and Wφ_{h,m} rather than explicitly calculate the feature tensor φ_h ⊗ φ_m ⊗ φ_{h,m}.
Problem Formulation
We will directly learn a low-rank tensor A (because r is small) in this form as one of our model parameters.
Problem Formulation
where θ ∈ ℝ^L, U ∈ ℝ^{n×r}, V ∈ ℝ^{n×r}, and W ∈ ℝ^{d×r} are the model parameters to be learned.
“model parameters” is mentioned in 3 sentences in this paper.
Liu, Shujie and Yang, Nan and Li, Mu and Zhou, Ming
Model Training
gym] is the plausible score for the best translation candidate given the model parameters W and V.
Phrase Pair Embedding
Table 1: The relationship between the size of training data and the number of model parameters.
Phrase Pair Embedding
Table 1 shows the relationship between the size of training data and the number of model parameters.
“model parameters” is mentioned in 3 sentences in this paper.
Riezler, Stefan and Simianer, Patrick and Haas, Carolin
Experiments
This can explain the success of response-based learning: Lexical and structural variants of reference translations can be used to boost model parameters towards translations with positive feedback, while the same translations might be considered as negative examples in standard structured learning.
Introduction
Here, learning proceeds by “trying out” translation hypotheses, receiving a response from interacting in the task, and converting this response into a supervision signal for updating model parameters.
Response-based Online Learning
(2010) or Goldwasser and Roth (2013) describe a response-driven learning framework for the area of semantic parsing: Here a meaning representation is “tried out” by iteratively generating system outputs, receiving feedback from world interaction, and updating the model parameters.
“model parameters” is mentioned in 3 sentences in this paper.
Zhang, Hui and Chiang, David
Introduction
For example, in Expectation-Maximization (Dempster et al., 1977), the Expectation (E) step computes the posterior distribution over possible completions of the data, and the Maximization (M) step reestimates the model parameters as
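The two EM steps the excerpt names — a posterior over completions in the E-step, re-estimation in the M-step — can be made concrete with a minimal soft-EM sketch. This is a hypothetical two-coin mixture for illustration, not the word-alignment model the paper actually trains:

```python
def em_step(theta, sequences):
    """One soft-EM iteration for a toy 2-coin Bernoulli mixture.

    E-step: posterior responsibility P(coin k | sequence) under the
    current parameters. M-step: re-estimate each coin's head
    probability from responsibility-weighted counts.
    """
    new_heads = [0.0, 0.0]
    new_total = [0.0, 0.0]
    for seq in sequences:
        h, n = sum(seq), len(seq)
        lik = [p**h * (1 - p)**(n - h) for p in theta]
        z = sum(lik)
        for k in (0, 1):
            resp = lik[k] / z          # E-step: posterior over completions
            new_heads[k] += resp * h   # expected head count
            new_total[k] += resp * n   # expected flip count
    return [new_heads[k] / new_total[k] for k in (0, 1)]  # M-step

theta = [0.4, 0.6]                     # slightly asymmetric start
data = [[1, 0, 1, 0], [1, 1, 1, 1], [0, 0, 1, 0], [1, 1, 1, 0]]
for _ in range(50):
    theta = em_step(theta, data)
print(sorted(theta))
```

In contrast to Viterbi EM, every sequence contributes fractionally to both components' counts, weighted by the E-step posterior.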
Word Alignment
Different models parameterize this probability distribution in different ways.
Word Alignment
It also contains most of the model’s parameters and is where overfitting occurs most.
“model parameters” is mentioned in 3 sentences in this paper.