Index of papers in Proc. ACL 2013 that mention
  • model parameters
Gormley, Matthew R. and Eisner, Jason
Abstract
Finding the optimal model parameters is then usually a difficult nonconvex optimization problem.
Abstract
We search for the maximum-likelihood model parameters and corpus parse, subject to posterior constraints.
Introduction
The node branches on a single model parameter θm to partition its subspace.
Introduction
A variety of ways to find better local optima have been explored, including heuristic initialization of the model parameters (Spitkovsky et al., 2010a), random restarts (Smith, 2006), and annealing (Smith and Eisner, 2006; Smith, 2006).
Introduction
search with certificates of ε-optimality for both the corpus parse and the model parameters.
The Constrained Optimization Task
The nonlinear constraints ensure that the model parameters are true log-probabilities.
The Constrained Optimization Task
Notation (flattened symbol-table entries): feature / model parameter index; sentence index; conditional distribution index; number of model parameters.
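To make the nonlinear constraint quoted above concrete, here is a minimal numpy sketch (ours, not the paper's code; the function name is illustrative) of checking that one conditional distribution's parameters are true log-probabilities:

```python
import numpy as np

def log_prob_constraint_residual(theta):
    """Model parameters theta are true log-probabilities iff exp(theta)
    sums to 1 for the conditional distribution; returns sum(exp(theta)) - 1."""
    return np.exp(theta).sum() - 1.0

# A uniform distribution over 4 outcomes satisfies the constraint exactly.
theta = np.log(np.full(4, 0.25))
assert abs(log_prob_constraint_residual(theta)) < 1e-12
```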
"model parameters" is mentioned in 22 sentences in this paper.
Lei, Tao and Long, Fan and Barzilay, Regina and Rinard, Martin
Model
We assume the generative model operates by first generating the model parameters from a set of Dirichlet distributions.
Model
• Generating Model Parameters: For every pair of feature type f and phrase tag z, draw a multinomial distribution parameter θfz from a Dirichlet prior P(θfz).
Model
Learning the Model: During inference, we want to estimate the hidden specification trees t given the observed natural language specifications w, after integrating the model parameters out, i.e.
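A minimal sketch of the generative step quoted above, assuming illustrative sizes for feature types, phrase tags, and outcomes (none of these numbers come from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 3 feature types, 4 phrase tags, 5 outcomes per multinomial.
alpha = np.ones(5)  # symmetric Dirichlet prior over each multinomial
theta = {(f, z): rng.dirichlet(alpha) for f in range(3) for z in range(4)}

# Each theta[(f, z)] is a multinomial parameter vector drawn from P(theta_fz).
assert all(abs(p.sum() - 1.0) < 1e-9 for p in theta.values())
```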
"model parameters" is mentioned in 4 sentences in this paper.
Sogaard, Anders
Robust perceptron learning
Antagonistic adversaries choose transformations informed by the current model parameters w, but random adversaries randomly select transformations from a predefined set of possible transformations, e.g.
Robust perceptron learning
In an online setting feature bagging can be modelled as a game between a learner and an adversary, in which (a) the adversary can only choose between deleting transformations, (b) the adversary cannot see model parameters when choosing a transformation, and in which (c) the adversary only moves in between passes over the data.
Robust perceptron learning
LRA is an adversarial game in which the two players are unaware of the other player’s current move, and in particular, where the adversary does not see model parameters and only randomly corrupts the data points.
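A minimal sketch of the random-adversary setting in conditions (a)-(c) above, assuming a binary perceptron and modeling "deleting transformations" as zeroing randomly chosen feature columns between passes (our simplification, not the paper's implementation):

```python
import numpy as np

def perceptron_with_random_adversary(X, y, passes=5, drop=2, seed=0):
    """Binary perceptron (labels in {-1, +1}) where, between passes, a random
    adversary deletes `drop` feature columns without looking at the weights w,
    matching conditions (a)-(c) quoted above. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(passes):
        corrupted = X.copy()
        dropped = rng.choice(X.shape[1], size=drop, replace=False)
        corrupted[:, dropped] = 0.0          # the adversary's deletion move
        for x_i, y_i in zip(corrupted, y):
            if y_i * (w @ x_i) <= 0:         # mistake-driven update
                w += y_i * x_i
    return w
```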
"model parameters" is mentioned in 4 sentences in this paper.
Berg-Kirkpatrick, Taylor and Durrett, Greg and Klein, Dan
Model
The Bernoulli parameter of a pixel inside a glyph bounding box depends on the pixel’s location inside the box (as well as on di and zi, but for simplicity of exposition, we temporarily suppress this dependence) and on the model parameters governing glyph shape (for each character type c, the parameter matrix φc specifies the shape of the character’s glyph).
Results and Analysis
(2010), we use a regularization term in the optimization of the log-linear model parameters φc during the M-step.
Results and Analysis
Figure 8: The central glyph is a representation of the initial model parameters for the glyph shape for g, and surrounding this are the learned parameters for documents from various years.
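A minimal sketch of per-pixel Bernoulli scoring in the spirit of the first snippet above; the logistic squashing of φc below is a stand-in of ours, since the paper parameterizes pixel probabilities differently (log-linearly, via φc and pixel location):

```python
import numpy as np

def glyph_log_likelihood(pixels, phi_c):
    """Log-likelihood of a binary glyph image under independent per-pixel
    Bernoulli distributions. `phi_c` is a parameter matrix for character
    type c; the sigmoid is a placeholder parameterization."""
    p = 1.0 / (1.0 + np.exp(-phi_c))              # Bernoulli parameter per pixel
    return float(np.sum(pixels * np.log(p) + (1 - pixels) * np.log(1 - p)))

# Example: random 10x10 binary image, zero parameters -> 100 * log(0.5).
img = (np.random.default_rng(0).random((10, 10)) > 0.5).astype(float)
print(glyph_log_likelihood(img, np.zeros((10, 10))))
```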
"model parameters" is mentioned in 3 sentences in this paper.
Metallinou, Angeliki and Bohus, Dan and Williams, Jason
Generative state tracking
where φi(x, y) are feature functions jointly defined on features and labels, and λi are the model parameters.
Generative state tracking
This formulation also decouples the number of model parameters (i.e.
Generative state tracking
Second, model parameters in DISCIND are trained independently of competing hypotheses.
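The first snippet above describes a standard log-linear (maximum-entropy) model; here is a minimal sketch, with the feature functions φi passed in as hypothetical callables:

```python
import numpy as np

def log_linear_posterior(lmbda, feature_fns, labels, x):
    """p(y | x) proportional to exp(sum_i lambda_i * phi_i(x, y)), with lmbda
    playing the role of the model parameters lambda_i quoted above and
    feature_fns the feature functions phi_i(x, y)."""
    scores = np.array([sum(l * phi(x, y) for l, phi in zip(lmbda, feature_fns))
                       for y in labels])
    scores -= scores.max()                    # subtract max for numerical stability
    weights = np.exp(scores)
    return weights / weights.sum()
```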
"model parameters" is mentioned in 3 sentences in this paper.
Ravi, Sujith
Decipherment Model for Machine Translation
During decipherment training, our objective is to estimate the model parameters in order to maximize the probability of the source text f as suggested by Ravi and Knight (2011b).
Decipherment Model for Machine Translation
Instead, we propose a new Bayesian inference framework to estimate the translation model parameters.
Introduction
The parallel corpora are used to estimate translation model parameters involving word-to-word translation tables, fertilities, distortion, phrase translations, syntactic transformations, etc.
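A minimal sketch of the decipherment objective suggested by these snippets, assuming a simple per-token word-to-word model in which P(f) = Σe P(e) · P(f | e); the translation table and smoothing constant are illustrative, not from the paper:

```python
import numpy as np

def source_log_likelihood(f_tokens, e_vocab, p_e, t_table):
    """log P(f) for a source text whose tokens are each explained by a latent
    target word e, with P(f) = sum_e P(e) * P(f | e). `t_table[e][f]` is a
    hypothetical word-to-word translation probability; 1e-12 is ad-hoc smoothing."""
    total = 0.0
    for f in f_tokens:
        total += np.log(sum(p_e[e] * t_table[e].get(f, 1e-12) for e in e_vocab))
    return total
```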
"model parameters" is mentioned in 3 sentences in this paper.