Index of papers in Proc. ACL 2011 that mention
  • graphical model
DeNero, John and Macherey, Klaus
Abstract
This paper presents a graphical model that embeds two directional aligners into a single model.
Conclusion
We have presented a graphical model that combines two classical HMM-based alignment models.
Introduction
This result is achieved by embedding two directional HMM-based alignment models into a larger bidirectional graphical model.
Model Definition
Our bidirectional model G = (V, D) is a globally normalized, undirected graphical model of the word alignment for a fixed sentence pair (e, f). Each vertex in the vertex set V corresponds to a model variable V_i, and each undirected edge in the edge set D corresponds to a pair of variables (V_i, V_j). Each vertex has an associated potential function ω_i(v_i) that assigns a real-valued potential to each possible value v_i of V_i. Likewise, each edge has an associated potential function μ_ij(v_i, v_j) that scores pairs of values.
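Written out, the globally normalized score such a model assigns is the standard undirected factorization over these potentials (a sketch in the snippet's notation; the partition function Z is implied by "globally normalized" but not spelled out in the excerpt):
  p(\mathbf{v}) = \frac{1}{Z} \prod_{i \in V} \omega_i(v_i) \prod_{(i,j) \in D} \mu_{ij}(v_i, v_j), \qquad
  Z = \sum_{\mathbf{v}'} \prod_{i \in V} \omega_i(v'_i) \prod_{(i,j) \in D} \mu_{ij}(v'_i, v'_j).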
Model Definition
Figure 1: The structure of our graphical model for a simple sentence pair.
Model Inference
In general, graphical models admit efficient, exact inference algorithms if they do not contain cycles.
Model Inference
While the entire graphical model has loops, there are two overlapping subgraphs that are cycle-free.
Model Inference
To describe a dual decomposition inference procedure for our model, we first restate the inference problem under our graphical model in terms of the two overlapping subgraphs that admit tractable inference.
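A minimal sketch of the dual decomposition these excerpts refer to, assuming the usual setup in which each cycle-free subgraph keeps its own copy of the shared variables (a and b) and Lagrange multipliers u relax the agreement constraint; f, g, a, b, and u are illustrative symbols, not the paper's notation:
  L(u) = \max_{a} \big[ f(a) + u^{\top} a \big] + \max_{b} \big[ g(b) - u^{\top} b \big], \qquad
  \min_{u} L(u) \;\ge\; \max_{a = b} \big[ f(a) + g(b) \big].
Each inner maximization is exact inference on one cycle-free subgraph, and u is updated (for example, by subgradient steps) until the two copies agree, at which point the bound is tight.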
Related Work
Although differing in both model and inference, our work and theirs both find improvements from defining graphical models for alignment that do not admit exact polynomial-time inference algorithms.
graphical model is mentioned in 12 sentences in this paper.
Topics mentioned in this paper:
Lee, John and Naradowsky, Jason and Smith, David A.
Baselines
To ensure a meaningful comparison with the joint model, our two baselines are both implemented in the same graphical model framework, and trained with the same machine-learning algorithm.
Baselines
The tagger is a graphical model with the WORD and TAG variables, connected by the local factors TAG-UNIGRAM, TAG-BIGRAM, and TAG-CONSISTENCY, all used in the joint model (§3).
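A hedged sketch of how such a factor graph scores a tagging, assuming the usual product-of-factors form for the factors named above (the Φ symbols are illustrative, not the paper's):
  \mathrm{score}(\mathbf{w}, \mathbf{t}) = \prod_{i} \Phi_{\text{TAG-UNIGRAM}}(t_i)\, \Phi_{\text{TAG-BIGRAM}}(t_{i-1}, t_i)\, \Phi_{\text{TAG-CONSISTENCY}}(w_i, t_i).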
Experimental Setup
To illustrate the effect, the graphical model of the sentence in Table 1, whose six words are all covered by the database, has 1,866 factors; without the benefit of the database, the full model would have 31,901 factors.
Joint Model
It will be presented as a graphical model,
graphical model is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Singh, Sameer and Subramanya, Amarnag and Pereira, Fernando and McCallum, Andrew
Introduction
Other previous work attempts to address some of the above concerns by mapping coreference to inference on an undirected graphical model (Culotta et al., 2007; Poon et al., 2008; Wellner et al., 2004; Wick et al., 2009a).
Introduction
In this work we first distribute MCMC-based inference for the graphical model representation of coreference.
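For orientation, MCMC inference over such a model typically rests on a Metropolis-Hastings acceptance rule of the following generic form (a sketch, not the paper's notation), where y is the current configuration, y' a proposal drawn from q(y' | y), and p the model distribution:
  \alpha = \min\left\{ 1,\; \frac{p(y')\, q(y \mid y')}{p(y)\, q(y' \mid y)} \right\}.
Because a local proposal leaves most factors untouched, the ratio p(y')/p(y) reduces to the product over the few factors whose variables change.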
Related Work
Our representation of the problem as an undirected graphical model, and performing distributed inference on it, provides a combination of advantages not available in any of these approaches.
Related Work
In addition to representing features from all of the related work, graphical models can also use more complex entity-wide features (Culotta et al., 2007; Wick et al., 2009a), and parameters can be learned using supervised (Collins, 2002) or semi-supervised techniques (Mann and McCallum, 2008).
graphical model is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Hoffmann, Raphael and Zhang, Congle and Ling, Xiao and Zettlemoyer, Luke and Weld, Daniel S.
Introduction
• MULTIR introduces a probabilistic, graphical model of multi-instance learning which handles overlapping relations.
Modeling Overlapping Relations
We define an undirected graphical model that allows joint reasoning about aggregate (corpus-level) and sentence-level extraction decisions.
Related Work
Riedel et al. (2010) combine weak supervision and multi-instance learning in a more sophisticated manner, training a graphical model which assumes only that at least one of the matches between the arguments of a Freebase fact and sentences in the corpus is a true relational mention.
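As a hedged illustration of the "at least one" assumption described here (notation is illustrative, not the paper's): with sentence-level relation variables Z_1, ..., Z_n for the mentions of an entity pair and an aggregate variable Y_r for each relation r, a deterministic-OR style factor would enforce
  Y_r = 1 \iff \exists\, i : Z_i = r,
so the model commits only to at least one mention actually expressing the relation, rather than to all of them.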
graphical model is mentioned in 3 sentences in this paper.
Topics mentioned in this paper: