Conclusions | We presented an improved probabilistic model for canonicalizing named entities into a table. |
Introduction | Here, we use a probabilistic model to infer a struc- |
Model | These choices highlight that the design of a probabilistic model can draw from both Bayesian and discriminative tools. |
Related Work | Our model is focused on the problem of canonicalizing mention strings into their parts, though its row variables (which map mentions to rows) could be interpreted as (within-document and cross-document) coreference resolution, which has been tackled using a range of probabilistic models (Li et al., 2004; Haghighi and Klein, 2007; Poon and Domingos, 2008; Singh et al., 2011). |
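The coreference reading of these row variables can be made concrete with a toy example (hypothetical data, not from the paper): mentions assigned to the same table row form one entity cluster.

```python
# Toy illustration (hypothetical data): row-assignment variables induce
# coreference clusters, since mentions mapped to the same row co-refer.
row_of = {"Barack Obama": 0, "Obama": 0, "the president": 0, "Ankara": 1}

clusters = {}
for mention, row in row_of.items():
    clusters.setdefault(row, []).append(mention)

print(clusters)  # {0: ['Barack Obama', 'Obama', 'the president'], 1: ['Ankara']}
```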
Abstract | In this work, deterministic constraints are decoded before the probabilistic models are applied, so that lookahead features are available during Viterbi decoding. |
Abstract | Since these deterministic constraints are applied before the probabilistic models are decoded, reliably high precision in their predictions is crucial. |
Abstract | However, when the BMES tagset is used, the learned constraints do not always make reliable predictions, and their overall precision is not high enough to constrain a probabilistic model. |
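The decoding scheme these sentences describe can be sketched as constrained Viterbi decoding: positions whose tags have already been fixed by high-precision deterministic rules are pinned before the probabilistic model is decoded. The sketch below is illustrative only; the tagset, scoring functions, and constraint format are assumptions, and the use of pinned tags as lookahead features at neighbouring positions is omitted.

```python
TAGS = ["B", "M", "E", "S"]  # BMES segmentation tagset (assumed)

def viterbi_constrained(tokens, emit, trans, constraints):
    """Viterbi decoding where `constraints` (position -> tag) pins tags fixed
    in advance by deterministic rules.

    emit(tag, token) and trans(prev_tag, tag) are hypothetical log-score
    functions supplied by the probabilistic model."""
    def allowed(i):
        # A constrained position admits only its pre-decoded tag.
        return [constraints[i]] if i in constraints else TAGS

    # chart[i][t] = (best log score of a path ending with tag t, backpointer)
    chart = [{t: (emit(t, tokens[0]), None) for t in allowed(0)}]
    for i in range(1, len(tokens)):
        column = {}
        for t in allowed(i):
            prev, score = max(
                ((p, ps + trans(p, t)) for p, (ps, _) in chart[i - 1].items()),
                key=lambda x: x[1],
            )
            column[t] = (score + emit(t, tokens[i]), prev)
        chart.append(column)

    # Backtrace from the best final tag.
    tag = max(chart[-1], key=lambda t: chart[-1][t][0])
    path = [tag]
    for i in range(len(tokens) - 1, 0, -1):
        tag = chart[i][tag][1]
        path.append(tag)
    return list(reversed(path))
```

For instance, a rule might pin a known single-character word to "S"; the decoder then never explores alternative tags at that position, while the remaining positions are decoded by the probabilistic model as usual.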
Background and Related Work | Since different derivations may produce the same parse tree, recent work on TSG induction (Post and Gildea, 2009; Cohn et al., 2010) employs a probabilistic model of a TSG and predicts derivations from observed parse trees in an unsupervised way. |
Symbol-Refined Tree Substitution Grammars | 3.1 Probabilistic Model |
Symbol-Refined Tree Substitution Grammars | We define a probabilistic model of an SR-TSG based on the Pitman-Yor Process (PYP) (Pitman and Yor, 1997), a nonparametric Bayesian model. |
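For reference, the standard PYP predictive probability used in PYP-based grammar induction combines cached counts with a base distribution; the notation below is illustrative and not necessarily the paper's.

```latex
% Standard Pitman-Yor predictive rule (illustrative notation):
% n_e = customer count for elementary tree e, t_e = tables labelled e,
% n, T = their totals, d = discount, \theta = concentration, P_0 = base measure.
P(e \mid \mathbf{z}) \;=\; \frac{n_e - d\, t_e}{n + \theta}
\;+\; \frac{\theta + d\, T}{n + \theta}\, P_0(e)
```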
Conclusions | Therefore, in addition to the acoustic and language models used in conventional ASR, HVR also incorporates the haptic model and the PLI model to yield an integrated probabilistic model. |
Integration of Knowledge Sources | Therefore, the corresponding probability terms can be obtained from the respective probabilistic models. |
Introduction | This framework allows coherent probabilistic models of different knowledge sources to be tightly integrated. |
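One common way such tight integration is realized (stated here only as an illustrative assumption, with symbols not taken from the paper) is a product of the individual knowledge-source models, assuming the observation streams are conditionally independent given the word sequence:

```latex
% Illustrative decomposition (assumed, not from the paper): O = acoustic
% observations, H = haptic input, W = word sequence. If O and H are
% conditionally independent given W, then
P(W \mid O, H) \;\propto\; p(O \mid W)\, p(H \mid W)\, P(W).
```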