Index of papers in Proc. ACL 2013 that mention
  • parsing model
Joty, Shafiq and Carenini, Giuseppe and Ng, Raymond and Mehdad, Yashar
Introduction
To address this limitation, as the first contribution, we propose a novel document-level discourse parser based on probabilistic discriminative parsing models, represented as Conditional Random Fields (CRFs) (Sutton et al., 2007), to infer the probability of all possible DT constituents.
Introduction
Two separate parsing models could exploit the fact that rhetorical relations are distributed differently intra-sententially vs. multi-sententially.
Our Discourse Parsing Framework
Both of our parsers have the same two components: a parsing model assigns a probability to every possible DT, and a parsing algorithm identifies the most probable DT among the candidate DTs in that scenario.
Our Discourse Parsing Framework
Before describing our parsing models and the parsing algorithm, we introduce some terminology that we will use throughout the paper.
Parsing Models and Parsing Algorithm
The job of our intra-sentential and multi-sentential parsing models is to assign a probability to each of the constituents of all possible DTs at the sentence level and at the document level, respectively.
Parsing Models and Parsing Algorithm
Formally, given the model parameters Θ, for each possible constituent R[i, m, j] in a candidate DT at the sentence or document level, the parsing model estimates P(R[i, m, j] | Θ), which specifies a joint distribution over the label R and the structure [i, m, j] of the constituent.
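As a rough illustration of what a joint distribution over the label R and the structure [i, m, j] looks like computationally, the following minimal Python sketch enumerates every candidate constituent over n discourse units and normalizes a placeholder score into a distribution over relation labels. The names score_constituent, RELATIONS, and constituent_distribution are hypothetical, and the real model performs CRF inference rather than using this stand-in scorer.

RELATIONS = ["Elaboration", "Attribution", "Contrast"]   # toy label set

def score_constituent(i, m, j, relation):
    """Hypothetical scorer; the paper infers these scores with a CRF."""
    return 1.0   # uniform placeholder

def constituent_distribution(n):
    """P(R[i, m, j]) for every span [i, j] of n units and every split m."""
    table = {}
    for i in range(1, n + 1):
        for j in range(i + 1, n + 1):
            for m in range(i, j):            # split [i..j] into [i..m] and [m+1..j]
                scores = {r: score_constituent(i, m, j, r) for r in RELATIONS}
                z = sum(scores.values())
                table[(i, m, j)] = {r: s / z for r, s in scores.items()}
    return table

probs = constituent_distribution(4)          # e.g. a sentence with 4 discourse units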
Parsing Models and Parsing Algorithm
4.1 Intra-Sentential Parsing Model
parsing model is mentioned in 18 sentences in this paper.
Zhang, Meishan and Zhang, Yue and Che, Wanxiang and Liu, Ting
Character-based Chinese Parsing
To produce character-level trees for Chinese NLP tasks, we develop a character-based parsing model , which can jointly perform word segmentation, POS tagging and phrase-structure parsing.
Character-based Chinese Parsing
Our character-based Chinese parsing model is based on the work of Zhang and Clark (2009), which is a transition-based model for lexicalized constituent parsing.
Experiments
The character-level parsing model has the advantage that deep character information can be extracted as features for parsing.
Experiments
Zhang and Clark (2010), and the phrase-structure parsing model of Zhang and Clark (2009).
Experiments
The phrase-structure parsing model is trained with a beam size of 64.
Introduction
We build a character-based Chinese parsing model to parse the character-level syntax trees.
Related Work
Our character-level parsing model is inspired by the work of Zhang and Clark (2009), which is a transition-based model with a beam-search decoder for word-based constituent parsing.
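For readers unfamiliar with beam-search decoding in transition-based parsers, here is a minimal Python sketch of such a decoder with a default beam of 64, matching the beam size quoted above. The transition set, scoring function, and toy example are invented for illustration and are not Zhang and Clark's actual system.

def beam_search(tokens, score_action, legal_actions, beam_size=64):
    """Keep the beam_size highest-scoring partial action sequences."""
    beam = [((), 0.0)]                        # (action sequence, cumulative score)
    for _ in range(2 * len(tokens)):          # rough bound on derivation length
        candidates = []
        for actions, score in beam:
            for a in legal_actions(tokens, actions):
                candidates.append((actions + (a,),
                                   score + score_action(tokens, actions, a)))
        if not candidates:                    # every sequence is already complete
            break
        candidates.sort(key=lambda c: c[1], reverse=True)
        beam = candidates[:beam_size]
    return beam[0]                            # best-scoring derivation found

# Toy callbacks so the sketch runs end to end (purely illustrative).
toy_tokens = ["中", "国", "人"]
legal = lambda toks, acts: ["SHIFT", "REDUCE"] if len(acts) < 2 * len(toks) - 1 else []
score = lambda toks, acts, a: 0.1 if a == "SHIFT" else 0.05
best_actions, best_score = beam_search(toy_tokens, score, legal)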
parsing model is mentioned in 12 sentences in this paper.
Liu, Kai and Lü, Yajuan and Jiang, Wenbin and Liu, Qun
Bilingually-Guided Dependency Grammar Induction
Use the parsing model to build a new treebank on the target language for the next iteration.
Bilingually-Guided Dependency Grammar Induction
With this approach, we can optimize the mixed parsing model by maximizing the objective in Formula (9).
Introduction
We evaluate the final automatically-induced dependency parsing model on 5 languages.
Unsupervised Dependency Grammar Induction
The framework of our unsupervised model first builds a random treebank on the monolingual corpus for initialization and then trains a discriminative parsing model on it.
Unsupervised Dependency Grammar Induction
Algorithm 1 outlines the unsupervised training in its entirety, where the treebank D_E and the unsupervised parsing model with parameters λ are updated iteratively.
Unsupervised Dependency Grammar Induction
In line 1 we build a random treebank D_E on the monolingual corpus, and then train the parsing model with it (line 2) through a training procedure train(·), which needs D_E as classification instances.
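The iterative procedure described in these excerpts can be summarized as a short schematic loop: build a random treebank, train a parsing model on it, then re-parse to produce the treebank for the next round. The Python sketch below assumes placeholder random_treebank, train, and parse_corpus helpers and is not the paper's implementation.

import random

def random_treebank(corpus, rng):
    """Toy initialization: give every word a random head (0 = artificial root)."""
    return [[rng.randint(0, len(sent)) for _ in sent] for sent in corpus]

def train(treebank):
    """Placeholder 'training': remember the treebank (stand-in for a real learner)."""
    return {"memory": treebank}

def parse_corpus(model, corpus):
    """Placeholder 'parsing': reuse the remembered trees."""
    return model["memory"]

def induce_grammar(corpus, iterations=10, seed=0):
    rng = random.Random(seed)
    treebank = random_treebank(corpus, rng)      # line 1: random treebank for initialization
    model = train(treebank)                      # line 2: train the parsing model on it
    for _ in range(iterations):
        treebank = parse_corpus(model, corpus)   # build a new treebank with the current model
        model = train(treebank)                  # retrain for the next iteration
    return model

model = induce_grammar([["the", "dog", "barks"], ["time", "flies"]])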
parsing model is mentioned in 6 sentences in this paper.
Liu, Yang
Introduction
3 A Maximum Entropy Based Shift-Reduce Parsing Model
Introduction
1. relative frequencies in two directions; 2. lexical weights in two directions; 3. phrase penalty; 4. distance-based reordering model; 5. lexicalized reordering model; 6. n-gram language model; 7. word penalty; 8. ill-formed structure penalty; 9. dependency language model; 10. maximum entropy parsing model.
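These features are the standard ingredients of a log-linear translation model, in which the decoder score is a weighted sum of feature values and the maximum entropy parsing model contributes one term. The Python sketch below illustrates that combination; the feature values are invented and the uniform weights are a toy assumption, not the paper's tuned weights.

features = {
    "relative_freq_fwd": -1.2, "relative_freq_bwd": -1.5,
    "lex_weight_fwd": -2.0, "lex_weight_bwd": -2.3,
    "phrase_penalty": 1.0, "distance_reordering": -0.4,
    "lexicalized_reordering": -0.6, "ngram_lm": -12.7,
    "word_penalty": 5.0, "ill_formed_penalty": 2.0,
    "dependency_lm": -8.3, "maxent_parsing_model": -3.1,
}
weights = {name: 1.0 for name in features}     # toy uniform weights; tuned in practice
derivation_score = sum(weights[name] * value for name, value in features.items())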
Introduction
Table 3: Contribution of maximum entropy shift-reduce parsing model.
parsing model is mentioned in 5 sentences in this paper.