Abstract | Our method thus requires gold standard trees only on the source side of a bilingual corpus in the training phase, unlike the joint parsing model, which requires gold standard trees on both sides.
Bilingual subtree constraints | Finally, we design the bilingual subtree features based on the mapping rules for the parsing model.
Bilingual subtree constraints | However, as described in Section 4.3.1, the generated subtrees are verified by looking up the subtree list (ST) before they are used in the parsing models.
Conclusion | transition-based parsing models (Nivre, 2003; Yamada and Matsumoto, 2003).
Dependency parsing | For dependency parsing, there are two main types of parsing models (Nivre and McDonald, 2008; Nivre and Kübler, 2006): transition-based (Nivre, 2003; Yamada and Matsumoto, 2003) and graph-based (McDonald et al., 2005; Carreras, 2007).
Dependency parsing | Our approach can be applied to both parsing models.
Dependency parsing | In this paper, we employ the graph-based MST parsing model proposed by McDonald and Pereira.
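The arc-factored idea behind graph-based (MST) parsing can be illustrated with a toy sketch: each candidate arc head → dependent is assigned a score, and the parser returns the highest-scoring dependency tree over the sentence. The sketch below brute-forces the search over head assignments for a three-word sentence instead of using the Chu-Liu-Edmonds algorithm that real MST parsers employ; the sentence, score matrix, and function names are invented for illustration.

```python
from itertools import product

NEG = float("-inf")

def is_tree(heads):
    """heads[d] is the head (0 = ROOT) of token d+1; check it forms a tree."""
    for i in range(1, len(heads) + 1):
        seen, j = set(), i
        while j != 0:            # follow head pointers up to ROOT
            if j in seen:        # revisiting a node means a cycle
                return False
            seen.add(j)
            j = heads[j - 1]
    return True

def best_tree(score):
    """score[h][d]: score of the arc head h -> dependent d (0 = ROOT).
    Brute-force the highest-scoring dependency tree (toy-sized sentences only)."""
    n = len(score) - 1
    best, best_heads = NEG, None
    for heads in product(range(n + 1), repeat=n):
        if not is_tree(heads):
            continue
        total = sum(score[heads[d]][d + 1] for d in range(n))
        if total > best:
            best, best_heads = total, heads
    return best_heads, best

# Invented arc scores for "John saw Mary" (tokens 1, 2, 3).
score = [
    [NEG, 1.0, 9.0, 1.0],  # ROOT ->
    [NEG, NEG, 2.0, 1.0],  # John ->
    [NEG, 8.0, NEG, 8.0],  # saw ->
    [NEG, 1.0, 2.0, NEG],  # Mary ->
]
heads, total = best_tree(score)
print(heads, total)  # (2, 0, 2) 25.0 : "saw" heads "John" and "Mary"; ROOT heads "saw"
```

Because the score decomposes over individual arcs, features for an arc (such as the bilingual subtree features above) can be scored independently and summed, which is what makes exact search tractable in real parsers.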
Introduction | Based on the mapping rules, we design a set of features for parsing models.
Base Models | When segment length is not restricted, the inference procedure is the same as that used in parsing (Finkel and Manning, 2009c). In this work we do not enforce a length restriction, and directly utilize the fact that the model can be transformed into a parsing model.
Base Models | Our parsing model is the discriminatively trained, conditional random field-based context-free grammar parser (CRF-CFG) of Finkel et al. (2008).
Base Models | In the parsing model, the grammar consists of only the rules observed in the training data.
Evaluation | We used the hybrid parsing model described in Clark and Curran (2007), and the Viterbi decoder to find the highest-scoring derivation. |
Evaluation | For the biomedical parser evaluation we have used the parsing model and grammatical relation conversion script from Rimell and Clark (2009). |
Results | In the first two experiments, we explore performance on the newswire domain, which is the source of training data for the parsing model and the baseline supertagging model. |