Bilingual subtree constraints | We use large-scale auto-parsed data to obtain subtrees on the target side.
Bilingual subtree constraints | Then we generate mapping rules to map the source subtrees onto the extracted target subtrees.
Bilingual subtree constraints | These features encode the constraints between bilingual subtrees, which are called bilingual subtree constraints.
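The pipeline described above (extract target-side subtrees from auto-parsed data, map source subtrees onto them, fire a constraint feature) can be sketched roughly as follows. All function names, the head-index parse encoding, and the word-level translation lexicon are illustrative assumptions, not the paper's actual representation:

```python
# Minimal sketch of bilingual subtree constraint features.
# Representation and names are assumptions for illustration.
from collections import Counter

def extract_subtrees(heads, words):
    """Extract head-child subtrees from one auto-parsed sentence.
    heads[i] is the index of word i's head (-1 for the root)."""
    return [(words[h], words[i]) for i, h in enumerate(heads) if h >= 0]

def build_target_list(parsed_corpus):
    """Collect subtrees from large-scale auto-parsed target-side data."""
    counts = Counter()
    for heads, words in parsed_corpus:
        counts.update(extract_subtrees(heads, words))
    return counts

def bilingual_feature(src_subtree, translate, target_list):
    """Fire a feature if the mapped source subtree is in the target list."""
    head, dep = src_subtree
    mapped = (translate.get(head), translate.get(dep))
    return 1 if target_list.get(mapped, 0) > 0 else 0

# Toy target-side corpus: "the dog runs" with heads [1, 2, -1].
corpus = [([1, 2, -1], ["the", "dog", "runs"])]
target_list = build_target_list(corpus)
lexicon = {"der": "the", "Hund": "dog"}  # assumed toy translation lexicon
print(bilingual_feature(("Hund", "der"), lexicon, target_list))  # prints 1
```

The real mapping rules are richer than a word-by-word lexicon lookup; the sketch only shows where the target-side subtree list verifies a candidate source subtree.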
Dependency parsing | We design bilingual subtree features, as described in Section 4, based on the constraints between the source subtrees and the target subtrees that are verified by the subtree list on the target side. |
Dependency parsing | The source subtrees are derived from the possible dependency relations.
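Since the source subtrees come from the possible dependency relations of a sentence, a minimal sketch of that candidate enumeration (function name assumed) is:

```python
from itertools import permutations

def candidate_subtrees(words):
    """Enumerate every directed (head, dependent) pair of the sentence
    as a candidate source subtree -- the space of possible dependency
    relations a parser would score."""
    return [(words[h], words[d]) for h, d in permutations(range(len(words)), 2)]

print(len(candidate_subtrees(["a", "b", "c"])))  # prints 6
```

Each of these candidates is what the bilingual subtree features then check against the verified target-side subtree list.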
Introduction | The subtrees are extracted from large-scale auto-parsed monolingual data on the target side. |
Adaptor Grammars | Adaptor grammars are an example of this approach (Johnson et al., 2007b), where entire subtrees generated by a “base grammar” can be viewed as distinct rules (in that we learn a separate probability for each subtree). |
Adaptor Grammars | The inference task is nonparametric if there are an unbounded number of such subtrees.
Adaptor Grammars | (Word s i) (Word d 6) (Word b u k) Because the Word nonterminal is adapted (indicated here by underlining), the adaptor grammar learns the probability of entire Word subtrees (e.g., the probability that b a k is a Word); see Johnson (2008) for further details.
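The idea that an adapted nonterminal learns a separate probability for each whole subtree can be sketched with a simplified Chinese-restaurant-process adaptor. The class, the base distribution, and the single concentration parameter are illustrative assumptions (actual adaptor grammars use Pitman-Yor adaptors and full grammatical inference):

```python
import random
from collections import Counter

class Adaptor:
    """Simplified CRP adaptor for one nonterminal (illustrative sketch)."""
    def __init__(self, base_sample, alpha=1.0):
        self.base_sample = base_sample  # draws a fresh subtree from the base grammar
        self.alpha = alpha              # concentration parameter (assumed)
        self.cache = Counter()          # counts of previously generated whole subtrees
        self.n = 0

    def sample(self):
        # Reuse a cached subtree with probability count/(n + alpha);
        # otherwise draw a new subtree from the base grammar.
        r = random.uniform(0, self.n + self.alpha)
        choice = None
        for tree, c in self.cache.items():
            r -= c
            if r < 0:
                choice = tree
                break
        if choice is None:
            choice = self.base_sample()
        self.cache[choice] += 1
        self.n += 1
        return choice

def base():
    """Base grammar: Word -> a random sequence of 1-3 phonemes."""
    phonemes = "abkus"
    return tuple(random.choice(phonemes) for _ in range(random.randint(1, 3)))

adaptor = Adaptor(base)
samples = [adaptor.sample() for _ in range(1000)]
# Frequently generated Word subtrees accumulate cache counts, so they become
# increasingly likely to be reused -- the "rich get richer" dynamic that
# assigns each whole subtree its own learned probability.
```

The cache counts divided by `n + alpha` are exactly the per-subtree probabilities the adaptor grammar ends up learning for the adapted nonterminal.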
Analysis Scheme | We use “term” to refer to text expressions, and “components” to refer to nodes, edges, and subtrees.
Integrating Discourse References into Entailment Recognition | Figure 1: The Substitution transformation, demonstrated on the relevant subtrees of Example (i). |
Integrating Discourse References into Entailment Recognition | For each bridging relation, it adds a specific subtree via an edge labeled with labr.
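Attaching a bridging subtree through a labeled edge can be sketched with a simple adjacency-map tree. The tree representation, the function name, and the example nodes are all assumed for illustration; only the labr edge label comes from the text above:

```python
def add_bridging(tree, node, bridging_subtree):
    """Attach a bridging subtree to a parse node via an edge labeled 'labr'.
    tree is an adjacency map: {node: [(edge_label, child), ...]}."""
    tree.setdefault(node, []).append(("labr", bridging_subtree))
    return tree

# Hypothetical parse fragment: a "meeting" node with one existing edge.
tree = {"meeting": [("det", "the")]}
add_bridging(tree, "meeting", "chairman")
print(tree["meeting"])  # prints [('det', 'the'), ('labr', 'chairman')]
```

The labeled edge keeps the injected discourse material distinguishable from the original syntactic edges during entailment matching.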