Index of papers in Proc. ACL 2010 that mention
  • dependency relations
Chen, Wenliang and Kazama, Jun'ichi and Torisawa, Kentaro
Bilingual subtree constraints
First, we have a possible dependency relation (represented as a source subtree) between words to be verified.
Bilingual subtree constraints
If yes, we activate a positive feature to encourage the dependency relation.
Bilingual subtree constraints
…ing the dependency relation indicated in the target parts.
Dependency parsing
The source subtrees are from the possible dependency relations.
Motivation
Suppose that we have an input sentence pair as shown in Figure 1, where the source sentence is in English, the target is in Chinese, the dashed undirected links are word alignment links, and the directed links between words indicate that they have a (candidate) dependency relation.
Motivation
By adding “fork”, we have two possible dependency relations, “meat-with-fork” and “ate-with-fork”, to be verified.
dependency relations is mentioned in 10 sentences in this paper.
Topics mentioned in this paper:
Dickinson, Markus
Ad hoc rule detection
On a par with constituency rules, we define a grammar rule as a dependency relation rewriting as a head with its sequence of POS/dependent pairs (cf.
Ad hoc rule detection
Units of comparison To determine similarity, one can compare dependency relations, POS tags, or both.
Ad hoc rule detection
Thus, we use the pairs of dependency relations and POS tags as the units of comparison.
Additional information
We extract POS pairs, note their dependency relation, and add an L/R to the label to indicate which is the head (Boyd et al., 2008).
Evaluation
We can measure this by scoring each test-data position below the threshold as 1 if it has the correct head and dependency relation, and 0 otherwise.
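A minimal sketch of this per-position scoring, assuming a hypothetical record layout for the parsed positions (the field names below are illustrative, not the paper's):

```python
def position_scores(positions, threshold):
    # Score each test position whose rule score falls below the
    # threshold: 1 if both the predicted head and the dependency
    # relation match the gold annotation, 0 otherwise.
    # Keys (score, gold_head, ...) are assumptions for illustration.
    scores = []
    for p in positions:
        if p["score"] < threshold:
            correct = (p["pred_head"] == p["gold_head"]
                       and p["pred_rel"] == p["gold_rel"])
            scores.append(1 if correct else 0)
    return scores
```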
Evaluation
For example, the parsed rule TA —> IG:IG RO has a correct dependency relation (IG) between the POS tags IG and its head RO, yet is assigned a whole rule score of 2 and a bigram score of 20.
dependency relations is mentioned in 8 sentences in this paper.
Topics mentioned in this paper:
Sassano, Manabu and Kurohashi, Sadao
Abstract
We propose active learning methods of using partial dependency relations in a given sentence for parsing and evaluate their effectiveness empirically.
Active Learning for Japanese Dependency Parsing
annotators would label either “D” for the two bunsetsu having a dependency relation or “O”, which represents that the two do not.
Conclusion
In addition, as far as we know, we are the first to propose active learning methods that use partial dependency relations in a given sentence for parsing, and we have evaluated the effectiveness of our methods.
Experimental Evaluation and Discussion
That is, “O” does not simply mean that the two bunsetsus do not have a dependency relation.
Experimental Evaluation and Discussion
Issues in Assessing the Total Cost of Annotation In this paper, we assume that the annotation cost for each dependency relation is constant.
Japanese Parsing
When we use this algorithm with a machine learning-based classifier, function Dep() in Figure 3 uses the classifier to decide whether two bunsetsus have a dependency relation .
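As a rough sketch (the classifier interface and feature extraction below are assumptions for illustration, not the paper's implementation), Dep() can be thought of as a thin wrapper around a trained binary classifier:

```python
def dep(classifier, featurize, bunsetsu_i, bunsetsu_j):
    # Decide whether bunsetsu_i depends on bunsetsu_j by querying a
    # trained binary classifier. featurize() is a hypothetical
    # function mapping the bunsetsu pair to a feature vector.
    x = featurize(bunsetsu_i, bunsetsu_j)
    return classifier.predict([x])[0] == "D"  # "D" = dependency holds
```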
dependency relations is mentioned in 6 sentences in this paper.
Topics mentioned in this paper:
Jiang, Wenbin and Liu, Qun
Abstract
We also propose an effective strategy for dependency projection, where the dependency relationships of word pairs in the source language are projected onto word pairs of the target language, yielding a set of classification instances rather than a complete tree.
Introduction
Given a word-aligned bilingual corpus with source language sentences parsed, the dependency relationships of the word pairs in the source language are projected to the word pairs of the target language.
Introduction
A dependency relationship is a boolean value that represents whether this word pair forms a dependency edge.
Projected Classification Instance
We define a boolean-valued function δ(y, i, j, r) to investigate the dependency relationship of word i and word j in parse tree y:
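Assuming the parse tree y is stored as a map from each dependent word's index to its head's index (a representation chosen here for illustration, not the paper's), δ could be sketched as:

```python
def delta(y, i, j, r):
    # Boolean check of the dependency relationship between word i and
    # word j in parse tree y. Encoding is an assumption: y maps a
    # dependent's index to its head's index, and r names the candidate
    # direction of the edge.
    if r == "i_heads_j":   # word i is the head of word j
        return y.get(j) == i
    else:                  # word j is the head of word i
        return y.get(i) == j
```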
Word-Pair Classification Model
Here we give the calculation of the dependency probability C(i, j). We use w to denote the parameter vector of the ME model, and f(i, j, r) to denote the feature vector for the assumption that the word pair i and j has a dependency relationship r.
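Reading this as a standard maximum-entropy model, the probability presumably takes the following form (a reconstruction from the excerpt, not the paper's verbatim equation):

```latex
C(i, j) = p(r = 1 \mid i, j)
        = \frac{\exp\bigl(\mathbf{w} \cdot f(i, j, 1)\bigr)}
               {\sum_{r' \in \{0, 1\}} \exp\bigl(\mathbf{w} \cdot f(i, j, r')\bigr)}
```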
dependency relations is mentioned in 5 sentences in this paper.
Topics mentioned in this paper:
Thater, Stefan and Fürstenau, Hagen and Pinkal, Manfred
Introduction
We go one step further, however, in that we employ syntactically enriched vector models as the basic meaning representations, assuming a vector space spanned by combinations of dependency relations and words (Lin, 1998).
The model
Figure 1 shows the co-occurrence graph of a small sample corpus of dependency trees: Words are represented as nodes in the graph, possible dependency relations between them are drawn as labeled edges, with weights corresponding to the observed frequencies.
The model
Such a path is characterized by two dependency relations and two words, i.e., a quadruple (r, w′, r′, w″) whose weight is the product of the weights of the two edges used in the path.
The model
To avoid overly sparse vectors we generalize over the “middle word” w′ and build our second-order vectors on the dimensions corresponding to triples (r, r′, w″) of two dependency relations and one word at the end of the two-step path.
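A short sketch of how such second-order dimensions could be accumulated from a weighted dependency co-occurrence graph (the nested-dict graph encoding is an assumption for illustration):

```python
from collections import defaultdict

def second_order_vector(graph, w):
    # Build a second-order vector for word w over dimensions
    # (r, r2, w_end), generalizing over the middle word w_mid.
    # graph[w][r] is assumed to map neighbor words to edge weights,
    # so a path w -r-> w_mid -r2-> w_end contributes the product of
    # its two edge weights, summed over all middle words.
    vec = defaultdict(float)
    for r, nbrs in graph.get(w, {}).items():
        for w_mid, wt1 in nbrs.items():
            for r2, nbrs2 in graph.get(w_mid, {}).items():
                for w_end, wt2 in nbrs2.items():
                    vec[(r, r2, w_end)] += wt1 * wt2
    return vec
```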
dependency relations is mentioned in 5 sentences in this paper.
Topics mentioned in this paper:
Kazama, Jun'ichi and De Saeger, Stijn and Kuroda, Kow and Murata, Masaki and Torisawa, Kentaro
Experiments
Dependency relations are used as context profiles as in Kazama and Torisawa (2008) and Kazama et al. (2009).
Experiments
For example, we extract a dependency relation from the sentence below, where the postposition “を (wo)” is used to mark the verb object.
Experiments
Kazama et al. (2009) proposed using the Jensen-Shannon divergence between hidden class distributions, p(c|w1) and p(c|w2), which are obtained by an EM-based clustering of dependency relations with the model p(wi, fk) = Σc p(wi|c) p(fk|c) p(c) (Kazama and Torisawa, 2008).
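For reference, the Jensen-Shannon divergence between the two class distributions can be computed generically as below (a textbook implementation, not the authors' code; distributions are plain dicts over hidden classes):

```python
import math

def jensen_shannon(p, q):
    # JSD between p(c|w1) and p(c|w2), given as dicts from hidden
    # class c to probability; bounded by log 2 under the natural log.
    classes = set(p) | set(q)
    m = {c: 0.5 * (p.get(c, 0.0) + q.get(c, 0.0)) for c in classes}
    def kl(a, b):
        return sum(a.get(c, 0.0) * math.log(a[c] / b[c])
                   for c in classes if a.get(c, 0.0) > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```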
Introduction
Each dimension of the vector corresponds to a context, fk, which is typically a neighboring word or a word having dependency relations with wi in a corpus.
dependency relations is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Yeniterzi, Reyyan and Oflazer, Kemal
Experimental Setup and Results
The sentence representations in the middle part of Figure 2 show these sentences with some of the dependency relations (relevant to our transformations) extracted by the parser, explicitly marked as labeled links.
Syntax-to-Morphology Mapping
When we tag and syntactically analyze the English side into dependency relations, and morphologically analyze and disambiguate the Turkish phrase, we get the representation in the middle of Figure 1, where we have co-indexed components that should map to each other, and some of the syntactic relations that the function words are involved in are marked with dependency links.
Syntax-to-Morphology Mapping
Here <x>, <Y> and <z> can be considered as Prolog-like variables that bind to patterns (mostly root words), and the conditions check for specified dependency relations (e.g., PMOD) between the left and right sides.
dependency relations is mentioned in 3 sentences in this paper.
Topics mentioned in this paper: