Evaluation | Also note that the syntactic annotation of English and Czech in PCEDT 2.0 is quite similar (to the extent permitted by the structural differences between the two languages), so we can use the dependency relations in our experiments.
Model Transfer | This setup requires that we use the same feature representation for both languages; for example, part-of-speech tags and dependency relation labels should come from the same inventory.
Model Transfer | Most dependency relation inventories are language-specific, and finding a shared representation for them is a challenging problem. |
Model Transfer | One could map dependency relations into a simplified form shared between languages, as is done for part-of-speech tags by Petrov et al.
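Such a mapping can be sketched as a simple lookup table that collapses language-specific relation labels into a shared coarse inventory, analogous to the universal POS mapping of Petrov et al. The label inventories below are hypothetical illustrations, not the actual tagsets of any particular treebank:

```python
# Map language-specific dependency labels to a shared coarse inventory.
# All labels below are illustrative placeholders.
COARSE_MAP = {
    # English-style labels (hypothetical subset)
    "nsubj": "SUBJ", "nsubjpass": "SUBJ",
    "dobj": "OBJ", "iobj": "OBJ",
    # Czech-style labels (hypothetical subset)
    "Sb": "SUBJ", "Obj": "OBJ",
}

def coarsen(label):
    """Map a language-specific relation label to the shared inventory."""
    return COARSE_MAP.get(label, "OTHER")
```

With such a table, the two annotation schemes project onto one label set, at the cost of losing fine-grained distinctions.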
Results | First of all, both EN-CZ and CZ-EN benefit noticeably from the use of the original syntactic annotation, including dependency relations, but not from the transferred syntax, most likely due to the low syntactic transfer performance.
Approach | tweet dependency relation |
Approach | Then, each tweet is paired with each dependency relation in the tweet; each such relation is a candidate problem/aid nucleus and is given to the problem report and aid message recognizers.
Approach | We observed that problem reports in general contained either (A) a dependency relation between a noun referring to some trouble and an excitatory template, or (B) a dependency relation between a noun not referring to any trouble and an inhibitory template.
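The two patterns (A) and (B) can be sketched as a rule over a noun-template dependency relation; the noun and template sets below are invented placeholders, not the lexicons used in the paper:

```python
# Hypothetical word/template lists for illustration only.
TROUBLE_NOUNS = {"blackout", "shortage"}                  # nouns referring to trouble
EXCITATORY_TEMPLATES = {"X occurs", "X spreads"}          # used in pattern (A)
INHIBITORY_TEMPLATES = {"X is sold out", "cannot buy X"}  # used in pattern (B)

def is_problem_report(noun, template):
    """Check patterns (A) and (B) for a noun-template dependency relation."""
    refers_to_trouble = noun in TROUBLE_NOUNS
    if refers_to_trouble and template in EXCITATORY_TEMPLATES:
        return True   # pattern (A): trouble noun + excitatory template
    if not refers_to_trouble and template in INHIBITORY_TEMPLATES:
        return True   # pattern (B): non-trouble noun + inhibitory template
    return False
```

For instance, "infant formula is sold out" matches pattern (B): "infant formula" does not itself denote trouble, but the template is inhibitory.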
Experiments | 5 The original similarity was defined over noun pairs and was estimated from dependency relations.
Experiments | Obtaining similarity between template pairs, not noun pairs, is straightforward given the same dependency relations.
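One way such a template-pair similarity could be estimated from dependency relations is cosine similarity between the noun co-occurrence vectors of the two templates; the counts below are toy data, and the actual similarity measure in the paper may differ:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two sparse count vectors (dicts)."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in keys)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Toy counts: nouns observed as dependents of each template
sell_out = {"formula": 3, "water": 2}
buy = {"formula": 2, "water": 1, "ticket": 1}
similarity = cosine(sell_out, buy)
```

Templates that govern similar sets of nouns then receive a high similarity score, exactly as with noun pairs.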
Introduction | An underlying assumption of our method is that we can find a noun-predicate dependency relation that works as an indicator of problems and aids in problem reports and aid messages, which we refer to as the problem nucleus and aid nucleus.1 An example of a problem nucleus is “infant formula is sold out” in P1, and that of an aid nucleus is “(can) buy infant formula” in A1.
Using the Framework | We first use a dependency parser (de Marneffe et al., 2006) to parse each sentence and extract the set of dependency relations associated with the sentence.
Using the Framework | For example, the sentence “I adore tennis” is represented by the dependency relations (nsubj: adore, I) and (dobj: adore, tennis). |
Using the Framework | Each sentence represents a single node u in the graph (unless otherwise specified) and is comprised of a set of dependency relations (or ngrams) present in the sentence. |
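The representation described above, a sentence as a set of dependency relations, can be sketched as follows; the hand-built parse of “I adore tennis” stands in for the parser’s actual output:

```python
def relation_set(parsed_tokens):
    """Build a sentence's set of (relation, head, dependent) triples.

    parsed_tokens: list of (word, head_word, relation); the root's head is None.
    """
    return {(rel, head, word)
            for word, head, rel in parsed_tokens
            if head is not None}

# "I adore tennis", parsed by hand for illustration
tokens = [("I", "adore", "nsubj"),
          ("adore", None, "root"),
          ("tennis", "adore", "dobj")]
sentence_node = relation_set(tokens)
```

Each sentence node in the graph is then simply this set, and any set-based similarity (overlap, Jaccard, etc.) can be computed between nodes.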
Coordination Structures in Treebanks | it is not possible to represent nested CSs in the Moscow and Stanford families without significantly changing the number of possible labels.17 The dL style (which is most easily applicable to the Prague family) can represent coordination of different dependency relations.
Coordination Structures in Treebanks | (2012), the normalization procedures used in HamleDT embrace many other phenomena as well (not only those related to coordination), and involve both structural transformation and dependency relation relabeling. |
Introduction | For example, if a noun is modified by two coordinated adjectives, there is a (symmetric) coordination relation between the two conjuncts and two (asymmetric) dependency relations between the conjuncts and the noun. |
Variations in representing coordination structures | Besides that, there should be a label classifying the dependency relation between the CS and its parent. |
Variations in representing coordination structures | The dependency relation of the whole CS to its parent is represented by the label of the conjunction, while the conjuncts are marked with a special label for conjuncts (e.g. ccof in the Hyderabad Dependency Treebank).
Variations in representing coordination structures | Subsequently, each conjunct has its own label that reflects the dependency relation towards the parent of the whole CS; therefore, conjuncts of the same CS can have different labels, e.g.
Experiments | 14 MAll's features are similar to part-of-speech tags and untyped dependency relations.
Our framework | Although regular “untyped” dependency relations do not seem to help our QSD system in the presence of phrase-structure trees, we found the col- |
Our framework | • Type of incoming dependency relation of each noun • Syntactic category of the deepest common ancestor • Lexical item of the deepest common ancestor • Length of the undirected path between the two
Related work | dependency relations) are used on both the train and the test data.
Architecture of BRAINSUP | Candidate fillers for each empty position (slot) in the patterns are chosen according to the lexical and syntactic constraints enforced by the dependency relations in the patterns. |
Architecture of BRAINSUP | To achieve that, we analyze a large corpus of parsed sentences L and store counts of observed head-relation-modifier (h, r, m) dependency relations.
Architecture of BRAINSUP | a dependency relation.
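Storing counts of observed (h, r, m) dependency triples can be sketched with a simple counter; the toy parses below are illustrative stand-ins for the corpus BRAINSUP actually analyzes:

```python
from collections import Counter

def count_triples(parsed_sentences):
    """Count head-relation-modifier (h, r, m) triples over a parsed corpus.

    parsed_sentences: iterable of sentences, each a list of (head, rel, modifier).
    """
    counts = Counter()
    for sentence in parsed_sentences:
        for head, rel, modifier in sentence:
            counts[(head, rel, modifier)] += 1
    return counts

# Toy corpus of two hand-parsed sentences
corpus = [
    [("adore", "nsubj", "I"), ("adore", "dobj", "tennis")],
    [("adore", "dobj", "tennis")],
]
triple_counts = count_triples(corpus)
```

These counts can then score how well a candidate filler fits a slot: a modifier frequently observed under the slot’s head and relation is a more plausible candidate.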
Bilingual Projection of Dependency Grammar | Take Figure 2 as an example: following the Direct Projection Algorithm (DPA) (Hwa et al., 2005) (Section 5), the dependency relationships between words can be directly projected from the source
Related work | The dependency projection method DPA (Hwa et al., 2005), based on the Direct Correspondence Assumption (Hwa et al., 2002), can be described as follows: if there is a pair of source words with a dependency relationship, the corresponding aligned words in the target sentence can be considered as having the same dependency relationship equivalently (e.g.
Unsupervised Dependency Grammar Induction | denotes the word pair dependency relationship (e_i → e_j).
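A much-simplified sketch of direct projection under the Direct Correspondence Assumption, restricted to one-to-one alignments for clarity (the full DPA of Hwa et al. also handles unaligned and multiply-aligned words):

```python
def project(source_deps, alignment):
    """Project source dependencies onto the target via word alignment.

    source_deps: set of (head_idx, modifier_idx) pairs in the source sentence.
    alignment: dict mapping source word index -> target word index (1-to-1).
    Dependencies whose head or modifier is unaligned are dropped.
    """
    projected = set()
    for head, mod in source_deps:
        if head in alignment and mod in alignment:
            projected.add((alignment[head], alignment[mod]))
    return projected

# Toy example: "I adore tennis" (adore->I, adore->tennis)
src_deps = {(1, 0), (1, 2)}
full_align = {0: 0, 1: 1, 2: 2}       # every source word aligned
partial_align = {0: 0, 1: 1}          # "tennis" unaligned
```

When a word is unaligned, the dependency it participates in cannot be projected, which is one source of the noise that motivates the filtering and revision steps in projection-based approaches.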
Introduction | In recent years, syntactic representations based on head-modifier dependency relations between words have attracted a lot of interest (Kubler et al., 2009). |
Towards A Universal Treebank | This mainly consisted in relabeling dependency relations and, due to the fine-grained label set used in the Swedish Treebank (Teleman, 1974), this could be done with high precision. |
Towards A Universal Treebank | Such a reduction may ultimately be necessary also in the case of dependency relations, but since most of our data sets were created through manual annotation, we could afford to retain a fine-grained analysis, knowing that it is always possible to map from finer to coarser distinctions, but not vice versa.4
Surface Realization | dependency relation of indicator and argument |
Surface Realization | constituent tag of ind^src with constituent tag of ind^tar • constituent tag of arg^src with constituent tag of arg^tar • transformation of ind^src/arg^src combined with constituent tag • dependency relation of ind^src and arg^src
Surface Realization | dependency relation of ind^tar and arg^tar
Constraints for Prior Knowledge | 4.3 Dependency Relation Constraints |
Constraints for Prior Knowledge | Another set of constraints involves dependency relations, including the subject-verb relation and the determiner-noun relation.
Experiments | In Section 4, three sets of constraints are introduced: modification count (MC), article-noun agreement (ANA), and dependency relation (DR) constraints. |