Abstract | Unlike traditional realizers that use grammar rules, our method realizes sentences by directly linearizing dependency relations in two steps. |
Abstract | First, the relative order between the head and each dependent is determined by their dependency relation. |
Abstract | The log-linear models incorporate three types of feature functions: dependency relations, surface words, and headwords. |
Introduction | This paper presents a general-purpose realizer based on log-linear models for directly linearizing dependency relations given dependency structures. |
Introduction | We employ two techniques: the first divides the entire dependency tree into one-depth subtrees and solves linearization within each subtree; the second determines the relative positions of dependents and their heads according to their dependency relations. |
Introduction | Then the best linearization for each subtree is selected by a log-linear model that incorporates three types of feature functions: dependency relations, surface words, and headwords. |
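The two-step subtree linearization described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature names, the weight dictionary, and the (word, relation) encoding of a one-depth subtree are all assumptions.

```python
import itertools

def linearize_subtree(head, deps, weights):
    """Pick the best linear order for a one-depth subtree.

    head:    the head word of the subtree
    deps:    list of (word, relation) dependents
    weights: dict mapping feature strings to log-linear weights
             (illustrative feature names; the real feature set is richer)
    """
    best, best_score = None, float("-inf")
    # Enumerate every permutation of the dependents and every head slot.
    for perm in itertools.permutations(deps):
        for head_pos in range(len(perm) + 1):
            order = ([w for w, _ in perm[:head_pos]] + [head]
                     + [w for w, _ in perm[head_pos:]])
            score = 0.0
            for i, (w, rel) in enumerate(perm):
                side = "pre" if i < head_pos else "post"
                # Relation, surface-word, and headword features, as in the text.
                score += weights.get(f"rel={rel}:{side}", 0.0)
                score += weights.get(f"word={w}:{side}", 0.0)
                score += weights.get(f"head={head}:rel={rel}:{side}", 0.0)
            if score > best_score:
                best, best_score = order, score
    return best
```

With weights that prefer subjects before the head and objects after it, the subtree for "saw" with dependents "John" (SBV) and "Mary" (VOB) linearizes as "John saw Mary".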
Relative Position Determination | In dependency structures, the semantic or grammatical roles of the nodes are indicated by the types of dependency relations. |
Relative Position Determination | For example, the VOB dependency relation, which stands for the verb-object structure, means that the head is a verb and the dependent is an object of that verb; the ATT relation means that the dependent is an attribute of the head. |
Sentence Realization from Dependency Structure | In our dependency tree representations, dependency relations are represented as arcs pointing from a head to a dependent. |
Sentence Realization from Dependency Structure | Table 1: Numbers of pre/post-dependents for each dependency relation (relation, gloss, #pre / #post): … structure: 125 / 794; LAD (left adjunct): 0 / 2644; MT (mood-tense): 3203 / 0; POB (prep-obj): 7513 / 0; QUN (quantity): 0 / 6092; RAD (right adjunct): 1 3 32 1; SBV (subject-verb): 6 / 16016; SIM (similarity): 0 / 44; VOB (verb-object): 23487 / 21; VV (verb-verb): 6570 / 2. |
Abstract | Using an ensemble method, the key information extracted from word pairs with dependency relations in the translated text is effectively integrated into the parser for the target language. |
Conclusion and Future Work | As dependency parsing is concerned with the relations of word pairs, only those word pairs with dependency relations in the translated treebank are extracted. |
Dependency Parsing: Baseline | In each step, the classifier checks a word pair, namely s, the top of a stack that consists of the processed words, and i, the first word in the unprocessed input sequence, to determine whether a dependency relation should be established between them. |
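A minimal sketch of this shift-reduce step. The `classify` callback stands in for the trained classifier, and the action names and the arc-eager-style handling of right arcs are illustrative assumptions, not the paper's exact transition system.

```python
def parse(words, classify):
    """One-pass shift-reduce dependency parsing (sketch).

    words:    input word sequence
    classify: hypothetical classifier; given (s, i) it returns
              "left", "right", or "shift"
    Returns a set of (head, dependent) arcs.
    """
    stack, arcs = [], set()
    buf = list(words)
    while buf:
        i = buf[0]
        if not stack:                 # nothing to compare yet
            stack.append(buf.pop(0))
            continue
        s = stack[-1]
        action = classify(s, i)
        if action == "left":          # i is the head of s
            arcs.add((i, s))
            stack.pop()
        elif action == "right":       # s is the head of i
            arcs.add((s, i))
            stack.append(buf.pop(0))
        else:                         # no relation yet: shift i onto the stack
            stack.append(buf.pop(0))
    return arcs
```

For "John saw Mary" with a classifier that attaches the subject leftward and the object rightward, the parser produces the arcs (saw, John) and (saw, Mary).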
Evaluation Results | The experimental results in (McDonald and Nivre, 2007) show that overly long dependency relations have a negative impact on parsing accuracy. |
Exploiting the Translated Treebank | As we cannot expect too much from a word-by-word translation, only word pairs with dependency relations in the translated text are extracted as useful and reliable information. |
Exploiting the Translated Treebank | Chinese words should be strictly segmented according to the guidelines before POS tags and dependency relations are annotated. |
Exploiting the Translated Treebank | The difference is that rootscore counts occurrences of a given POS tag as ROOT, while pairscore counts occurrences of a POS-tag pair in a dependency relationship. |
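The two counts might be collected as in the following sketch. The sentence encoding (1-based word indices, head index 0 meaning ROOT) is an assumption for illustration, not the paper's data format.

```python
from collections import Counter

def collect_scores(treebank):
    """Count POS-tag statistics from a dependency treebank (sketch).

    treebank: list of sentences; each sentence is a list of
              (pos_tag, head_index) pairs, where head_index 0 means ROOT
              and word indices start at 1 (illustrative encoding).
    """
    rootscore, pairscore = Counter(), Counter()
    for sent in treebank:
        for idx, (pos, head) in enumerate(sent, start=1):
            if head == 0:
                rootscore[pos] += 1                  # this POS occurs as ROOT
            else:
                head_pos = sent[head - 1][0]
                pairscore[(head_pos, pos)] += 1      # (head-POS, dependent-POS) pair
    return rootscore, pairscore
```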
Treebank Translation and Dependency Transformation | Bind the POS tag and dependency relation of each word to the word itself; 2. |
Treebank Translation and Dependency Transformation | As word order often changes after translation, the pointer of each dependency relationship, represented by a serial number, must be recalculated. |
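Recalculating the serial-number pointers after reordering amounts to composing the old head map with an old-to-new index mapping. A minimal sketch, assuming a dict-based encoding of the head pointers (the encoding is illustrative):

```python
def renumber_heads(old_heads, new_order):
    """Recompute head pointers after word reordering (sketch).

    old_heads: dict mapping old serial number -> old head serial number
               (0 = ROOT)
    new_order: list of old serial numbers in their post-translation order
    Returns head pointers keyed by the new serial numbers (1-based).
    """
    old_to_new = {old: new for new, old in enumerate(new_order, start=1)}
    old_to_new[0] = 0  # ROOT stays ROOT
    return {old_to_new[dep]: old_to_new[head] for dep, head in old_heads.items()}
```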
Experiment | Linefeeds are inserted between adjacent bunsetsus that do not depend on each other (linefeed insertion based on dependency relations). |
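This dependency-based insertion rule can be sketched as follows, assuming a simple head-index encoding of the bunsetsu dependencies (the encoding is an illustrative assumption, not the paper's format):

```python
def insert_linefeeds(bunsetsus, heads):
    """Insert a linefeed between adjacent bunsetsus that are not in a
    dependency relation (sketch of the rule described above).

    bunsetsus: list of bunsetsu strings
    heads:     heads[i] is the index of the bunsetsu that bunsetsu i
               depends on (-1 for the sentence-final root)
    """
    lines, current = [], [bunsetsus[0]]
    for i in range(1, len(bunsetsus)):
        # Keep the adjacent pair (i-1, i) on one line only if one
        # of the two depends on the other.
        if heads[i - 1] == i or heads[i] == i - 1:
            current.append(bunsetsus[i])
        else:
            lines.append("".join(current))
            current = [bunsetsus[i]]
    lines.append("".join(current))
    return lines
```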
Experiment | This is because, in the correct data, linefeeds were hardly ever inserted between two neighboring bunsetsus that are in a dependency relation. |
Experiment | However, the precision was low because, in Baseline 3, linefeeds are invariably inserted between two neighboring bunsetsus that are not in a dependency relation. |
Preliminary Analysis about Linefeed Points | In the analysis, we focused on clause boundaries, dependency relations, line length, pauses, and the morpheme at the head of a line, and investigated their relation to linefeed points. |
Preliminary Analysis about Linefeed Points | Next, we focused on the type of dependency relation, by which the likelihood of linefeed insertion differs. |
Preliminary Analysis about Linefeed Points | [Figure: an example sentence divided into bunsetsus (bracketed), with arrows marking the dependency relations between them] |
Analysis of reference compressions | The solid arrows indicate dependency relations between words2. |
Analysis of reference compressions | We observe that the dependency relations are changed by compression; humans create compound nouns using the components derived from different portions of the original sentence without regard to syntactic constraints. |
Analysis of reference compressions | 2 Generally, a dependency relation is defined between bunsetsu. |
Introduction | Figure 1: An example of the dependency relation variant. |
Related Work | Also, syntactic features such as the dependency relationships of words and subtrees have been shown to effectively improve the performance of sentiment analysis (Kudo and Matsumoto, 2004; Gamon, 2004; Matsumoto et al., 2005; Ng et al., 2006). |
Term Weighting and Sentiment Analysis | Another method is to use proximity information between the query and the word, based on syntactic structure such as the dependency relations of words, which provide a graph representation of the text (Mullen and Collier, 2004). |
Term Weighting and Sentiment Analysis | Note that proximal features using co-occurrence and dependency relationships were used in previous work. |
Dependency Parsing with HPSG | The dependency backbone extraction works by first identifying the head daughter of each binary grammar rule, then propagating the head word of the head daughter upwards to its parent, and finally creating a dependency relation, labeled with the HPSG rule name of the parent node, from the head word of the parent to the head word of the non-head daughter. |
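The extraction procedure can be sketched as a recursion over a binarized derivation. The tuple encoding of the tree and the `head_of` table mapping rule names to head-daughter positions are assumptions for illustration, not the actual HPSG grammar interface.

```python
def extract_backbone(tree, head_of, arcs=None):
    """Extract a dependency backbone from a binary HPSG derivation (sketch).

    tree:    either a word string (leaf) or (rule_name, left_child, right_child)
    head_of: hypothetical map from rule name to 0/1, the head daughter's index
    Returns (head word of `tree`, list of (head, dependent, rule) arcs).
    """
    if arcs is None:
        arcs = []
    if isinstance(tree, str):          # leaf: the word is its own head
        return tree, arcs
    rule, left, right = tree
    lhead, _ = extract_backbone(left, head_of, arcs)
    rhead, _ = extract_backbone(right, head_of, arcs)
    # Propagate the head daughter's head word upwards; the non-head
    # daughter's head word becomes a dependent, labeled with the rule name.
    head, dep = (lhead, rhead) if head_of[rule] == 0 else (rhead, lhead)
    arcs.append((head, dep, rule))
    return head, arcs
```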
Dependency Parsing with HPSG | Therefore, we extend this feature set by adding four more feature categories, which are similar to the original ones but with the dependency relation replaced by the dependency backbone of the HPSG outputs. |
Experiment Results & Error Analyses | unlabeled dependency relation), fine-grained HPSG features do help the parser deal with colloquial sentences, such as “What’s wrong with you?”. |