Abstract | We use linear-chain conditional random fields (CRFs) for sentence type tagging, and a 2D CRF to label the dependency relations between sentences.
Introduction | Towards this goal, in this paper, we define two tasks: labeling the types for sentences, and finding the dependency relations between sentences. |
Introduction | In this study, we use two approaches for labeling dependency relations between sentences.
Introduction | Our experimental results show that our proposed sentence type tagging method works very well, even for the minority categories, and that using a 2D CRF further improves performance over linear-chain CRFs for identifying dependency relations between sentences.
Related Work | In this paper, in order to provide a better foundation for question answer detection in online forums, we investigate tagging sentences with a much richer set of categories, as well as identifying their dependency relationships.
Thread Structure Tagging | Knowing only the sentence types without their dependency relations is not enough for question answering tasks. |
Thread Structure Tagging | Note that sentence dependency relations might not be a one-to-one relation. |
Thread Structure Tagging | Dependency relationships can hold between many different types of sentences, for example, answer(s) to question(s), problem clarification to question inquiry, feedback to solutions, etc.
Abstract | Unlike traditional realizers using grammar rules, our method realizes sentences by linearizing dependency relations directly in two steps. |
Abstract | First, the relative order between head and each dependent is determined by their dependency relation.
Abstract | The log-linear models incorporate three types of feature functions, including dependency relations, surface words and headwords.
Introduction | This paper presents a general-purpose realizer based on log-linear models for directly linearizing dependency relations given dependency structures. |
Introduction | two techniques: the first is dividing the entire dependency tree into one-depth sub-trees and solving linearization in sub-trees; the second is the determination of relative positions between dependents and heads according to dependency relations.
Introduction | Then the best linearization for each subtree is selected by the log-linear model that incorporates three types of feature functions, including dependency relations, surface words and headwords.
Relative Position Determination | In dependency structures, the semantic or grammatical roles of the nodes are indicated by types of dependency relations.
Relative Position Determination | For example, the VOB dependency relation, which stands for the verb-object structure, means that the head is a verb and the dependent is an object of the verb; the ATT relation means that the dependent is an attribute of the head.
Sentence Realization from Dependency Structure | In our dependency tree representations, dependency relations are represented as arcs pointing from a head to a dependent. |
Sentence Realization from Dependency Structure | Table 1: Numbers of pre-/post-dependents for each dependency relation. structure: 125 / 794; LAD (left adjunct): 0 / 2644; MT (mood-tense): 3203 / 0; POB (prep-obj): 7513 / 0; QUN (quantity): 0 / 6092; RAD (right adjunct): 1 / 3321; SBV (subject-verb): 6 / 16016; SIM (similarity): 0 / 44; VOB (verb-object): 23487 / 21; VV (verb-verb): 6570 / 2
Abstract | This general framework allows us to use arbitrary similarity functions between items, and to incorporate different information in our comparison, such as n-grams, dependency relations, etc.
Introduction | In this paper, we propose a new automatic MT evaluation metric, MAXSIM, that compares a pair of system-reference sentences by extracting n-grams and dependency relations . |
Introduction | Recognizing that different concepts can be expressed in a variety of ways, we allow matching across synonyms and also compute a score between two matching items (such as between two n-grams or between two dependency relations), which indicates their degree of similarity with each other.
Introduction | Also, this framework allows for defining arbitrary similarity functions between two matching items, and we could match arbitrary concepts (such as dependency relations) gathered from a sentence pair.
Metric Design Considerations | Hence, using information such as synonyms or dependency relations could potentially address the issue better. |
Metric Design Considerations | 4.2 Dependency Relations |
Metric Design Considerations | Hence, besides matching based on n-gram strings, we can also match other “information items”, such as dependency relations.
Background | A dependency relation is a binary relation that describes whether a pairwise syntactic relation between two words holds in a sentence.
Background | In ReNew, we exploit the Stanford typed dependency representations (de Marneffe et al., 2006) that use triples to formalize dependency relations . |
Background | contains three lists of dependency relations, associated respectively with positive, neutral, or negative sentiment.
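A minimal sketch of such a triple-based lexicon lookup (the `Triple` format follows the Stanford typed-dependency style; the lexicon entries and helper names here are invented for illustration, not ReNew's actual data):

```python
from collections import namedtuple

# A Stanford-style typed dependency: relation(governor, dependent).
Triple = namedtuple("Triple", ["relation", "governor", "dependent"])

# Hypothetical lexicon: one set of dependency relation triples per sentiment.
LEXICON = {
    "positive": {Triple("amod", "service", "great")},
    "neutral": set(),
    "negative": {Triple("nsubj", "tear", "sign"), Triple("amod", "box", "damaged")},
}

def lookup_sentiments(triples):
    """Return the sentiments whose lexicon shares a triple with the parsed input."""
    return sorted(s for s, entries in LEXICON.items() if entries & set(triples))

parsed = [Triple("amod", "box", "damaged"), Triple("det", "box", "the")]
print(lookup_sentiments(parsed))  # ['negative']
```

In practice the triples would come from a dependency parser rather than being hard-coded.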
Experiments | To do this, we first divide all features into four basic feature sets: T (transition cues), P (punctuation, special named entities, and segment positions), G (grammar), and OD (opinion words and dependency relations).
Framework | Third, the lexicon generator determines which newly learned dependency relation triples to promote to the lexicon. |
Framework | For each sentiment, the Triple Extractor (TE) extracts candidate dependency relation triples using a novel rule-based approach. |
Framework | Table 1: Dependency relation types used in extracting (E) and domain-specific lexicon (L).
Introduction | (3) To capture the contextual sentiment of words, ReNew uses dependency relation pairs as the basic elements in the generated sentiment lexicon. |
Introduction | After classifying the sentiment of Segment 5 as NEG, we associate the dependency relation pairs {“sign”, “wear”} and {“sign”, “tear”} with that sentiment. |
Bilingual subtree constraints | First, we have a possible dependency relation (represented as a source subtree) between words to be verified.
Bilingual subtree constraints | If yes, we activate a positive feature to encourage the dependency relation.
Bilingual subtree constraints | ing the dependency relation indicated in the target parts. |
Dependency parsing | The source subtrees are from the possible dependency relations.
Motivation | Suppose that we have an input sentence pair as shown in Figure 1, where the source sentence is in English, the target is in Chinese, the dashed undirected links are word alignment links, and the directed links between words indicate that they have a (candidate) dependency relation.
Motivation | By adding “fork”, we have two possible dependency relations, “meat-with-fork” and “ate-with-fork”, to be verified.
Evaluation | Also note that the syntactic annotation of English and Czech in PCEDT 2.0 is quite similar (to the extent permitted by the difference in the structure of the two languages) and we can use the dependency relations in our experiments. |
Model Transfer | This setup requires that we use the same feature representation for both languages; for example, part-of-speech tags and dependency relation labels should be from the same inventory.
Model Transfer | Most dependency relation inventories are language-specific, and finding a shared representation for them is a challenging problem. |
Model Transfer | One could map dependency relations into a simplified form that would be shared between languages, as it is done for part-of-speech tags in Petrov et al. |
Results | First of all, both EN-CZ and CZ-EN benefit noticeably from the use of the original syntactic annotation, including dependency relations, but not from the transferred syntax, most likely due to the low syntactic transfer performance.
Approach | tweet dependency relation |
Approach | Then, each tweet is paired with each dependency relation in the tweet, which is a candidate problem/aid nucleus, and given to the problem report and aid message recognizers.
Approach | We observed that problem reports in general included either (A) a dependency relation between a noun referring to some trouble and an excitatory template or (B) a dependency relation between a noun not referring to any trouble and an inhibitory template.
Experiments | 5The original similarity was defined over noun pairs and it was estimated from dependency relations . |
Experiments | Obtaining similarity between template pairs, not noun pairs, is straightforward given the same dependency relations . |
Introduction | An underlying assumption of our method is that we can find a noun-predicate dependency relation that works as an indicator of problems and aids in problem reports and aid messages, which we refer to as problem nucleus and aid nucleus.1 An example of problem nucleus is “infant formula is sold out” in P1, and that of aid nucleus is “(can) buy infant formula” in A1. |
Abstract | Using an ensemble method, the key information extracted from word pairs with dependency relations in the translated text is effectively integrated into the parser for the target language. |
Conclusion and Future Work | As dependency parsing is concerned with the relations of word pairs, only those word pairs with dependency relations in the translated treebank are |
Dependency Parsing: Baseline | In each step, the classifier checks a word pair, namely, s, the top of a stack that consists of the processed words, and i, the first word in the (input) unprocessed sequence, to determine if a dependency relation should be established between them.
Evaluation Results | The experimental results in (McDonald and Nivre, 2007) show a negative impact on the parsing accuracy from overly long dependency relations.
Exploiting the Translated Treebank | As we cannot expect too much from a word-by-word translation, only word pairs with dependency relations in the translated text are extracted as useful and reliable information.
Exploiting the Translated Treebank | Chinese words should be strictly segmented according to the guideline before POS tags and dependency relations are annotated.
Exploiting the Translated Treebank | The difference is, rootscore counts occurrences of the given POS tag as ROOT, and pairscore counts occurrences of a two-POS-tag combination in a dependency relationship.
Treebank Translation and Dependency Transformation | Bind the POS tag and dependency relation of a word with the word itself; 2.
Treebank Translation and Dependency Transformation | As word order is often changed after translation, the pointer of each dependency relationship, represented by a serial number, should be recalculated.
Ad hoc rule detection | On a par with constituency rules, we define a grammar rule as a dependency relation rewriting as a head with its sequence of POS/dependent pairs (cf. |
Ad hoc rule detection | Units of comparison To determine similarity, one can compare dependency relations, POS tags, or both.
Ad hoc rule detection | Thus, we use the pairs of dependency relations and POS tags as the units of comparison. |
Additional information | We extract POS pairs, note their dependency relation, and add L or R to the label to indicate which is the head (Boyd et al., 2008).
Evaluation | We can measure this by scoring each testing data position below the threshold as a 1 if it has the correct head and dependency relation and a 0 otherwise. |
Evaluation | For example, the parsed rule TA —> IG:IG RO has a correct dependency relation (IG) between the POS tags IG and its head RO, yet is assigned a whole rule score of 2 and a bigram score of 20. |
Add arc <eC,ej> to GC with | For example, the sixth feature in Table 5 represents that the dependency relation is preferred to be labeled Explanation when “because” is the first word of the dependent EDU.
Discourse Dependency Structure and Tree Bank | Similar to the syntactic dependency structure defined by McDonald (2005a, 2005b), we insert an artificial EDU e0 at the beginning of each document and label the dependency relation linking from e0 as ROOT.
Discourse Dependency Structure and Tree Bank | A labeled directed arc is used to represent the dependency relation from one head to its dependent. |
Discourse Dependency Structure and Tree Bank | Then, discourse dependency structure can be formalized as a labeled directed graph, where nodes correspond to EDUs and labeled arcs correspond to labeled dependency relations.
Introduction | Here is the basic idea: the discourse structure consists of EDUs which are linked by the binary, asymmetrical relations called dependency relations . |
Introduction | A dependency relation holds between a subordinate EDU called the dependent, and another EDU on |
Dependency-based Pre-ordering Rule Set | Here, both x and y are dependency relations (e.g., plmod or lobj in Figure 2). |
Dependency-based Pre-ordering Rule Set | We define the dependency structure of a dependency relation as the structure containing the dependent word (e.g., the word directly indicated by plmod, or “El?” in Figure 2) and the whole subtree under the dependency relation (all of the words that directly or indirectly depend on the dependent word, or the words under “El?” in Figure 2).
Dependency-based Pre-ordering Rule Set | Further, we define X and Y as the corresponding dependency structures of the dependency relations x and y, respectively. |
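Collecting the words of such a dependency structure can be sketched as follows (a toy head-index encoding of a dependency tree; this is an illustration, not the paper's implementation):

```python
def dependency_structure(dep_index, heads):
    """All tokens that directly or indirectly depend on `dep_index`, plus that
    token itself: the 'dependency structure' rooted at the dependent word.
    `heads[i]` is the head index of token i (-1 for the sentence root)."""
    subtree = {dep_index}
    changed = True
    while changed:  # propagate until no new descendants are found
        changed = False
        for i, h in enumerate(heads):
            if h in subtree and i not in subtree:
                subtree.add(i)
                changed = True
    return sorted(subtree)

# Toy tree: token 1 is the root; tokens 0 and 3 depend on 1; token 2 depends on 3.
heads = [1, -1, 3, 1]
print(dependency_structure(3, heads))  # [2, 3]
```

A production implementation would typically walk child lists instead of rescanning the head array, but the fixed point computed is the same.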
Experiment | Linefeeds are inserted between adjacent bunsetsus which do not depend on each other (Linefeed insertion based on dependency relations).
Experiment | This is because, in the correct data, linefeeds were hardly inserted between two neighboring bunsetsus which are in a dependency relation . |
Experiment | However, the precision was low because, in baseline 3, linefeeds are invariably inserted between two neighboring bunsetsus which are not in a dependency relation.
Preliminary Analysis about Linefeed Points | In the analysis, we focused on the clause boundary, dependency relation, line length, pause and morpheme of line head, and investigated the relations between them and linefeed points.
Preliminary Analysis about Linefeed Points | Next, we focused on the type of the dependency relation, by which the likelihood of linefeed insertion is different.
Preliminary Analysis about Linefeed Points | [Figure: an example sentence divided into bracketed bunsetsu; → marks a dependency relation between bunsetsu]
Abstract | We propose active learning methods of using partial dependency relations in a given sentence for parsing and evaluate their effectiveness empirically. |
Active Learning for Japanese Dependency Parsing | annotators would label either “D” for two bunsetsus having a dependency relation or “O”, which represents that the two do not.
Conclusion | In addition, as far as we know, we are the first to propose the active learning methods of using partial dependency relations in a given sentence for parsing and we have evaluated the effectiveness of our methods. |
Experimental Evaluation and Discussion | That is, “O” does not simply mean that two bunsetsus do not have a dependency relation.
Experimental Evaluation and Discussion | Issues in Assessing the Total Cost of Annotation: In this paper, we assume that each annotation cost for dependency relations is constant.
Japanese Parsing | When we use this algorithm with a machine learning-based classifier, function Dep() in Figure 3 uses the classifier to decide whether two bunsetsus have a dependency relation . |
Coordination Structures in Treebanks | it is not possible to represent nested CSs in the Moscow and Stanford families without significantly changing the number of possible labels.17 The dL style (which is most easily applicable to the Prague family) can represent coordination of different dependency relations . |
Coordination Structures in Treebanks | (2012), the normalization procedures used in HamleDT embrace many other phenomena as well (not only those related to coordination), and involve both structural transformation and dependency relation relabeling. |
Introduction | For example, if a noun is modified by two coordinated adjectives, there is a (symmetric) coordination relation between the two conjuncts and two (asymmetric) dependency relations between the conjuncts and the noun. |
Variations in representing coordination structures | Besides that, there should be a label classifying the dependency relation between the CS and its parent. |
Variations in representing coordination structures | The dependency relation of the whole CS to its parent is represented by the label of the conjunction, while the conjuncts are marked with a special label for conjuncts (e.g. ccof in the Hyderabad Dependency Treebank).
Variations in representing coordination structures | Subsequently, each conjunct has its own label that reflects the dependency relation towards the parent of the whole CS, therefore, conjuncts of the same CS can have different labels, e.g. |
Abstract | Experiments show that web-scale data improves statistical dependency parsing, particularly for long dependency relationships.
Conclusion | The results show that web-scale data improves the dependency parsing, particularly for long dependency relationships.
Experiments | The results here show that the proposed approach improves the dependency parsing performance, particularly for long dependency relationships.
Introduction | The results show that web-derived selectional preference can improve the statistical dependency parsing, particularly for long dependency relationships.
Related Work | Our research, however, applies the web-scale data (Google hits and Google V1) to model the word-to-word dependency relationships rather than compound bracketing disambiguation. |
Related Work | Our approach, however, extends these techniques to dependency parsing, particularly for long dependency relationships, which involves more challenging tasks than the previous work.
Beyond lexical CLTE | builds on two additional feature sets, derived from i) semantic phrase tables, and ii) dependency relations.
Beyond lexical CLTE | Dependency Relation (DR) matching targets the increase of CLTE precision. |
Beyond lexical CLTE | We define a dependency relation as a triple that connects pairs of words through a grammatical relation. |
Experiments and results | Dependency relations (DR) have been extracted running the Stanford parser (Rafferty and Manning, 2008; De Marneffe et al., 2006). |
Experiments and results | Dependency relations (DR) have been extracted parsing English texts and Spanish hypotheses with DepPattern (Gamallo and Gonzalez, 2011). |
Using the Framework | We first use a dependency parser (de Marneffe et al., 2006) to parse each sentence and extract the set of dependency relations associated with the sentence.
Using the Framework | For example, the sentence “I adore tennis” is represented by the dependency relations (nsubj: adore, I) and (dobj: adore, tennis). |
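Matching sentences through such dependency-relation items can be sketched as follows (a toy component-overlap similarity standing in for the framework's arbitrary similarity functions; the scoring scheme is an assumption for illustration):

```python
def item_sim(a, b):
    """Similarity of two dependency-relation items (relation, head, dependent):
    the fraction of matching components, a simple stand-in for a per-item score."""
    return sum(x == y for x, y in zip(a, b)) / 3.0

def sentence_sim(hyp_items, ref_items):
    """Match each reference item to its best-scoring hypothesis item and average."""
    if not ref_items:
        return 0.0
    return sum(max(item_sim(r, h) for h in hyp_items) for r in ref_items) / len(ref_items)

ref = [("nsubj", "adore", "I"), ("dobj", "adore", "tennis")]
hyp = [("nsubj", "adore", "I"), ("dobj", "adore", "squash")]
print(sentence_sim(hyp, ref))  # 0.8333... (one exact match, one 2/3 match)
```

A fuller implementation would use maximum bipartite matching and synonym-aware component scores rather than this greedy per-item maximum.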
Using the Framework | Each sentence represents a single node u in the graph (unless otherwise specified) and is comprised of a set of dependency relations (or ngrams) present in the sentence. |
Abstract | And we also propose an effective strategy for dependency projection, where the dependency relationships of the word pairs in the source language are projected to the word pairs of the target language, leading to a set of classification instances rather than a complete tree. |
Introduction | Given a word-aligned bilingual corpus with source language sentences parsed, the dependency relationships of the word pairs in the source language are projected to the word pairs of the target language. |
Introduction | A dependency relationship is a boolean value that represents whether this word pair forms a dependency edge. |
Projected Classification Instance | We define a boolean-valued function δ(y, i, j, r) to investigate the dependency relationship of word i and word j in parse tree y:
Word-Pair Classification Model | Here we give the calculation of the dependency probability C(i, j). We use w to denote the parameter vector of the ME model, and f(i, j, r) to denote the feature vector for the assumption that the word pair i and j has a dependency relationship r.
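A self-contained sketch of such a log-linear (maximum entropy) calculation over a word pair (the feature templates, weights, and label set below are invented for illustration, not the paper's actual model):

```python
import math

def dep_prob(w, feats, i, j, labels):
    """Log-linear probability that word pair (i, j) has dependency label r:
    p(r | i, j) = exp(w . f(i, j, r)) / sum over r' of exp(w . f(i, j, r')).
    `w` maps feature names to weights; `feats` returns the active features."""
    scores = {r: math.exp(sum(w.get(feat, 0.0) for feat in feats(i, j, r)))
              for r in labels}
    z = sum(scores.values())  # normalization constant
    return {r: s / z for r, s in scores.items()}

# Toy features and weights (assumptions for illustration only).
w = {"head=ate|dep=fork|r=obj": -1.0, "head=ate|dep=fork|r=adv": 1.0}
feats = lambda i, j, r: [f"head={i}|dep={j}|r={r}"]
p = dep_prob(w, feats, "ate", "fork", ["obj", "adv"])
print(max(p, key=p.get))  # 'adv'
```

The probabilities sum to one over the label set, as required of a conditional ME model.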
Introduction | We go one step further, however, in that we employ syntactically enriched vector models as the basic meaning representations, assuming a vector space spanned by combinations of dependency relations and words (Lin, 1998). |
The model | Figure 1 shows the co-occurrence graph of a small sample corpus of dependency trees: Words are represented as nodes in the graph, possible dependency relations between them are drawn as labeled edges, with weights corresponding to the observed frequencies. |
The model | Such a path is characterized by two dependency relations and two words, i.e., a quadruple (r, w′, r′, w″) whose weight is the product of the weights of the two edges used in the path.
The model | To avoid overly sparse vectors we generalize over the “middle word” w′ and build our second-order vectors on the dimensions corresponding to triples (r, r′, w″) of two dependency relations and one word at the end of the two-step path.
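The two-step construction above can be sketched as follows (a simplified toy version: the graph encoding and weights are invented, and details of the actual model such as inverse relations are omitted):

```python
from collections import defaultdict

def second_order_vector(word, edges):
    """Second-order vector of `word`: for each two-step path word -r-> w' -r'-> w'',
    add the product of the two edge weights to dimension (r, r', w''), summing
    over the middle word w'. `edges` maps a word to a list of (rel, word, weight)."""
    vec = defaultdict(float)
    for r, mid, w1 in edges.get(word, []):
        for r2, end, w2 in edges.get(mid, []):
            vec[(r, r2, end)] += w1 * w2
    return dict(vec)

# Toy dependency co-occurrence graph (weights are made-up frequencies).
edges = {
    "dog": [("subj", "chase", 3.0)],
    "chase": [("obj", "cat", 2.0), ("obj", "car", 1.0)],
}
print(second_order_vector("dog", edges))
# {('subj', 'obj', 'cat'): 6.0, ('subj', 'obj', 'car'): 3.0}
```

Note how the middle word "chase" disappears from the dimension labels, exactly the generalization described in the text.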
Integration of Syntactic and Lexical Information | Dependency relation (DR): Our way to overcome data sparsity is to break lexicalized frames into lexicalized slots (a.k.a. dependency relations).
Integration of Syntactic and Lexical Information | Dependency relations contain both syntactic and lexical information (4). |
Abstract | We propose using large-scale clustering of dependency relations between verbs and multi-word nouns (MNs) to construct a gazetteer for named entity recognition (NER).
Abstract | Since dependency relations capture the semantics of MNs well, the MN clusters constructed by using dependency relations should serve as a good gazetteer.
Gazetteer Induction 2.1 Induction by MN Clustering | 2.2 EM-based Clustering using Dependency Relations |
Introduction | g of Dependency Relations |
Related Work and Discussion | By parallelizing the clustering algorithm, we successfully constructed a cluster gazetteer with up to 500,000 entries from a large amount of dependency relations in Web documents.
Context and Answer Detection | However, they cannot capture the dependency relationship between sentences. |
Context and Answer Detection | To label S10, we need to consider the dependency relation between Q2 and Q3.
Context and Answer Detection | The labels of the same sentence for two contiguous questions in a thread would be conditioned on the dependency relationship between the questions. |
Introduction | One is the dependency relationship between contexts and answers, which should be leveraged especially when questions alone do not provide sufficient information to find answers; the other is the dependency between answer candidates (similar to sentence dependency described above). |
Introduction | is Ragheb and Dickinson (2013), who use MASI (Passonneau, 2006) to measure agreement on dependency relations and head selection in multi-headed dependency syntax, and Bhat and Sharma (2012), who compute Cohen’s κ (Cohen, 1960) on dependency relations in single-headed dependency syntax.
Synthetic experiments | dependency parsing: the percentage of tokens that receive the correct head and dependency relation.
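This labeled attachment score can be computed directly (a minimal sketch; the (head, relation) encoding of an analysis is an assumption for illustration):

```python
def las(gold, predicted):
    """Labeled attachment score: the percentage of tokens that receive both the
    correct head and the correct dependency relation. Each analysis is a list
    of (head_index, relation) pairs, one per token."""
    correct = sum(g == p for g, p in zip(gold, predicted))
    return 100.0 * correct / len(gold)

gold = [(2, "nsubj"), (0, "root"), (2, "obj")]
pred = [(2, "nsubj"), (0, "root"), (1, "obj")]  # wrong head on the third token
print(round(las(gold, pred), 2))  # 66.67
```

Dropping the relation from the comparison would give the unlabeled attachment score instead.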
The metric | When comparing syntactic trees, we only want to compare dependency relations or nonterminal categories. |
The metric | Therefore we remove the leaf nodes in the case of phrase structure trees, and in the case of dependency trees we compare trees whose edges are unlabelled and nodes are labelled with the dependency relation between that word and its head; the root node receives the label 6. |
Architecture of BRAINSUP | Candidate fillers for each empty position (slot) in the patterns are chosen according to the lexical and syntactic constraints enforced by the dependency relations in the patterns. |
Architecture of BRAINSUP | To achieve that, we analyze a large corpus of parsed sentences L3 and store counts of observed head-relation-modifier (h, r, m) dependency relations.
Architecture of BRAINSUP | a dependency relation.
Experiments | 14 MALT’s features are similar to part-of-speech tags and untyped dependency relations.
Our framework | Although regular “untyped” dependency relations do not seem to help our QSD system in the presence of phrase-structure trees, we found the col- |
Our framework | Type of incoming dependency relation of each noun; syntactic category of the deepest common ancestor; lexical item of the deepest common ancestor; length of the undirected path between the two
Related work | dependency relations) are used on both the train and the test data.
Experiments | Dependency relations are used as context profiles as in Kazama and Torisawa (2008) and Kazama et al. |
Experiments | For example, we extract a dependency relation from the sentence below, where a postposition “を (wo)” is used to mark the verb object.
Experiments | (2009) proposed using the Jensen-Shannon divergence between hidden class distributions, p(c|w1) and p(c|w2), which are obtained by using an EM-based clustering of dependency relations with a model p(wi, fk) = Σc p(wi|c) p(fk|c) p(c) (Kazama and Torisawa, 2008).
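The Jensen-Shannon divergence between two such class distributions can be sketched as follows (a generic implementation with base-2 logs, so the result lies in [0, 1]; the toy distributions are invented for illustration):

```python
import math

def js_divergence(p, q):
    """Jensen-Shannon divergence between two class distributions, e.g. p(c|w1)
    and p(c|w2), each given as a dict mapping class -> probability."""
    keys = set(p) | set(q)
    m = {c: 0.5 * (p.get(c, 0.0) + q.get(c, 0.0)) for c in keys}  # mixture

    def kl(a, b):
        # Kullback-Leibler divergence, skipping zero-probability classes.
        return sum(a[c] * math.log2(a[c] / b[c]) for c in keys if a.get(c, 0.0) > 0)

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

print(js_divergence({"c1": 1.0}, {"c1": 1.0}))  # 0.0 (identical distributions)
```

Unlike raw KL divergence, this quantity is symmetric and always finite, which makes it usable as a word-similarity measure.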
Introduction | Each dimension of the vector corresponds to a context, fk, which is typically a neighboring word or a word having dependency relations with wi in a corpus.
Analysis of reference compressions | The solid arrows indicate dependency relations between words2. |
Analysis of reference compressions | We observe that the dependency relations are changed by compression; humans create compound nouns using the components derived from different portions of the original sentence without regard to syntactic constraints. |
Analysis of reference compressions | 2Generally, a dependency relation is defined between bunsetsu.
Introduction | Figure 1: An example of the dependency relation variant. |
Gaining Dependency Structures | Dependency Relation |
Gaining Dependency Structures | Table 1: Mapping from HPSG’s PAS types to dependency relations.
Gaining Dependency Structures | heads and (2) appending dependency relations for those words/punctuation that do not have any head. |
Bilingual Projection of Dependency Grammar | Take Figure 2 as an example: following the Direct Projection Algorithm (DPA) (Hwa et al., 2005) (Section 5), the dependency relationships between words can be directly projected from the source
Related work | The dependency projection method DPA (Hwa et al., 2005), based on the Direct Correspondence Assumption (Hwa et al., 2002), can be described as: if there is a pair of source words with a dependency relationship, the corresponding aligned words in the target sentence can be considered as having the same dependency relationship equivalently (e.g.
Unsupervised Dependency Grammar Induction | denotes the word pair dependency relationship (ei → ej).
Experiment and Results | Interestingly, the underspecified dependency relation DEP which is typically used in cases for which no obvious syntactic dependency comes to mind shows a suspicion rate of 0.61 (595F/371P). |
Mining Dependency Trees | First, dependency trees are converted to Breadth-First Canonical Form whereby lexicographic order can apply to the word forms labelling tree nodes, to their part of speech, to their dependency relation or to any combination thereof3.
Mining Dependency Trees | 3For convenience, the dependency relation labelling the edges of dependency trees is brought down to the daughter node of the edge. |
Introduction | In recent years, syntactic representations based on head-modifier dependency relations between words have attracted a lot of interest (Kubler et al., 2009). |
Towards A Universal Treebank | This mainly consisted in relabeling dependency relations and, due to the fine-grained label set used in the Swedish Treebank (Teleman, 1974), this could be done with high precision. |
Towards A Universal Treebank | Such a reduction may ultimately be necessary also in the case of dependency relations, but since most of our data sets were created through manual annotation, we could afford to retain a fine-grained analysis, knowing that it is always possible to map from finer to coarser distinctions, but not vice versa.4
Introduction | We also used the Kyoto University Text Corpus4 that provides dependency relation information for the same articles as the NAIST Text Corpus.
Introduction | To create a subject detection model for Italian, we used the TUT corpus9 (Bosco et al., 2010), which contains manually annotated dependency relations and their labels, consisting of 80,878 tokens in CoNLL format. |
Introduction | We induced a maximum entropy classifier by using as items all arcs of dependency relations, each of which is used as a positive instance if its label is subject; otherwise it is used as a negative instance.
Experimental Setup and Results | The sentence representations in the middle part of Figure 2 show these sentences with some of the dependency relations (relevant to our transformations) extracted by the parser, explicitly marked as labeled links. |
Syntax-to-Morphology Mapping | When we tag and syntactically analyze the English side into dependency relations, and morphologically analyze and disambiguate the Turkish phrase, we get the representation in the middle of Figure 1, where we have co-indexed components that should map to each other, and some of the syntactic relations that the function words are involved in are marked with dependency links.1
Syntax-to-Morphology Mapping | Here <x>, <Y> and <z> can be considered as Prolog-like variables that bind to patterns (mostly root words), and the conditions check for specified dependency relations (e.g., PMOD) between the left and the right sides.
Surface Realization | dependency relation of indicator and argument |
Surface Realization | constituent tag of ind_src with constituent tag of ind_tar; constituent tag of arg_src with constituent tag of arg_tar; transformation of ind_src/arg_src combined with constituent tag; dependency relation of ind_src and arg_src
Surface Realization | dependency relation of ind_tar and arg_tar
Constraints for Prior Knowledge | 4.3 Dependency Relation Constraints |
Constraints for Prior Knowledge | Another set of constraints involves dependency relations, including the subject-verb relation and the determiner-noun relation.
Experiments | In Section 4, three sets of constraints are introduced: modification count (MC), article-noun agreement (ANA), and dependency relation (DR) constraints. |
Dependency Parsing with HPSG | The dependency backbone extraction works by first identifying the head daughter for each binary grammar rule, then propagating the head word of the head daughter upwards to their parents, and finally creating a dependency relation, labeled with the HPSG rule name of the parent node, from the head word of the parent to the head word of the non-head daughter.
Dependency Parsing with HPSG | Therefore, we extend this feature set by adding four more feature categories, which are similar to the original ones, but the dependency relation was replaced by the dependency backbone of the HPSG outputs.
Experiment Results & Error Analyses | unlabeled dependency relation), fine-grained HPSG features do help the parser to deal with colloquial sentences, such as “What’s wrong with you?”.
Our Approach | The dependency tree indicates the dependency relations between words. |
Our Approach | The dependency relation types are retained to guide the sentiment propagation in our model.
Our Approach | Notably, the conversion is performed recursively for the connected words and the dependency relation types are retained.
Related Work | Also, syntactic features such as the dependency relationship of words and subtrees have been shown to effectively improve the performances of sentiment analysis (Kudo and Matsumoto, 2004; Gamon, 2004; Matsumoto et al., 2005; Ng et al., 2006). |
Term Weighting and Sentiment Analysis | Another method is to use proximal information of the query and the word, using syntactic structure such as dependency relations of words that provide the graphical representation of the text (Mullen and Collier, 2004). |
Term Weighting and Sentiment Analysis | Note that proximal features using co-occurrence and dependency relationships were used in previous work. |
Graph Features | - the dependency relation nsubj(what, name) and prep_of(name, brother) indicates that the question seeks the information of a name;4
Graph Features | - the dependency relation prep_of(name, brother) indicates that the name is about a brother (but we do not know whether it is a person name yet);
Graph Features | - the dependency relation nn(brother, bieber) and the facts that, (i) Bieber is a person and (ii) a person’s brother should also be a person, indicate that the name is about a person.
Phenomena and Requirements | The edges express dependency relations between nodes.
Phenomena and Requirements | The predicative complement is a nonobligatory free modification (adjunct) which has a dual semantic dependency relation . |
Phenomena and Requirements | These two dependency relations are represented by different means (t-manual, page 376): |