Abstract | We propose a computationally efficient graph-based approach for local coherence modeling. |
Experiments | We evaluate the ability of our graph-based model to estimate the local coherence of a textual document with three different experiments. |
Experiments | Our graph-based model obtains an accuracy of 0.846 and 0.635 on the discrimination task for the ACCIDENTS and EARTHQUAKES datasets, respectively, compared to 0.904 and 0.872 as reported by Barzilay and Lapata (2008).
Experiments | Table 3: Discrimination, reproduced baselines (B&L: Barzilay and Lapata (2008); E&C: Elsner and Charniak (2011)) vs. graph-based
Introduction | Similar to the application of graph-based methods in other areas of NLP (e.g. |
Introduction | work on word sense disambiguation by Navigli and Lapata (2010); for an overview of graph-based methods in NLP see Mihalcea and Radev (2011)) we model local coherence by relying only on centrality measures applied to the nodes in the graph.
Introduction | We apply our graph-based model to the three tasks handled by Barzilay and Lapata (2008) to show that it provides the same flexibility over disparate tasks as the entity grid model: sentence ordering (Section 4.1), summary coherence ranking (Section 4.2), and readability assessment (Section 4.3). |
Method | In contrast to Barzilay and Lapata’s entity grid that contains information about absent entities, our graph-based representation only contains “positive” information. |
Method | From this graph-based representation, the local coherence of a text T can be measured by computing the average outdegree of a projection graph P. This centrality measure was chosen for two main reasons. |
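As an illustration of the centrality measure described above, the following is a minimal sketch of average outdegree over a projection graph. The adjacency-list representation and the function name are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: local coherence as the average outdegree of a
# projection graph P. The adjacency-list representation and the
# function name are illustrative assumptions, not from the paper.

def avg_outdegree(projection_graph):
    """Average outdegree over all nodes in the graph.

    projection_graph: dict mapping each node to the set of nodes
    it has an outgoing edge to.
    """
    if not projection_graph:
        return 0.0
    total = sum(len(succs) for succs in projection_graph.values())
    return total / len(projection_graph)

# Example: a 3-sentence document where s1 connects to s2 and s3,
# and s2 connects to s3.
p = {"s1": {"s2", "s3"}, "s2": {"s3"}, "s3": set()}
print(avg_outdegree(p))  # 3 edges / 3 nodes = 1.0
```

A document with denser forward connections between sentences yields a higher average outdegree, which the model takes as a signal of higher local coherence.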
Abstract | This paper introduces a graph-based semi-supervised joint model of Chinese word segmentation and part-of-speech tagging. |
Abstract | The proposed approach is based on a graph-based label propagation technique. |
Background | 3.2 Graph-based Label Propagation |
Background | Graph-based label propagation, a critical subclass of semi-supervised learning (SSL), has been widely used and shown to outperform other SSL methods (Chapelle et al., 2006). |
Background | Typically, graph-based label propagation algorithms are run in two main steps: graph construction and label propagation. |
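The two steps above can be sketched in miniature as follows. The symmetric averaging update, the graph, and all names are illustrative assumptions; actual systems use weighted similarity graphs and more careful propagation objectives.

```python
# Minimal sketch of the two steps of graph-based label propagation:
# (1) graph construction over labeled + unlabeled nodes,
# (2) iterative propagation of label distributions along edges.
# The simple neighbor-averaging update is an illustrative assumption.

def propagate(edges, seed_labels, n_iter=20):
    """edges: dict node -> list of neighbor nodes (symmetric graph).
    seed_labels: dict node -> label distribution (dict label -> prob),
    held fixed for labeled nodes. Returns distributions for all nodes."""
    labels = {n: dict(seed_labels.get(n, {})) for n in edges}
    for _ in range(n_iter):
        new = {}
        for node, nbrs in edges.items():
            if node in seed_labels:          # labeled nodes stay fixed
                new[node] = seed_labels[node]
                continue
            acc = {}
            for nb in nbrs:                  # accumulate neighbor mass
                for lab, p in labels[nb].items():
                    acc[lab] = acc.get(lab, 0.0) + p
            z = sum(acc.values()) or 1.0     # renormalize
            new[node] = {lab: p / z for lab, p in acc.items()}
        labels = new
    return labels

# Step 1: construct a small similarity graph (here given directly);
# step 2: propagate from the two labeled nodes to the unlabeled one.
g = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
seeds = {"a": {"NOUN": 1.0}, "c": {"VERB": 1.0}}
dist = propagate(g, seeds)
print(dist["b"])  # mixes NOUN and VERB mass from its two neighbors
```

The unlabeled node "b" ends up with an even mixture of its neighbors' labels, which is the basic mechanism by which label information flows from labeled to unlabeled data.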
Introduction | This study focuses on using a graph-based label propagation method to build a semi-supervised joint S&T model. |
Introduction | Graph-based label propagation methods have recently been shown to outperform the state of the art in several natural language processing (NLP) tasks, e.g., POS tagging (Subramanya et al., 2010), knowledge acquisition (Talukdar et al., 2008), and shallow semantic parsing for unknown predicates (Das and Smith, 2011).
Introduction | Motivated by the work of Subramanya et al. (2010) and Das and Smith (2011) on structured problems, graph-based label propagation can be employed to infer valuable syntactic information (n-gram-level label distributions) from labeled to unlabeled data.
Method | The proposed approach employs a transductive graph-based label propagation method to acquire such valuable information, i.e., label distributions from a similarity graph constructed over labeled and unlabeled data.
Related Work | (2010) proposed a graph-based, self-training-style semi-supervised CRF algorithm.
Experiments | * SSL-Graph: An SSL model presented in (Subramanya et al., 2010) that uses graph-based learning as a posterior tag smoother for the CRF model using Eq.
Experiments | For graph-based learning, we implemented the algorithm presented in (Subramanya et al., 2010) and used the same hyper-parameters and features. |
Related Work and Motivation | Recent adaptation methods for SSL include expectation minimization (Daumé-III, 2010), graph-based learning (Chapelle et al., 2006; Zhu, 2005), etc.
Related Work and Motivation | In (Subramanya et al., 2010), an efficient iterative SSL method is described for syntactic tagging, using graph-based learning to smooth POS tag posteriors.
Semi-Supervised Semantic Labeling | The unlabeled POS tag posteriors are then smoothed using a graph-based learning algorithm. |
Semi-Supervised Semantic Labeling | Graph-based SSL defines a new CRF objective function: |
Semi-Supervised Semantic Labeling | smoothing model, instead of a graph-based model, as follows: |
Experiments & Results 4.1 Experimental Setup | For evaluating our baseline as well as graph-based approaches, we use both intrinsic and extrinsic evaluations. |
Experiments & Results 4.1 Experimental Setup | 4.3.1 Graph-based Results |
Graph-based Lexicon Induction | Graph-based approaches can easily become computationally very expensive as the number of nodes grows.
Related work | (2010) used linguistic analysis in the form of graph-based models instead of a vector space. |
Related work | Graph-based semi-supervised methods have been shown to be useful for domain adaptation in MT as well. |
Related work | Alexandrescu and Kirchhoff (2009) applied a graph-based method to determine similarities between sentences and use these similarities to promote similar translations for similar sentences. |
Introduction | In recent years, graph-based methods have attracted considerable attention (Mihalcea, 2005; Navigli and Lapata, 2007; Agirre and Soroa, 2009).
Introduction | Over the graph structure of a lexical knowledge base (LKB), random walks and other well-known graph-based techniques have been applied to find mutually related senses among target words.
Introduction | Unlike earlier studies that disambiguate word by word, graph-based methods obtain a sense-interdependent solution for the target words.
Related Work | As described in Section 1, graph-based WSD has been extensively studied, since graphs are a favorable structure for dealing with interactions of data on vertices.
Related Work | Our method can be viewed as a graph-based method, but it regards input-to-class mappings as vertices, with edges representing relations both in context and in sense.
Related Work | Mihalcea (2005) proposed graph-based methods, whose vertices are sense label hypotheses on word sequence. |
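The approach of Mihalcea (2005) can be sketched as PageRank-style scoring over a graph whose vertices are sense hypotheses. The graph contents, names, and the unweighted PageRank update below are invented for the example, not taken from that paper.

```python
# Illustrative sketch: PageRank over a graph whose vertices are
# (word, sense) hypotheses, in the spirit of Mihalcea (2005).
# The toy graph and all names are invented for the example.

def pagerank(neighbors, damping=0.85, n_iter=50):
    """neighbors: dict node -> list of nodes it links to."""
    nodes = list(neighbors)
    score = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(n_iter):
        new = {}
        for n in nodes:
            # sum rank mass flowing in from nodes that link to n
            rank = sum(score[m] / len(neighbors[m])
                       for m in nodes if n in neighbors[m])
            new[n] = (1 - damping) / len(nodes) + damping * rank
        score = new
    return score

# Edges link senses of different words that are semantically related;
# mutually supporting senses reinforce each other's scores.
g = {
    ("bank", "finance"): [("deposit", "money")],
    ("bank", "river"): [],
    ("deposit", "money"): [("bank", "finance")],
}
scores = pagerank(g)
best = max([("bank", "finance"), ("bank", "river")], key=scores.get)
print(best)  # the mutually supported sense wins
```

The isolated sense hypothesis receives only the teleportation mass, while the mutually linked hypotheses reinforce each other, which is the intuition behind sense-interdependent disambiguation on such graphs.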
A Multigraph Model | Work on graph-based models similar to ours report robustness with regard to the amount of training data used (Cai et al., 2011b; Cai et al., 2011a; Martschat et al., 2012). |
Conclusions and Future Work | We presented an unsupervised graph-based model for coreference resolution. |
Introduction | In this paper we present a graph-based approach for coreference resolution that models a document to be processed as a graph. |
Related Work | Graph-based coreference resolution. |
Related Work | While not developed within a graph-based framework, factor-based approaches for pronoun resolution (Mitkov, 1998) can be regarded as greedy clustering in a multigraph, where edges representing factors for pronoun resolution have negative or positive weight. |
Connotation Induction Algorithms | Limitations of Graph-based Algorithms |
Connotation Induction Algorithms | Although graph-based algorithms (§2.1, §2.2) provide an intuitive framework to incorporate various lexical relations, limitations include:
Connotation Induction Algorithms | Addressing limitations of graph-based algorithms (§2.2), we propose an induction algorithm based on Integer Linear Programming (ILP). |
Experimental Result I | The [OVERLAY], which is based on both Pred-Arg and Arg-Arg subgraphs (§2.2), achieves the best performance among graph-based algorithms, significantly improving the precision over all other baselines. |
Experiments | The second block shows results from other kinds of parsing approaches (e.g., graph-based parsing, ensemble parsing, linear programming, dual decomposition). |
Experiments | Our parser achieves accuracy comparable to Koo and Collins (2010), a third-order graph-based parsing approach.
Experiments | Nivre and McDonald (2008) use an ensemble model combining transition-based and graph-based parsing approaches.
Conclusions | We introduced a new general-purpose graph-based summarization framework that combines a submodular coverage function with a non-submodular dispersion function. |
Introduction | We propose a very general graph-based summarization framework that combines a submodular function with a non-submodular dispersion function. |
Related Work | Graph-based methods have been used for summarization (Ganesan et al., 2010), but in a different context—using paths in graphs to produce very short abstractive summaries. |
Experiments | In such a situation, the graph-based ranking algorithm in the second component will tend to be affected by the frequency information, so the final performance is not sensitive to the performance of opinion relation identification.
Opinion Target Extraction Methodology | To extract opinion targets from reviews, we adopt the framework proposed by Liu et al. (2012), which is a graph-based extraction framework.
Opinion Target Extraction Methodology | In the second component, we adopt a graph-based algorithm used in (Liu et al., 2012) to compute the confidence of each opinion target candidate, and the candidates with higher confidence than the threshold will be extracted as the opinion targets. |
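The thresholding step described above can be sketched as follows. The confidence scores are given directly here rather than computed by the graph algorithm of Liu et al. (2012); all names and values are illustrative assumptions.

```python
# Minimal sketch of thresholded candidate extraction: each opinion
# target candidate carries a confidence score (given directly here,
# not computed by the graph algorithm), and candidates whose
# confidence exceeds the threshold are extracted.

def extract_targets(confidence, threshold):
    """confidence: dict candidate -> score.
    Returns candidates above the threshold, highest score first."""
    kept = [c for c, s in confidence.items() if s > threshold]
    return sorted(kept, key=confidence.get, reverse=True)

conf = {"battery life": 0.92, "screen": 0.78, "thing": 0.15}
print(extract_targets(conf, threshold=0.5))  # ['battery life', 'screen']
```

The low-confidence candidate ("thing") is filtered out, mirroring how the framework keeps only candidates whose graph-computed confidence clears the threshold.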
Related Work | Many previous works also employed graph-based methods (Li et al., 2012; Zhang et al., 2010; Hassan and Radev, 2010; Liu et al., 2012), but none of them considered the confidence of patterns in the graph.
The First Stage: Sentiment Graph Walking Algorithm | In the first stage, we propose a graph-based algorithm called Sentiment Graph Walking to mine opinion words and opinion targets from reviews. |
The First Stage: Sentiment Graph Walking Algorithm | We can see that our graph-based methods (Ours-Bigraph and Ours-Stage1) achieve higher recall than Zhang.