Index of papers in Proc. ACL that mention
  • maximum spanning
Kolomiyets, Oleksandr and Bethard, Steven and Moens, Marie-Francine
Abstract
We compare two parsing models for temporal dependency structures, and show that a deterministic non-projective dependency parser outperforms a graph-based maximum spanning tree parser, achieving labeled attachment accuracy of 0.647 and labeled tree edit distance of 0.596.
Discussion and Conclusions
Comparing the two dependency parsing models, we have found that a shift-reduce parser, which more closely mirrors the incremental processing of our human annotators, outperforms a graph-based maximum spanning tree parser.
Evaluations
Table 2: Features for the shift-reduce parser (SRP) and the graph-based maximum spanning tree (MST) parser.
Evaluations
The Shift-Reduce parser (SRP; Section 4.1) and the graph-based, maximum spanning tree parser (MST; Section 4.2) are compared to these baselines.
Evaluations
In terms of labeled attachment score, both dependency parsing models outperformed the baseline models — the maximum spanning tree parser achieved 0.614 LAS, and the shift-reduce parser achieved 0.647 LAS.
Feature Design
The shift-reduce parser (SRP) trains a machine learning classifier as the oracle o : C → T to predict a transition t from a parser configuration c = (L1, L2, Q, E), using node features such as the heads of L1, L2 and Q, and edge features from the already predicted temporal relations in E. The graph-based maximum spanning tree (MST) parser trains a machine learning model to predict SCORE(e) for an edge e = (wi, rj, wk), using features of the nodes wi and wk.
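The transition loop driven by such an oracle can be sketched generically. The following is a minimal arc-standard shift-reduce skeleton, not the paper's (L1, L2, Q, E) configuration: `parse` and `oracle` are hypothetical names, and the oracle stands in for the trained classifier.

```python
def parse(words, oracle):
    """Arc-standard sketch. oracle(stack, buffer) -> 'SHIFT' | 'LEFT' | 'RIGHT'.
    Returns dependency edges as (head_index, dependent_index) pairs."""
    stack, buffer, edges = [], list(range(len(words))), []
    while buffer or len(stack) > 1:
        # a transition is only asked for when two items can be attached
        action = 'SHIFT' if len(stack) < 2 else oracle(stack, buffer)
        if action == 'SHIFT' and buffer:
            stack.append(buffer.pop(0))
        elif action == 'LEFT':
            dep = stack.pop(-2)           # second-from-top gets top as head
            edges.append((stack[-1], dep))
        else:                             # 'RIGHT' (also the fallback when
            dep = stack.pop()             # 'SHIFT' is requested on an empty buffer)
            edges.append((stack[-1], dep))
    return edges
```

With an oracle that shifts everything and then reduces rightward, this produces a right-branching chain of head-dependent edges.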
Parsing Models
The SPANNINGTREE function is usually defined using one of the efficient search techniques for finding a maximum spanning tree.
Parsing Models
The result is the globally optimal maximum spanning tree for the graph (Georgiadis, 2003).
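The greedy-pick-plus-contraction search behind SPANNINGTREE can be sketched as follows. This is a compact, unoptimized recursive Chu-Liu/Edmonds over a dense score matrix; it assumes a complete graph, the function names are this sketch's own, and real parsers use faster variants than this O(|V|^3) form.

```python
from math import inf

def chu_liu_edmonds(score, root=0):
    """score[h][d]: weight of edge h -> d (complete matrix assumed).
    Returns head[] with head[root] == -1."""
    n = len(score)
    head = [-1] * n
    for d in range(n):                        # each non-root picks its best head
        if d != root:
            head[d] = max((h for h in range(n) if h != d),
                          key=lambda h: score[h][d])
    cycle = _find_cycle(head, root)
    if cycle is None:                         # greedy choice is already a tree
        return head
    in_c = set(cycle)
    cyc_w = sum(score[head[v]][v] for v in cycle)
    keep = [v for v in range(n) if v not in in_c]   # kept original node ids
    idx = {v: i for i, v in enumerate(keep)}
    c = len(keep)                                   # id of the contracted node
    m = c + 1
    s2 = [[-inf] * m for _ in range(m)]
    enter = {}        # new head id -> (orig u, orig v) for edges into the cycle
    leave = {}        # new dep id  -> orig u in the cycle, for edges leaving it
    for u in range(n):
        for v in range(n):
            if u == v or (u in in_c and v in in_c):
                continue
            if v in in_c:                 # entering edge: swap out v's cycle edge
                w = score[u][v] + cyc_w - score[head[v]][v]
                if w > s2[idx[u]][c]:
                    s2[idx[u]][c] = w
                    enter[idx[u]] = (u, v)
            elif u in in_c:               # leaving edge: keep the best exit point
                if score[u][v] > s2[c][idx[v]]:
                    s2[c][idx[v]] = score[u][v]
                    leave[idx[v]] = u
            elif score[u][v] > s2[idx[u]][idx[v]]:
                s2[idx[u]][idx[v]] = score[u][v]
    sub = chu_liu_edmonds(s2, idx[root])  # solve the contracted problem
    out = [-1] * n
    for nv in range(m):                   # map the solution back onto G
        if nv == idx[root]:
            continue
        if nv == c:                       # expand the contracted node
            u, v_break = enter[sub[nv]]
            for v in cycle:
                out[v] = head[v]
            out[v_break] = u              # break the cycle at v_break
        else:
            v = keep[nv]
            out[v] = leave[nv] if sub[nv] == c else keep[sub[nv]]
    return out

def _find_cycle(head, root):
    n = len(head)
    color = [0] * n
    for s in range(n):
        path, v = [], s
        while v != -1 and color[v] == 0:
            color[v] = 1
            path.append(v)
            v = head[v]
        if v != -1 and color[v] == 1:     # hit a node on the current path
            return path[path.index(v):]
        for u in path:
            color[u] = 2
    return None
```

The final mapping-back step is exactly the "maps back the maximum spanning tree of the contracted graph onto the original graph" operation the excerpt describes.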
maximum spanning is mentioned in 11 sentences in this paper.
Topics mentioned in this paper:
Flanigan, Jeffrey and Thomson, Sam and Carbonell, Jaime and Dyer, Chris and Smith, Noah A.
Abstract
The method is based on a novel algorithm for finding a maximum spanning, connected subgraph, embedded within a Lagrangian relaxation of an optimization problem that imposes linguistically inspired constraints.
Introduction
The core of JAMR is a two-part algorithm that first identifies concepts using a semi-Markov model and then identifies the relations that obtain between these by searching for the maximum spanning connected subgraph (MSCG) from an edge-labeled, directed graph representing all possible relations between the identified concepts.
Introduction
To solve the latter problem, we introduce an apparently novel O(|V|^2 log |V|) algorithm that is similar to the maximum spanning tree (MST) algorithms that are widely used for dependency parsing (McDonald et al., 2005).
Relation Identification
MSCG finds a maximum spanning, connected subgraph of (V, E)
Relation Identification
We first show by induction that, at every iteration of MSCG, there exists some maximum spanning, connected subgraph that contains G(i) = (V, E(i)):
Relation Identification
output: maximum spanning, connected subgraph of (V, E) that preserves E(0)
maximum spanning is mentioned in 9 sentences in this paper.
Topics mentioned in this paper:
Thadani, Kapil
Abstract
We recover approximate solutions to this problem using LP relaxation and maximum spanning tree algorithms, yielding techniques that can be combined with the efficient bigram-based inference approach using Lagrange multipliers.
Experiments
As discussed in §2.4, a maximum spanning tree is recovered from the output of the LP and greedily pruned in order to generate a valid integral solution while observing the imposed compression rate.
Introduction
Maximum spanning tree algorithms, commonly used in non-projective dependency parsing (McDonald et al., 2005), are not easily adaptable to this task since the maximum-weight subtree is not necessarily a part of the maximum spanning tree.
Introduction
In addition, we can recover approximate solutions to this problem by using the Chu-Liu Edmonds algorithm for recovering maximum spanning trees (Chu and Liu, 1965; Edmonds, 1967) over the relatively sparse subgraph defined by a solution to the relaxed LP.
Multi-Structure Sentence Compression
Figure 1: An example of the difficulty of recovering the maximum-weight subtree (B→C, B→D) from the maximum spanning tree (A→C, C→B, B→D).
Multi-Structure Sentence Compression
Although the maximum spanning tree for a given token configuration can be recovered efficiently, Figure 1 illustrates that the maximum-scoring subtree is not necessarily found within it.
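The point of Figure 1 can be checked by brute force. Below, hypothetical arc scores (the paper supplies none) are chosen so that the best two-edge subtree, B→C and B→D, beats every two-edge subtree contained in the maximum spanning tree A→C, C→B, B→D:

```python
from itertools import combinations, product

# Hypothetical arc scores over tokens A..D with A as root (illustrative only),
# picked so the effect in Figure 1 appears.
score = {('A', 'B'): 2, ('A', 'C'): 5, ('B', 'C'): 6,
         ('B', 'D'): 5, ('C', 'B'): 5, ('C', 'D'): 0}
deps = ['B', 'C', 'D']

def reaches_root(assign):
    """True iff every token reaches root A without revisiting a node."""
    for v in list(assign):
        seen = set()
        while v != 'A':
            if v in seen:
                return False
            seen.add(v)
            v = assign[v]
    return True

# maximum spanning tree by brute force over head assignments
trees = []
for hs in product(*[[h for (h, d) in score if d == v] for v in deps]):
    a = dict(zip(deps, hs))
    if reaches_root(a):
        trees.append(a)
mst = max(trees, key=lambda a: sum(score[(a[v], v)] for v in deps))
mst_edges = {(mst[v], v) for v in deps}

# best 2-edge subtree: two arcs sharing exactly one node, distinct dependents
def is_subtree(e1, e2):
    return e1[1] != e2[1] and len({*e1, *e2}) == 3

best = max((p for p in combinations(score, 2) if is_subtree(*p)),
           key=lambda p: score[p[0]] + score[p[1]])

print(sorted(mst_edges))   # [('A','C'), ('B','D'), ('C','B')]
print(sorted(best))        # [('B','C'), ('B','D')], not inside the MST
```

So under these weights the maximum-scoring subtree shares only one edge with the MST, which is exactly why pruning the MST greedily yields an approximation rather than an exact solution.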
Multi-Structure Sentence Compression
It is straightforward to recover and score a bigram configuration y from a token configuration. However, scoring solutions produced by the dynamic program from §2.3 also requires the score over a corresponding parse tree; this can be recovered by constructing a dependency subgraph over only the tokens that are active in that configuration and retrieving the maximum spanning tree for that subgraph using the Chu-Liu Edmonds algorithm.
maximum spanning is mentioned in 7 sentences in this paper.
Topics mentioned in this paper:
Galley, Michel and Manning, Christopher D.
Abstract
McDonald et al. (2005b) formalized dependency parsing as a maximum spanning tree (MST) problem, which can be solved in quadratic time relative to the length of the sentence.
Dependency parsing for machine translation
In this section, we review dependency parsing formulated as a maximum spanning tree problem (McDonald et al., 2005b), which can be solved in quadratic time, and then present its adaptation and novel application to phrase-based decoding.
Dependency parsing for machine translation
This optimization problem is a known instance of the maximum spanning tree (MST) problem.
Dependency parsing for machine translation
Once all loops have been eliminated, the algorithm maps back the maximum spanning tree of the contracted graph onto the original graph G, and it can be shown that this yields a spanning tree that is optimal with respect to G and s (Georgiadis, 2003).
maximum spanning is mentioned in 5 sentences in this paper.
Topics mentioned in this paper:
Li, Sujian and Wang, Liang and Cao, Ziqiang and Li, Wenjie
Abstract
The state-of-the-art dependency parsing techniques, the Eisner algorithm and maximum spanning tree (MST) algorithm, are adopted to parse an optimal discourse dependency tree based on the arc-factored model and the large-margin learning techniques.
Discourse Dependency Parsing
3.3 Maximum Spanning Tree Algorithm
Discourse Dependency Parsing
Following the work of McDonald (2005b), we formalize discourse dependency parsing as searching for a maximum spanning tree (MST) in a directed graph.
Discourse Dependency Structure and Tree Bank
and maximum spanning tree (MST) algorithm are used respectively to parse the optimal projective and non-projective dependency trees with the large-margin learning technique (Crammer and Singer, 2003).
maximum spanning is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Jiang, Wenbin and Liu, Qun
Abstract
More importantly, when this classifier is integrated into a maximum spanning tree (MST) dependency parser, obvious improvement is obtained over the MST baseline.
Introduction
More importantly, when this classifier is integrated into a 2nd-ordered maximum spanning tree (MST) dependency parser (McDonald and Pereira, 2006) in a weighted average manner, significant improvement is obtained over the MST baselines.
Related Works
Similar to the graph-based method, our model is factored on dependency edges, and its decoding procedure also aims to find a maximum spanning tree in a fully connected directed graph.
maximum spanning is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Chen, Wenliang and Zhang, Min and Li, Haizhou
Parsing with dependency language model
The graph-based parsing model aims to search for the maximum spanning tree (MST) in a graph (McDonald et al., 2005).
Parsing with dependency language model
The parsing model finds a maximum spanning tree (MST), which is the highest scoring tree in T(Gx).
Parsing with dependency language model
In our approach, we consider the scores of the DLM when searching for the maximum spanning tree.
maximum spanning is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Björkelund, Anders and Kuhn, Jonas
Introducing Nonlocal Features
In case there are multiple maximum spanning trees, the best-first decoder will return one of them.
Introducing Nonlocal Features
With proper definitions, the proof can be constructed to show that both search algorithms return trees belonging to the set of maximum spanning trees over a graph.
Representation and Learning
Edmonds, 1967), which is a maximum spanning tree algorithm that finds the optimal tree over a connected directed graph.
maximum spanning is mentioned in 3 sentences in this paper.
Topics mentioned in this paper: