Index of papers in Proc. ACL 2014 that mention
  • maximum spanning
Flanigan, Jeffrey and Thomson, Sam and Carbonell, Jaime and Dyer, Chris and Smith, Noah A.
Abstract
The method is based on a novel algorithm for finding a maximum spanning, connected subgraph, embedded within a Lagrangian relaxation of an optimization problem that imposes linguistically inspired constraints.
Introduction
The core of JAMR is a two-part algorithm that first identifies concepts using a semi-Markov model and then identifies the relations that obtain between these by searching for the maximum spanning connected subgraph (MSCG) from an edge-labeled, directed graph representing all possible relations between the identified concepts.
Introduction
To solve the latter problem, we introduce an apparently novel O(|V|² log |V|) algorithm that is similar to the maximum spanning tree (MST) algorithms that are widely used for dependency parsing (McDonald et al., 2005).
Relation Identification
MSCG finds a maximum spanning, connected subgraph of (V, E)
Relation Identification
We first show by induction that, at every iteration of MSCG, there exists some maximum spanning, connected subgraph that contains G(i) = (V, E(i)):
Relation Identification
output: maximum spanning, connected subgraph of (V, E) that preserves E(0)
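The excerpts above give only the outline of MSCG. A minimal sketch consistent with that outline (keep the required edges E(0) and every positive-weight edge, then connect the remaining components Kruskal-style with the heaviest edges left) can be written with a union-find structure; the function and parameter names here are illustrative, not the paper's:

```python
def mscg(vertices, edges, required=()):
    """Greedy maximum spanning, connected subgraph (illustrative sketch).

    vertices: iterable of node ids
    edges:    dict (u, v) -> weight, treated as undirected for connectivity
    required: edges that must be preserved (the E(0) of the excerpt)
    """
    parent = {v: v for v in vertices}

    def find(v):                       # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    def union(u, v):
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
        return True

    chosen = set(required)
    # 1. keep every required edge and every positive-weight edge
    for (u, v), w in edges.items():
        if w > 0 or (u, v) in chosen:
            chosen.add((u, v))
            union(u, v)
    # 2. connect the remaining components with the heaviest edges left,
    #    even if their weight is negative (Kruskal-style)
    for (u, v), w in sorted(edges.items(), key=lambda e: -e[1]):
        if (u, v) not in chosen and union(u, v):
            chosen.add((u, v))
    return chosen
```

For example, on three vertices with one positive edge, the positive edge is kept and the least-negative edge is added to restore connectivity.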
maximum spanning is mentioned in 9 sentences in this paper.
Thadani, Kapil
Abstract
We recover approximate solutions to this problem using LP relaxation and maximum spanning tree algorithms, yielding techniques that can be combined with the efficient bigram-based inference approach using Lagrange multipliers.
Experiments
As discussed in §2.4, a maximum spanning tree is recovered from the output of the LP and greedily pruned in order to generate a valid integral solution while observing the imposed compression rate.
Introduction
Maximum spanning tree algorithms, commonly used in non-projective dependency parsing (McDonald et al., 2005), are not easily adaptable to this task since the maximum-weight subtree is not necessarily a part of the maximum spanning tree.
Introduction
In addition, we can recover approximate solutions to this problem by using the Chu-Liu Edmonds algorithm for recovering maximum spanning trees (Chu and Liu, 1965; Edmonds, 1967) over the relatively sparse subgraph defined by a solution to the relaxed LP.
Multi-Structure Sentence Compression
Figure 1: An example of the difficulty of recovering the maximum-weight subtree (B→C, B→D) from the maximum spanning tree (A→C, C→B, B→D).
Multi-Structure Sentence Compression
Although the maximum spanning tree for a given token configuration can be recovered efficiently, Figure 1 illustrates that the maximum-scoring subtree is not necessarily found within it.
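Figure 1's observation can be verified by brute force on a toy graph. The weights below are hypothetical (the excerpt does not give any), chosen so that the maximum spanning tree is the figure's (A→C, C→B, B→D) while the best-scoring subtree (B→C, B→D) uses an edge outside it:

```python
from itertools import combinations

# Hypothetical edge weights (not from the paper), chosen to reproduce
# the situation sketched in Figure 1.
W = {('A', 'B'): -5, ('A', 'C'): -1, ('A', 'D'): -6,
     ('C', 'B'): 2, ('B', 'C'): 3, ('B', 'D'): 6}
NODES = {'A', 'B', 'C', 'D'}

def is_tree(edges):
    """True if the directed edges form a rooted tree: each node has at
    most one head, there is exactly one root, and there are no cycles."""
    head = {}
    for h, d in edges:
        if d in head:
            return False
        head[d] = h
    touched = {n for e in edges for n in e}
    if len([n for n in touched if n not in head]) != 1:
        return False
    for n in touched:
        seen = set()
        while n in head:
            if n in seen:
                return False
            seen.add(n)
            n = head[n]
    return True

weight = lambda t: sum(W[e] for e in t)
trees = [t for k in range(1, len(W) + 1)
         for t in combinations(W, k) if is_tree(t)]

mst = max((t for t in trees if {n for e in t for n in e} == NODES), key=weight)
subtree = max(trees, key=weight)
print(sorted(mst))               # [('A', 'C'), ('B', 'D'), ('C', 'B')]
print(sorted(subtree))           # [('B', 'C'), ('B', 'D')]
print(set(subtree) <= set(mst))  # False
```

The negative weights on the edges out of A make it profitable to drop A entirely, so the heaviest subtree attaches C and D directly under B even though the spanning tree, which must cover A, routes B under C.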
Multi-Structure Sentence Compression
It is straightforward to recover and score a bigram configuration y from a token configuration β. However, scoring solutions produced by the dynamic program from §2.3 also requires the score over a corresponding parse tree; this can be recovered by constructing a dependency subgraph over only the tokens that are active in α(y) and retrieving the maximum spanning tree for that subgraph using the Chu-Liu Edmonds algorithm.
maximum spanning is mentioned in 7 sentences in this paper.
Li, Sujian and Wang, Liang and Cao, Ziqiang and Li, Wenjie
Abstract
The state-of-the-art dependency parsing techniques, the Eisner algorithm and maximum spanning tree (MST) algorithm, are adopted to parse an optimal discourse dependency tree based on the arc-factored model and the large-margin learning techniques.
Discourse Dependency Parsing
3.3 Maximum Spanning Tree Algorithm
Discourse Dependency Parsing
Following the work of McDonald (2005b), we formalize discourse dependency parsing as searching for a maximum spanning tree (MST) in a directed graph.
Discourse Dependency Structure and Tree Bank
The Eisner algorithm and maximum spanning tree (MST) algorithm are used respectively to parse the optimal projective and non-projective dependency trees with the large-margin learning technique (Crammer and Singer, 2003).
maximum spanning is mentioned in 4 sentences in this paper.
Björkelund, Anders and Kuhn, Jonas
Introducing Nonlocal Features
5 In case there are multiple maximum spanning trees, the best-first decoder will return one of them.
Introducing Nonlocal Features
With proper definitions, the proof can be constructed to show that both search algorithms return trees belonging to the set of maximum spanning trees over a graph.
Representation and Learning
(Chu and Liu, 1965; Edmonds, 1967), which is a maximum spanning tree algorithm that finds the optimal tree over a connected directed graph.
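Since several of the indexed papers rely on it, here is an illustrative sketch of the Chu-Liu/Edmonds procedure in its maximum-arborescence form (not any author's actual decoder): greedily pick the best incoming edge for every non-root node; if the picks contain a cycle, contract it into a supernode, rescore entering edges by their gain over the cycle edge they would displace, recurse, and expand.

```python
def max_arborescence(edges, root):
    """Chu-Liu/Edmonds sketch: maximum spanning arborescence of a
    directed graph, returned as a dict mapping each non-root node to
    its head.  edges: dict (head, dependent) -> weight.  Assumes every
    non-root node is reachable from the root."""
    # 1. Greedily pick the best incoming edge for every non-root node.
    best = {}
    for (u, v), w in edges.items():
        if v != root and (v not in best or w > edges[(best[v], v)]):
            best[v] = u
    # 2. If the chosen edges are acyclic, they already form the tree.
    cycle = None
    for start in best:
        path, v = [], start
        while v in best and v not in path:
            path.append(v)
            v = best[v]
        if v in path:
            cycle = path[path.index(v):]
            break
    if cycle is None:
        return best
    # 3. Contract the cycle into one supernode; edges entering the
    #    cycle are rescored by their gain over the cycle edge they
    #    would displace, then we recurse on the smaller graph.
    C = frozenset(cycle)
    new_edges, origin = {}, {}
    for (u, v), w in edges.items():
        if u in C and v in C:
            continue
        if v in C:
            key, nw = (u, C), w - edges[(best[v], v)]
        elif u in C:
            key, nw = (C, v), w
        else:
            key, nw = (u, v), w
        if key not in new_edges or nw > new_edges[key]:
            new_edges[key] = nw
            origin[key] = (u, v)       # remember the real edge
    sub = max_arborescence(new_edges, root)
    # 4. Expand: map contracted edges back to real ones, then restore
    #    every cycle edge except the one displaced by the entering edge.
    head, entering = {}, None
    for v, u in sub.items():
        ru, rv = origin[(u, v)]
        head[rv] = ru
        if v == C:
            entering = rv
    for v in cycle:
        if v != entering:
            head[v] = best[v]
    return head
```

For example, on a four-node graph rooted at A with weights A→C: -1, C→B: 2, B→C: 3, B→D: 6, A→B: -5, A→D: -6, the greedy picks (B←C, C←B, D←B) contain the cycle B↔C; after one contraction and expansion the result is the tree A→C, C→B, B→D.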
maximum spanning is mentioned in 3 sentences in this paper.