A Discriminative Graph-Based Parser for the Abstract Meaning Representation
Flanigan, Jeffrey and Thomson, Sam and Carbonell, Jaime and Dyer, Chris and Smith, Noah A.

Article Structure

Abstract

Abstract Meaning Representation (AMR) is a semantic formalism for which a growing set of annotated examples is available.

Introduction

Semantic parsing is the problem of mapping natural language strings into meaning representations.

Notation and Overview

Our approach to AMR parsing represents an AMR parse as a graph G = (V, E); vertices and edges are given labels from sets LV and LE, respectively.

Concept Identification

The concept identification stage maps spans of words in the input sentence w to concept graph fragments from F, or to the empty graph fragment ∅. These graph fragments often consist of just one labeled concept node, but in some cases they are larger graphs with multiple nodes and edges.

Relation Identification

The relation identification stage adds edges among the concept subgraph fragments identified in the first stage (§3), creating a graph.

Automatic Alignments

In order to train the parser, we need alignments between sentences in the training data and their annotated AMR graphs.

Training

We now describe how to train the two stages of the parser.

Experiments

We evaluate our parser on the newswire section of LDC2013E117 (deft-amr-release-r3-proxy.txt).

Related Work

Our approach to relation identification is inspired by graph-based techniques for non-projective syntactic dependency parsing.

Conclusion

We have presented the first published system for automatic AMR parsing, and shown that it provides a strong baseline based on the Smatch evaluation metric.

Topics

maximum spanning

Appears in 9 sentences as: maximum spanning (9)
In A Discriminative Graph-Based Parser for the Abstract Meaning Representation
  1. The method is based on a novel algorithm for finding a maximum spanning, connected subgraph, embedded within a Lagrangian relaxation of an optimization problem that imposes linguistically inspired constraints.
    Page 1, “Abstract”
  2. The core of JAMR is a two-part algorithm that first identifies concepts using a semi-Markov model and then identifies the relations that obtain between these by searching for the maximum spanning connected subgraph (MSCG) from an edge-labeled, directed graph representing all possible relations between the identified concepts.
    Page 1, “Introduction”
  3. To solve the latter problem, we introduce an apparently novel O(|V|² log |V|) algorithm that is similar to the maximum spanning tree (MST) algorithms that are widely used for dependency parsing (McDonald et al., 2005).
    Page 1, “Introduction”
  4. MSCG finds a maximum spanning, connected subgraph of (V, E)
    Page 4, “Relation Identification”
  5. We first show by induction that, at every iteration of MSCG, there exists some maximum spanning, connected subgraph that contains G^(i) = (V, E^(i)):
    Page 4, “Relation Identification”
  6. output: maximum spanning, connected subgraph of (V, E) that preserves E^(0)
    Page 5, “Relation Identification”
  7. Induction step: By the inductive hypothesis, there exists some maximum spanning connected
    Page 5, “Relation Identification”
  8. Thus, (V, (E_max ∪ {e}) \ {e′}) is a maximum spanning connected subgraph that contains E^(i+1), and the hypothesis still holds.
    Page 5, “Relation Identification”
  9. The maximum spanning connected subgraph M that contains it cannot have a higher score, because G contains every positive edge.
    Page 5, “Relation Identification”
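
Taken together, these excerpts specify MSCG well enough to sketch in code: seed the edge set with the required fragment-internal edges E^(0) plus every positive-scoring candidate edge, then repeatedly add the highest-scoring remaining edge that joins two distinct components (a Kruskal-style pass tracked with union-find). The Python below is a minimal sketch of that procedure in our own notation, not JAMR's actual implementation.

```python
def mscg(num_vertices, edges, required):
    """Maximum spanning, connected subgraph (sketch).

    edges: dict mapping a vertex pair (u, v) to its best edge score.
    required: vertex pairs that must be kept (the E^(0) of the pseudocode).
    Returns the chosen set of vertex pairs.
    """
    parent = list(range(num_vertices))  # union-find forest

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra == rb:
            return False
        parent[ra] = rb
        return True

    chosen = set()
    # Step 1: keep E^(0) and every positive-scoring edge.
    for u, v in required:
        chosen.add((u, v))
        union(u, v)
    for (u, v), score in edges.items():
        if score > 0:
            chosen.add((u, v))
            union(u, v)

    # Step 2: while the graph is disconnected, add the highest-scoring
    # remaining edge that joins two distinct components (Kruskal-style).
    for (u, v), score in sorted(edges.items(), key=lambda kv: -kv[1]):
        if (u, v) not in chosen and union(u, v):
            chosen.add((u, v))
    return chosen
```

Sorting the candidate edges dominates the cost; on the dense graph of all possible relations between identified concepts, this is where the O(|V|² log |V|) bound quoted above comes from.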

Named Entity

Appears in 7 sentences as: named entities (2) Named Entity (3) named entity (3)
In A Discriminative Graph-Based Parser for the Abstract Meaning Representation
  1. Each stage is a discriminatively-trained linear structured predictor with rich features that make use of part-of-speech tagging, named entity tagging, and dependency parsing.
    Page 2, “Notation and Overview”
  2. NER: 1 if the named entity tagger marked the span as an entity, 0 otherwise.
    Page 3, “Concept Identification”
  3. clex also has a set of rules for generating concept fragments for named entities and time expressions.
    Page 3, “Concept Identification”
  4. It generates a concept fragment for any entity recognized by the named entity tagger, as well as for any word sequence matching a regular expression for a time expression.
    Page 3, “Concept Identification”
  5. Input: X, a sentence annotated with named entities (person, organization, location, miscellaneous) from the Illinois Named Entity Tagger (Ratinov and Roth, 2009), and part-of-speech tags and basic dependencies from the Stanford Parser (Klein and Manning, 2003; de Marneffe et al., 2006).
    Page 7, “Training”
  6. (Named Entity) Applies to name concepts and their opn children.
    Page 7, “Training”
  7. (Fuzzy Named Entity) Applies to name concepts and their opn children.
    Page 7, “Training”
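
Sentence 4 above describes rule-generated concept fragments for named entities, and sentences 6-7 refer to "name concepts and their opn children." Purely as an illustration (the concept-type mapping and fragment shape below follow general AMR conventions, not JAMR's actual rule table), such a rule might emit:

```python
def name_fragment(entity_tokens, entity_type):
    """Illustrative AMR-style fragment for a tagged named entity, e.g.
    ["Barack", "Obama"] with type "person" ->
    (person :name (name :op1 "Barack" :op2 "Obama"))
    The quoted tokens are the opn children the alignment rules refer to.
    """
    ops = " ".join(f':op{i} "{tok}"' for i, tok in enumerate(entity_tokens, 1))
    return f"({entity_type} :name (name {ops}))"
```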

dependency parsing

Appears in 5 sentences as: dependency parser (1) dependency parsing (4)
In A Discriminative Graph-Based Parser for the Abstract Meaning Representation
  1. To solve the latter problem, we introduce an apparently novel O(|V|² log |V|) algorithm that is similar to the maximum spanning tree (MST) algorithms that are widely used for dependency parsing (McDonald et al., 2005).
    Page 1, “Introduction”
  2. The relation identification stage (§4) is similar to a graph-based dependency parser .
    Page 2, “Notation and Overview”
  3. Each stage is a discriminatively-trained linear structured predictor with rich features that make use of part-of-speech tagging, named entity tagging, and dependency parsing .
    Page 2, “Notation and Overview”
  4. Our approach to relation identification is inspired by graph-based techniques for non-projective syntactic dependency parsing .
    Page 8, “Related Work”
  5. Minimum spanning tree algorithms—specifically, the optimum branching algorithm of Chu and Liu (1965) and Edmonds (1967)—were first used for dependency parsing by McDonald et al. (2005).
    Page 8, “Related Work”

semantic parsing

Appears in 5 sentences as: semantic parser (1) semantic parsers (1) Semantic parsing (1) semantic parsing (2)
In A Discriminative Graph-Based Parser for the Abstract Meaning Representation
  1. Semantic parsing is the problem of mapping natural language strings into meaning representations.
    Page 1, “Introduction”
  2. To date, a graph transducer-based semantic parser has not been published, although the Bolinas toolkit (http://www.isi.edu/publications/licensed-sw/bolinas/) contains much of the necessary infrastructure.
    Page 1, “Introduction”
  3. While a detailed comparison of these two approaches is beyond the scope of this paper, we emphasize that—as has been observed with dependency parsing—a diversity of approaches can shed light on complex problems such as semantic parsing.
    Page 2, “Introduction”
  4. While all semantic parsers aim to transform natural language text to a formal representation of its meaning, there is wide variation in the meaning representations and parsing techniques used.
    Page 9, “Related Work”
  5. In contrast, semantic dependency parsing—in which the vertices in the graph correspond to the words in the sentence—is meant to make semantic parsing feasible for broader textual domains.
    Page 9, “Related Work”

Meaning Representation

Appears in 4 sentences as: Meaning Representation (2) meaning representations (2)
In A Discriminative Graph-Based Parser for the Abstract Meaning Representation
  1. Abstract Meaning Representation (AMR) is a semantic formalism for which a growing set of annotated examples is available.
    Page 1, “Abstract”
  2. Semantic parsing is the problem of mapping natural language strings into meaning representations .
    Page 1, “Introduction”
  3. Abstract Meaning Representation (AMR) (Banarescu et al., 2013; Dorr et al., 1998) is a semantic formalism in which the meaning of a sentence is encoded as a rooted, directed, acyclic graph.
    Page 1, “Introduction”
  4. While all semantic parsers aim to transform natural language text to a formal representation of its meaning, there is wide variation in the meaning representations and parsing techniques used.
    Page 9, “Related Work”

natural language

Appears in 4 sentences as: Natural language (1) natural language (3)
In A Discriminative Graph-Based Parser for the Abstract Meaning Representation
  1. Semantic parsing is the problem of mapping natural language strings into meaning representations.
    Page 1, “Introduction”
  2. Although it does not encode quantifiers, tense, or modality, the set of semantic phenomena included in AMR was selected with natural language applications—in particular, machine translation—in mind.
    Page 1, “Introduction”
  3. While all semantic parsers aim to transform natural language text to a formal representation of its meaning, there is wide variation in the meaning representations and parsing techniques used.
    Page 9, “Related Work”
  4. Natural language interfaces for querying databases have served as another driving application (Zelle and Mooney, 1996; Kate et al., 2005; Liang et al., 2011, inter alia).
    Page 9, “Related Work”

spanning tree

Appears in 4 sentences as: spanning tree (4)
In A Discriminative Graph-Based Parser for the Abstract Meaning Representation
  1. To solve the latter problem, we introduce an apparently novel O(|V|2 log algorithm that is similar to the maximum spanning tree (MST) algorithms that are widely used for dependency parsing (McDonald et al., 2005).
    Page 1, “Introduction”
  2. The approach can be understood as an alternative to parsing approaches using graph transducers such as (synchronous) hyperedge replacement grammars (Chiang et al., 2013; Jones et al., 2012; Drewes et al., 1997), in much the same way that spanning tree algorithms are an alternative to using shift-reduce and dynamic programming algorithms for dependency parsing.
    Page 1, “Introduction”
  3. This algorithm is a (to our knowledge novel) modification of the minimum spanning tree algorithm of Kruskal (1956).
    Page 4, “Relation Identification”
  4. Minimum spanning tree algorithms—specifically, the optimum branching algorithm of Chu and Liu (1965) and Edmonds (1967)—were first used for dependency parsing by McDonald et al. (2005).
    Page 8, “Related Work”

dynamic programming

Appears in 3 sentences as: dynamic programming (3)
In A Discriminative Graph-Based Parser for the Abstract Meaning Representation
  1. The approach can be understood as an alternative to parsing approaches using graph transducers such as (synchronous) hyperedge replacement grammars (Chiang et al., 2013; Jones et al., 2012; Drewes et al., 1997), in much the same way that spanning tree algorithms are an alternative to using shift-reduce and dynamic programming algorithms for dependency parsing.
    Page 1, “Introduction”
  2. Our approach finds the highest-scoring b and c using a dynamic programming algorithm: the zeroth-order case of inference under a semi-Markov model (Janssen and Limnios, 1999).
    Page 3, “Concept Identification”
  3. chart-based dynamic programming for inference.
    Page 9, “Related Work”
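
The "zeroth-order case of inference under a semi-Markov model" in sentence 2 is a span-based Viterbi recurrence: the best analysis of the first j words is the best analysis of some shorter prefix plus the best-scoring concept fragment for the remaining span. A minimal sketch, where `candidates` stands in for the concept lexicon and the linear span-scoring model, and the `max_len` span bound is our own assumption:

```python
def best_segmentation(words, candidates, max_len=7):
    """Zeroth-order semi-Markov Viterbi (sketch).

    candidates(i, j) -> list of (fragment, score) pairs for words[i:j];
    it should include an empty-fragment option for single words so a
    full segmentation always exists. Returns (score, chosen spans).
    """
    n = len(words)
    best = [float("-inf")] * (n + 1)
    back = [None] * (n + 1)
    best[0] = 0.0
    for j in range(1, n + 1):
        for i in range(max(0, j - max_len), j):
            for fragment, score in candidates(i, j):
                if best[i] + score > best[j]:
                    best[j] = best[i] + score
                    back[j] = (i, fragment)
    # Walk the back-pointers to recover the chosen spans.
    spans, j = [], n
    while j > 0:
        i, fragment = back[j]
        spans.append((i, j, fragment))
        j = i
    return best[n], list(reversed(spans))
```

Because the model is zeroth-order, a span's score does not depend on neighboring fragments, which is what makes this simple recurrence exact.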

objective function

Appears in 3 sentences as: objective function (2) objective function: (1)
In A Discriminative Graph-Based Parser for the Abstract Meaning Representation
  1. The score of graph G (encoded as z) can be written as the objective function ψ⊤z, where ψ_e = θ⊤g(e).
    Page 6, “Relation Identification”
  2. To handle the constraint Az ≤ b, we introduce multipliers μ ≥ 0 to get the Lagrangian relaxation of the objective function:
    Page 6, “Relation Identification”
  3. L(z) is an upper bound on the unrelaxed objective function ψ⊤z, and is equal to it if and only if the constraints Az ≤ b are satisfied.
    Page 6, “Relation Identification”
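
These three excerpts amount to the standard projected-subgradient recipe for a Lagrangian relaxation: the multipliers fold the linguistic constraints into the edge scores, the relaxed subproblem is solved exactly (here by MSCG), and the multipliers are updated on the violated constraints. Below is a sketch under our own naming and step-size assumptions; the paper's exact update schedule may differ.

```python
import numpy as np

def lagrangian_relaxation(psi, A, b, solve_relaxed, steps=100, eta=1.0):
    """Projected subgradient for: max psi.z subject to Az <= b (sketch).

    solve_relaxed(scores) -> 0/1 edge-indicator vector z maximizing
    scores.z over spanning, connected subgraphs (i.e., a call to MSCG).
    """
    mu = np.zeros(len(b))
    z = None
    for t in range(1, steps + 1):
        # L(z; mu) = psi.z + mu.(b - Az) = (psi - A^T mu).z + mu.b,
        # so the relaxed subproblem just sees shifted edge scores.
        z = solve_relaxed(psi - A.T @ mu)
        violation = A @ z - b
        if np.all(violation <= 1e-9):
            return z, mu  # Az <= b holds, so L(z) equals the true objective
        # Subgradient step on mu, projected back onto mu >= 0.
        mu = np.maximum(0.0, mu + (eta / t) * violation)
    return z, mu  # may still violate Az <= b if the duality gap is nonzero
```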

optimization problem

Appears in 3 sentences as: optimization problem (3)
In A Discriminative Graph-Based Parser for the Abstract Meaning Representation
  1. The method is based on a novel algorithm for finding a maximum spanning, connected subgraph, embedded within a Lagrangian relaxation of an optimization problem that imposes linguistically inspired constraints.
    Page 1, “Abstract”
  2. We frame the task as a constrained combinatorial optimization problem .
    Page 3, “Relation Identification”
  3. extensions allow for higher-order (non-edge-local) features, often making use of relaxations to solve the NP-hard optimization problem.
    Page 9, “Related Work”

WordNet

Appears in 3 sentences as: WordNet (3)
In A Discriminative Graph-Based Parser for the Abstract Meaning Representation
  1. We use WordNet to generate candidate lemmas, and we also use a fuzzy match of a concept, defined to be a word in the sentence that has the longest string prefix match with that concept’s label, if the match length is ≥ 4.
    Page 7, “Automatic Alignments”
  2. WordNet lemmas and fuzzy matches are only used if the rule explicitly uses them.
    Page 7, “Automatic Alignments”
  3. Strips off trailing ‘-[0-9]+’ from the concept (for example run-01 → run), and matches any exact matching word or WordNet lemma.
    Page 7, “Training”
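
Sentences 1 and 3 specify two small string operations precisely enough to write down. The sketch below uses our own function names, and applying the sense-stripping rule before fuzzy matching is a simplifying assumption:

```python
import re

def strip_sense(concept):
    """Strip a trailing sense suffix, e.g. 'run-01' -> 'run'."""
    return re.sub(r"-[0-9]+$", "", concept)

def fuzzy_match(concept, words, min_len=4):
    """Word in the sentence with the longest string prefix match
    against the concept's label, if the match length is >= min_len."""
    label = strip_sense(concept)

    def prefix_len(a, b):
        n = 0
        while n < min(len(a), len(b)) and a[n] == b[n]:
            n += 1
        return n

    best_word, best_len = None, 0
    for w in words:
        k = prefix_len(label, w)
        if k > best_len:
            best_word, best_len = w, k
    return best_word if best_len >= min_len else None
```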
