Sentence Simplification for Semantic Role Labeling
Vickrey, David and Koller, Daphne

Article Structure

Abstract

Parse-tree paths are commonly used to incorporate information from syntactic parses into NLP systems.
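The parse-tree path the abstract refers to, and that the Introduction illustrates with the path from "win" to "he", is the standard SRL path feature: the sequence of constituent labels from the predicate up to the lowest common ancestor and down to the argument. A minimal sketch, assuming a parent map over tree nodes and a .label attribute on each node (all names here are illustrative, not the paper's code):

```python
# Sketch of the standard parse-tree path feature used in SRL systems.
# Assumes `parent` maps every tree node to its parent (None at the root)
# and each node has a .label attribute; these names are hypothetical.

def path_feature(pred, arg, parent):
    """Return a path string such as 'VBD↑VP↑S↓NP' from pred to arg."""
    # Ancestor chain of the predicate, from the predicate up to the root.
    chain, node = [], pred
    while node is not None:
        chain.append(node)
        node = parent[node]
    # Climb from the argument until we meet that chain at the lowest
    # common ancestor, recording the nodes passed through on the way.
    # (Edge cases, e.g. the argument dominating the predicate, are
    # ignored in this sketch.)
    down, node = [], arg
    while node not in chain:
        down.append(node)
        node = parent[node]
    up = chain[:chain.index(node) + 1]  # predicate .. common ancestor
    return ("↑".join(n.label for n in up) + "↓"
            + "↓".join(n.label for n in reversed(down)))
```

Long paths such as the one from "win" through "for", "receive", and "expect" produce long, rare feature strings; that sparsity is the problem the paper's simplification approach targets.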

Introduction

In semantic role labeling (SRL), given a sentence containing a target verb, we want to label the semantic arguments, or roles, of that verb. For example, for the target verb "give" in "Mary gave John a book," an SRL system should label "Mary" as the giver (ARG0), "a book" as the thing given (ARG1), and "John" as the recipient (ARG2).

Sentence Simplification

We will begin with an example before describing our model in detail.

Transformation Rules

A transformation rule takes as input a parse tree and produces as output a different, changed parse tree.
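The summary above does not fix an interface, but the rule concept can be sketched in a few lines; the Tree and TransformationRule classes below are hypothetical illustrations, not the authors' implementation:

```python
# Hedged sketch of a tree-transformation rule over constituency trees.
# Both classes are illustrative; the paper does not specify this API.

class Tree:
    def __init__(self, label, children=(), word=None):
        self.label = label              # constituent label, e.g. "S", "NP"
        self.children = list(children)  # child Tree nodes
        self.word = word                # set only on leaf (terminal) nodes

class TransformationRule:
    """Pairs a structural test with a rewrite. apply() returns a new,
    changed tree when the rule matches, or None when it does not."""
    def __init__(self, matches, rewrite):
        self.matches, self.rewrite = matches, rewrite

    def apply(self, tree):
        return self.rewrite(tree) if self.matches(tree) else None
```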

Simple Sentence Production

We now describe how to take a set of rules and produce a set of candidate simple sentences.
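Using the hypothetical types sketched above, the naive version of this procedure is a worklist search that applies every rule to every reachable tree. The paper notes that this copies the whole tree at each step and can generate exponentially many parses, which is what motivates the shared data structure described later; the sketch below is that expensive baseline, not the packed representation the system actually uses.

```python
# Naive candidate generation by exhaustive rule application.

import copy

def serialize(tree):
    """Canonical string form of a tree, for duplicate detection."""
    if tree.word is not None:
        return f"({tree.label} {tree.word})"
    return "(" + tree.label + " " + " ".join(serialize(c) for c in tree.children) + ")"

def candidate_simple_sentences(tree, rules, limit=10000):
    seen, frontier, finished = set(), [tree], []
    while frontier and len(seen) < limit:
        current = frontier.pop()
        key = serialize(current)
        if key in seen:
            continue
        seen.add(key)
        successors = [r.apply(copy.deepcopy(current)) for r in rules]
        successors = [t for t in successors if t is not None]
        if successors:
            frontier.extend(successors)
        else:
            finished.append(current)  # no rule applies: candidate simple sentence
    return finished
```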

Labeling Simple Sentences

For a particular sentence/target verb pair (s, v), the output from the previous section is a set S^{s,v} = {s_i} of valid simple sentences.
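The concordance entries under "role labeling" below note that several simple labelings can lead to the same role labeling of the original sentence, which suggests grouping by the induced labeling; a sketch, with all helper names hypothetical:

```python
# Collect role labelings induced by labeled simple sentences, grouping
# the (possibly many) simple labelings that map to the same labeling of
# the original sentence. label_fn and project_fn are hypothetical hooks.

def role_labelings(simple_sentences, label_fn, project_fn):
    """label_fn(s_i): iterable of candidate simple labelings of s_i.
    project_fn(simple_labeling): hashable tuple of (role, original-span)
    pairs, i.e. the induced role labeling of the original sentence."""
    induced = {}
    for s_i in simple_sentences:
        for simple_labeling in label_fn(s_i):
            labeling = project_fn(simple_labeling)
            induced.setdefault(labeling, []).append((s_i, simple_labeling))
    return induced  # role labeling -> the simple labelings that produce it
```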

Probabilistic Model

We now define our probabilistic model.
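The Introduction states that the model is trained discriminatively to predict the role labeling, treating the simplification as a hidden variable. One consistent way to write such a model, in assumed log-linear notation (the paper's exact parameterization may differ), with R the role labeling and S^{s,v} the set of valid simple sentences for sentence s and verb v:

```latex
% Hidden-variable discriminative model (notation assumed for this sketch):
% the simplification t is marginalized out of a log-linear score.
P_\theta(R \mid s, v)
  = \sum_{t \in S^{s,v}} P_\theta(R, t \mid s, v)
  = \frac{\sum_{t \in S^{s,v}} \exp\bigl(\theta^\top f(R, t, s, v)\bigr)}
         {\sum_{R'} \sum_{t' \in S^{s,v}} \exp\bigl(\theta^\top f(R', t', s, v)\bigr)}
```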

Simplification Data Structure

Our representation of the set of possible simplifications of a sentence addresses two computational bottlenecks.
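The paper says only that AND nodes resemble constituent nodes in a parse tree, each with a category (see the "parse tree" entries below), which points to a packed AND/OR forest over simplifications; a hedged sketch with hypothetical class and field names:

```python
# Sketch of a packed AND/OR forest over simplifications. Sharing OR
# alternatives keeps the (potentially exponential) set of simplified
# trees implicit instead of copying whole parse trees.

class AndNode:
    def __init__(self, category, children):
        self.category = category  # like a constituent label, e.g. "NP"
        self.children = children  # ordered list of OrNode children

class OrNode:
    def __init__(self, alternatives):
        self.alternatives = alternatives  # competing AndNode realizations

def count_trees(node):
    """Number of distinct trees encoded, without enumerating them."""
    if isinstance(node, OrNode):
        return sum(count_trees(a) for a in node.alternatives)
    product = 1
    for child in node.children:
        product *= count_trees(child)
    return product
```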

Experiments

We evaluated our system using the setup of the CoNLL 2005 semantic role labeling task. Thus, we trained on Sections 2–21 of PropBank and used Section 24 as development data.

Related Work

One area of current research which has similarities with this work is on Lexical Functional Grammars (LFGs).

Future Work

There are a number of improvements that could be made to the current simplification system, including augmenting the rule set to handle more constructions and doing further sentence normalizations, e.g., identifying whether a direct object exists.

Topics

role labeling

Appears in 9 sentences as: role labeling (8), role labelings (1)
In Sentence Simplification for Semantic Role Labeling
  1. We apply our simplification system to semantic role labeling (SRL).
    Page 1, “Abstract”
  2. In semantic role labeling (SRL), given a sentence containing a target verb, we want to label the semantic arguments, or roles, of that verb.
    Page 1, “Introduction”
  3. Current semantic role labeling systems rely primarily on syntactic features in order to identify and …
    Page 1, “Introduction”
  4. Specifically, we train our model discriminatively to predict the correct role labeling assignment given an input sentence, treating the simplification as a hidden variable.
    Page 2, “Introduction”
  5. to S^{s,v}, obtaining a set of possible role labelings.
    Page 4, “Labeling Simple Sentences”
  6. Also, for a sentence s there may be several simple labelings that lead to the same role labeling.
    Page 4, “Labeling Simple Sentences”
  7. This allows us to learn that “give” has a preference for the labeling {ARG0 = Subject NP, ARG1 = Postverb NP2, ARG2 = Postverb NP1}. Our final features are analogous to those used in semantic role labeling, but greatly simplified due to our use of simple sentences: head word of the constituent; category (i.e., constituent label); and position in the simple sentence.
    Page 5, “Probabilistic Model”
  8. We evaluated our system using the setup of the CoNLL 2005 semantic role labeling task. Thus, we trained on Sections 2–21 of PropBank and used Section 24 as development data.
    Page 6, “Experiments”
  9. Another area of related work in the semantic role labeling literature is that on tree kernels (Moschitti, 2004; Zhang et al., 2007).
    Page 8, “Related Work”

semantic role

Appears in 6 sentences as: semantic role (6)
In Sentence Simplification for Semantic Role Labeling
  1. We apply our simplification system to semantic role labeling (SRL).
    Page 1, “Abstract”
  2. In semantic role labeling (SRL), given a sentence containing a target verb, we want to label the semantic arguments, or roles, of that verb.
    Page 1, “Introduction”
  3. Current semantic role labeling systems rely primarily on syntactic features in order to identify and …
    Page 1, “Introduction”
  4. This allows us to learn that “give” has a preference for the labeling {ARG0 = Subject NP, ARG1 = Postverb NP2, ARG2 = Postverb NP1}. Our final features are analogous to those used in semantic role labeling, but greatly simplified due to our use of simple sentences: head word of the constituent; category (i.e., constituent label); and position in the simple sentence.
    Page 5, “Probabilistic Model”
  5. We evaluated our system using the setup of the CoNLL 2005 semantic role labeling task. Thus, we trained on Sections 2–21 of PropBank and used Section 24 as development data.
    Page 6, “Experiments”
  6. Another area of related work in the semantic role labeling literature is that on tree kernels (Moschitti, 2004; Zhang et al., 2007).
    Page 8, “Related Work”

semantic role labeling

Appears in 6 sentences as: semantic role labeling (6)
In Sentence Simplification for Semantic Role Labeling
  1. We apply our simplification system to semantic role labeling (SRL).
    Page 1, “Abstract”
  2. In semantic role labeling (SRL), given a sentence containing a target verb, we want to label the semantic arguments, or roles, of that verb.
    Page 1, “Introduction”
  3. Current semantic role labeling systems rely primarily on syntactic features in order to identify and …
    Page 1, “Introduction”
  4. This allows us to learn that “give” has a preference for the labeling {ARG0 = Subject NP, ARG1 = Postverb NP2, ARG2 = Postverb NP1}. Our final features are analogous to those used in semantic role labeling, but greatly simplified due to our use of simple sentences: head word of the constituent; category (i.e., constituent label); and position in the simple sentence.
    Page 5, “Probabilistic Model”
  5. We evaluated our system using the setup of the CoNLL 2005 semantic role labeling task. Thus, we trained on Sections 2–21 of PropBank and used Section 24 as development data.
    Page 6, “Experiments”
  6. Another area of related work in the semantic role labeling literature is that on tree kernels (Moschitti, 2004; Zhang et al., 2007).
    Page 8, “Related Work”

CoNLL

Appears in 5 sentences as: CoNLL (5)
In Sentence Simplification for Semantic Role Labeling
  1. Applying our combined simplification/SRL model to the CoNLL 2005 task, we show a significant improvement over a strong baseline model.
    Page 2, “Introduction”
  2. Our model outperforms all but the best few CoNLL 2005 systems, each of which uses multiple different automatically-generated parses (which would likely improve our model).
    Page 2, “Introduction”
  3. We evaluated our system using the setup of the CoNLL 2005 semantic role labeling task. Thus, we trained on Sections 2–21 of PropBank and used Section 24 as development data.
    Page 6, “Experiments”
  4. We used the Charniak parses provided by the CoNLL distribution.
    Page 6, “Experiments”
  5. Our Transforms model takes as input the Charniak parses supplied by the CoNLL release, and labels every node with core arguments (ARG0–ARG5).
    Page 6, “Experiments”

parse tree

Appears in 4 sentences as: parse tree (5)
In Sentence Simplification for Semantic Role Labeling
  1. In the sentence “He expected to receive a prize for winning,” the path from “win” to its ARGO, “he”, involves the verbs “expect” and “receive” and the preposition “for.” The corresponding path through the parse tree likely occurs a relatively small number of times (or not at all) in the training corpus.
    Page 1, “Introduction”
  2. A transformation rule takes as input a parse tree and produces as output a different, changed parse tree.
    Page 2, “Transformation Rules”
  3. This procedure is quite expensive; we have to copy the entire parse tree at each step, and in general, this procedure could generate an exponential number of transformed parses.
    Page 4, “Simple Sentence Production”
  4. In our case, the AND nodes are similar to constituent nodes in a parse tree — each has a category (e.g. …
    Page 6, “Simplification Data Structure”

Baseline system

Appears in 3 sentences as: Baseline system (2), baseline system (1)
In Sentence Simplification for Semantic Role Labeling
  1. For these arguments, we simply filled in using our baseline system (specifically, any non-core argument which did not overlap an argument predicted by our model was added to the labeling).
    Page 6, “Experiments”
  2. achieving a statistically significant increase over the Baseline system (according to confidence intervals calculated for the CoNLL-2005 results).
    Page 7, “Experiments”
  3. The Transforms model correctly labels the arguments of “buy”, while the Baseline system misses the ARGO.
    Page 7, “Experiments”
