Beyond NomBank: A Study of Implicit Arguments for Nominal Predicates
Gerber, Matthew and Chai, Joyce

Article Structure

Abstract

Despite its substantial coverage, NomBank does not account for all within-sentence arguments and ignores extra-sentential arguments altogether.

Introduction

Verbal and nominal semantic role labeling (SRL) have been studied independently of each other (Carreras and Màrquez, 2005; Gerber et al., 2009) as well as jointly (Surdeanu et al., 2008; Hajič et al., 2009).

Related work

Palmer et al.

Data annotation and analysis

3.1 Data annotation

Implicit argument identification

4.1 Model formulation

Evaluation

We trained the feature-based logistic regression model over 816 annotated predicate instances associated with 650 implicitly filled argument positions (not all predicate instances had implicit arguments).
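
As a rough illustration of this training setup, the following minimal sketch (not the authors' code) fits a binary logistic regression over toy candidate feature vectors with scikit-learn; the three feature columns and all data values are invented placeholders, whereas the paper's model uses a richer, automatically selected feature set.

    from sklearn.linear_model import LogisticRegression
    import numpy as np

    # Toy feature vectors, one per candidate tuple (p, iarg_n, c');
    # y = 1 means the candidate chain fills the missing position.
    # The columns are invented placeholders, e.g.
    # [count-statistic score, sentence distance, chain length].
    X = np.array([
        [0.9, 0.0, 3.0],
        [0.1, 2.0, 1.0],
        [0.7, 1.0, 2.0],
        [0.0, 3.0, 1.0],
    ])
    y = np.array([1, 0, 1, 0])

    model = LogisticRegression()
    model.fit(X, y)

    # At test time, candidates for a missing position are ranked by
    # predicted probability of being a filler.
    probs = model.predict_proba(X)[:, 1]
    print("highest-scoring candidate:", int(np.argmax(probs)))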

Discussion

6.1 Feature ablation

Conclusions and future work

Current SRL approaches limit the search for arguments to the sentence containing the predicate of interest.

Topics

coreference

Appears in 13 sentences as: coreference (11) coreferent (3) coreferential (1)
  1. (2005) suggested approaches to implicit argument identification based on observed coreference patterns; however, the authors did not implement and evaluate such methods.
    Page 2, “Related work”
  2. analysis of naturally occurring coreference patterns to aid implicit argument identification.
    Page 2, “Related work”
  3. A candidate constituent c will often form a coreference chain with other constituents in the discourse.
    Page 4, “Implicit argument identification”
  4. When determining whether c is the iarg_0 of investment, one can draw evidence from other mentions in c's coreference chain.
    Page 4, “Implicit argument identification”
  5. Thus, the unit of classification for a candidate constituent c is the three-tuple (p, iarg_n, c′), where c′ is a coreference chain comprising c and its coreferent constituents.³ We defined a binary classification function Pr(+|(p, iarg_n, c′)) that predicts the probability that the entity referred to by c fills the missing argument position iarg_n of predicate instance p. In the remainder of this paper, we will refer to c as the primary filler, differentiating it from other mentions in the coreference chain c′.
    Page 4, “Implicit argument identification” (see the chain-scoring sketch after this list)
  6. ³We used OpenNLP for coreference identification: http://opennlp.sourceforge.net
    Page 4, “Implicit argument identification”
  7. We then identified coreferent pairs of arguments using OpenNLP.
    Page 5, “Implicit argument identification”
  8. Suppose the resulting data has N coreferential pairs of argument positions.
    Page 5, “Implicit argument identification”
  9. The numerator in Equation 6 is defined as M/N, where M is the number of coreferential pairs comprising the two argument positions in question. Each term in the denominator is obtained similarly, except that M is computed as the total number of coreference pairs comprising an argument position (e.g., (p, arg_n)) and any other argument position.
    Page 5, “Implicit argument identification” (see the count-statistic sketch after this list)
  10. First, the system identified coreferent mentions of Olivetti that participated in exporting and supplying events (not shown).
    Page 8, “Discussion”
  11. Although we consistently observed development gains from using automatic coreference resolution, this process creates errors that need to be studied more closely.
    Page 9, “Conclusions and future work”
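
To make the unit of classification in items 3-6 concrete, here is a minimal Python sketch of scoring a candidate by pooling evidence over its coreference chain c′. The Mention type, the per-mention score, and the choice of max-pooling are illustrative assumptions, not the paper's implementation.

    from dataclasses import dataclass

    @dataclass
    class Mention:
        text: str
        distance: int  # sentences between the mention and the predicate

    def mention_score(p, iarg_n, m):
        # Stand-in for real evidence (semantic-role relationships,
        # discourse relations, etc.); here, just a distance decay.
        return 1.0 / (1.0 + m.distance)

    def chain_score(p, iarg_n, chain):
        # Pool evidence from every mention in c'; taking the max over
        # mentions is one simple pooling choice.
        return max(mention_score(p, iarg_n, m) for m in chain)

    # The primary filler plus a coreferent mention two sentences away.
    chain = [Mention("Olivetti", 0), Mention("the company", 2)]
    print(chain_score("investment", "iarg_0", chain))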
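
A companion sketch for items 7-9: with N coreferential pairs of argument positions in total, the joint term is M/N, where M counts pairs linking the two specific positions, and each marginal term uses the count of pairs containing that position. Combining the terms as pointwise mutual information is our reading of Equation 6 and should be treated as an assumption; the pair data below is fabricated for illustration.

    import math
    from collections import Counter

    # Each pair links two argument positions, written "predicate.argN".
    pairs = [
        ("invest.arg0", "ship.arg0"),
        ("invest.arg0", "ship.arg0"),
        ("invest.arg0", "sale.arg1"),
        ("loan.arg2", "ship.arg0"),
    ]
    N = len(pairs)

    joint = Counter(tuple(sorted(p)) for p in pairs)
    marginal = Counter()
    for a, b in pairs:
        marginal[a] += 1
        marginal[b] += 1

    def pmi(pos1, pos2):
        p_joint = joint[tuple(sorted((pos1, pos2)))] / N  # M / N
        return math.log(p_joint / ((marginal[pos1] / N) * (marginal[pos2] / N)))

    print(pmi("invest.arg0", "ship.arg0"))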


TreeBank

Appears in 7 sentences as: TreeBank (7)
  1. However, as shown by the following example from the Penn TreeBank (Marcus et al., 1993), this restriction excludes extra-sentential arguments:
    Page 1, “Introduction”
  2. Implicit arguments have not been annotated within the Penn TreeBank, which is the textual and syntactic basis for NomBank.
    Page 2, “Data annotation and analysis”
  3. Thus, to facilitate our study, we annotated implicit arguments for instances of nominal predicates within the standard training, development, and testing sections of the TreeBank.
    Page 2, “Data annotation and analysis”
  4. Consider the following abridged sentences, which are adjacent in their Penn TreeBank document:
    Page 4, “Implicit argument identification”
  5. Starting with a wide range of features, we performed floating forward feature selection (Pudil et al., 1994) over held-out development data comprising implicit argument annotations from section 24 of the Penn TreeBank.
    Page 4, “Implicit argument identification” (see the selection sketch after this list)
  6. Throughout our study, we used gold-standard discourse relations provided by the Penn Discourse TreeBank (Prasad et al., 2008).
    Page 6, “Implicit argument identification”
  7. These predicates are among the most frequent in the TreeBank and are likely to require approaches that differ from the ones we pursued.
    Page 9, “Conclusions and future work”
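
Item 5 above mentions floating forward feature selection (Pudil et al., 1994). The sketch below is a simplified rendering of that algorithm (it omits the usual bookkeeping of the best subset found at each size), with a toy evaluate function standing in for development-set performance.

    def sffs(features, evaluate, target_size):
        selected = []
        while len(selected) < target_size:
            # Forward step: add the single feature that helps most.
            best = max((f for f in features if f not in selected),
                       key=lambda f: evaluate(selected + [f]))
            selected.append(best)
            # Floating step: drop features while that improves the score.
            improved = True
            while improved and len(selected) > 1:
                improved = False
                for f in list(selected):
                    reduced = [g for g in selected if g != f]
                    if evaluate(reduced) > evaluate(selected):
                        selected = reduced
                        improved = True
                        break
        return selected

    # Toy objective: 'a' and 'c' are useful; each feature has a small cost.
    gain = {"a": 0.4, "b": 0.05, "c": 0.3, "d": 0.0}
    evaluate = lambda s: sum(gain[f] for f in s) - 0.02 * len(s)
    print(sffs(list("abcd"), evaluate, target_size=2))  # ['a', 'c']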


semantic role

Appears in 5 sentences as: semantic role (3) semantic roles (3)
  1. Verbal and nominal semantic role labeling (SRL) have been studied independently of each other (Carreras and Màrquez, 2005; Gerber et al., 2009) as well as jointly (Surdeanu et al., 2008; Hajič et al., 2009).
    Page 1, “Introduction”
  2. However, as noted by Iida et al., grammatical cases do not stand in a one-to-one relationship with semantic roles in Japanese (the same is true for English).
    Page 2, “Related work”
  3. Feature 1 models the semantic role relationship between each mention in c′ and the missing argument position iarg_n.
    Page 4, “Implicit argument identification” (see the role-mapping sketch after this list)
  4. semantic roles using SemLink.⁵ For explanation purposes, consider again Example 1, where we are trying to fill the iarg_0 of shipping.
    Page 5, “Implicit argument identification”
  5. As shown, we observed significant losses when excluding features that relate the semantic roles of mentions in c′ to the semantic role of the missing argument position.
    Page 7, “Discussion”
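
Feature 1 (item 3 above) relates roles across predicates; the sketch below shows the kind of cross-predicate role check this implies. The tiny mapping table imitates the PropBank/NomBank-to-thematic-role mapping that SemLink provides, but its entries are made up for illustration.

    # Hypothetical fragment of a SemLink-style mapping:
    # (predicate, numbered argument) -> thematic role.
    ROLE_MAP = {
        ("invest", "arg0"): "Agent",
        ("ship", "arg0"): "Agent",
        ("ship", "arg1"): "Theme",
    }

    def roles_match(filled, missing):
        # True if the role a mention fills elsewhere maps to the same
        # thematic role as the missing argument position iarg_n.
        r1, r2 = ROLE_MAP.get(filled), ROLE_MAP.get(missing)
        return r1 is not None and r1 == r2

    # A mention that is arg0 of 'invest' is a plausible iarg_0 of 'ship'.
    print(roles_match(("invest", "arg0"), ("ship", "arg0")))  # True
    print(roles_match(("ship", "arg1"), ("ship", "arg0")))    # False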


gold-standard

Appears in 4 sentences as: gold-standard (4)
  1. Throughout our study, we used gold-standard discourse relations provided by the Penn Discourse TreeBank (Prasad et al., 2008).
    Page 6, “Implicit argument identification”
  2. To factor out errors from standard SRL analyses, the model used gold-standard argument labels provided by PropBank and NomBank.
    Page 6, “Evaluation”
  3. We also evaluated an oracle model that made gold-standard predictions for candidates within the two-sentence prediction window.
    Page 6, “Evaluation”
  4. First, we have created gold-standard implicit argument annotations for a small set of pervasive nominal predicates.⁷ Our analysis shows that these annotations add 65% to the role coverage of NomBank.
    Page 9, “Conclusions and future work”


Penn TreeBank

Appears in 4 sentences as: Penn TreeBank (4)
  1. However, as shown by the following example from the Penn TreeBank (Marcus et al., 1993), this restriction excludes extra-sentential arguments:
    Page 1, “Introduction”
  2. Implicit arguments have not been annotated within the Penn TreeBank, which is the textual and syntactic basis for NomBank.
    Page 2, “Data annotation and analysis”
  3. Consider the following abridged sentences, which are adjacent in their Penn TreeBank document:
    Page 4, “Implicit argument identification”
  4. Starting with a wide range of features, we performed floating forward feature selection (Pudil et al., 1994) over held-out development data comprising implicit argument annotations from section 24 of the Penn TreeBank.
    Page 4, “Implicit argument identification”
