Index of papers in Proc. ACL 2013 that mention
  • coreference
Durrett, Greg and Hall, David and Klein, Dan
Abstract
Efficiently incorporating entity-level information is a challenge for coreference resolution systems due to the difficulty of exact inference over partitions.
Abstract
We describe an end-to-end discriminative probabilistic model for coreference that, along with standard pairwise features, enforces structural agreement constraints between specified properties of coreferent mentions.
Example
One way is to exploit the correct coreference decision we have already made, they_A referring to people, since people are not as likely to have a price as art items are.
Example
Because even these six mentions have hundreds of potential partitions into coreference chains, we cannot search over partitions exhaustively, and therefore we must design our model to be able to use this information while still admitting an efficient inference scheme.
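The "hundreds of potential partitions" follows from the Bell numbers: six mentions can be split into coreference chains in B(6) = 203 ways, and the count grows super-exponentially with the number of mentions. A minimal sketch (not from the paper) that computes these counts via the Bell triangle:

    # Number of ways to partition n mentions into coreference chains is the
    # Bell number B(n); B(6) = 203, which is why exhaustive search over
    # partitions is impractical even for very small documents.
    def bell(n):
        row = [1]
        for _ in range(n):
            nxt = [row[-1]]            # each row starts with the previous row's last entry
            for value in row:
                nxt.append(nxt[-1] + value)
            row = nxt
        return row[0]                  # first entry of row n is B(n)

    print([bell(n) for n in range(1, 9)])
    # [1, 2, 5, 15, 52, 203, 877, 4140]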
Introduction
The inclusion of entity-level features has been a driving force behind the development of many coreference resolution systems (Luo et al., 2004; Rahman and Ng, 2009; Haghighi and Klein, 2010; Lee et al., 2011).
Introduction
However, such systems may be locked into bad coreference decisions and are difficult to directly optimize for standard evaluation metrics.
Introduction
structural agreement factors softly drive properties of coreferent mentions to agree with one another.
Models
a_i ∈ {1, ..., i−1, <new>}; this variable specifies mention i’s selected antecedent or indicates that it begins a new coreference chain.
Models
Note that a set of coreference chains C (the final desired output) can be uniquely determined from a, but a is not uniquely determined by C.
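A small sketch (hypothetical, not the authors' code) of why a determines C but not the reverse: recovering chains from an antecedent vector whose entries take values in {1, ..., i−1, <new>} as above, where two different vectors can yield the same chains.

    NEW = None  # placeholder for the <new> decision

    def chains_from_antecedents(antecedents):
        """Recover coreference chains C from an antecedent vector a.
        antecedents[i] is either NEW (mention i starts a chain) or the
        index j < i of the chosen antecedent."""
        chain_of = {}   # mention index -> chain id
        chains = []     # list of lists of mention indices
        for i, a in enumerate(antecedents):
            if a is NEW:
                chain_of[i] = len(chains)
                chains.append([i])
            else:
                chain_of[i] = chain_of[a]
                chains[chain_of[a]].append(i)
        return chains

    # Two different antecedent vectors, one and the same set of chains {0,2,3}, {1}:
    print(chains_from_antecedents([NEW, NEW, 0, 2]))   # [[0, 2, 3], [1]]
    print(chains_from_antecedents([NEW, NEW, 0, 0]))   # [[0, 2, 3], [1]]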
Models
Figure 1: Our BASIC coreference model.
coreference is mentioned in 35 sentences in this paper.
Guinaudeau, Camille and Strube, Michael
Experiments
We also propose to use a coreference resolution system and consider coreferent entities to be the same discourse entity.
Experiments
As the coreference resolution system is trained on well-formed textual documents and expects a correct sentence ordering, we use in all our experiments only features that do not rely on sentence order (e.g.
Experiments
Second, we want to evaluate the influence of automatically performed coreference resolution in a controlled fashion.
The Entity Grid Model
Finally, they include a heuristic coreference resolution component by linking mentions which share a
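A toy sketch (illustrative only; the mentions, roles, and resolver output are made up) of the idea in these excerpts: coreferent mentions are collapsed into a single discourse entity before the entity grid is filled.

    # Each mention is (sentence_index, surface_form, grammatical_role S/O/X).
    mentions = [
        (0, "Barack Obama", "S"),
        (1, "the president", "S"),
        (1, "a speech", "O"),
        (2, "Obama", "X"),
    ]
    # Output of a hypothetical coreference resolver: sets of mention indices.
    coref_chains = [{0, 1, 3}, {2}]

    def entity_grid(mentions, chains, n_sentences):
        grid = {}
        for cid, chain in enumerate(chains):
            row = ["-"] * n_sentences
            for m in chain:
                sent, _, role = mentions[m]
                row[sent] = role
            grid[cid] = row
        return grid

    for cid, row in entity_grid(mentions, coref_chains, 3).items():
        print(cid, row)
    # 0 ['S', 'S', 'X']
    # 1 ['-', 'O', '-']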
coreference is mentioned in 19 sentences in this paper.
Lassalle, Emmanuel and Denis, Pascal
Abstract
This paper proposes a new method for significantly improving the performance of pairwise coreference models.
Abstract
In effect, our approach finds an optimal feature space (derived from a base feature set and indicator set) for discriminating coreferential mention pairs.
Introduction
Coreference resolution is the problem of partitioning a sequence of noun phrases (or mentions), as they occur in a natural language text, into a set of referential entities.
Introduction
A common approach to this problem is to separate it into two modules: on the one hand, one defines a model for evaluating coreference links, in general a discriminative classifier that detects coreferential mention pairs.
Introduction
In this kind of architecture, the performance of the entire coreference system strongly depends on the quality of the local pairwise classifier. Consequently, a lot of research effort on coreference resolution has focused on trying to boost the performance of the pairwise classifier.
Modeling pairs
Pairwise models basically employ one local classifier to decide whether two mentions are coreferential or not.
Modeling pairs
For instance, some coreference resolution systems process different kinds of anaphors separately, which suggests for example that pairs containing an anaphoric pronoun behave differently from pairs with non-
Modeling pairs
where Ω classically represents randomness, X is the space of objects (“mention pairs”) that is not directly observable, and y_ij(ω) ∈ Y = {+1, −1} are the labels indicating whether m_i and m_j are coreferential or not.
System description
We tested 3 classical greedy link selection strategies that form clusters from the classifier decision: Closest-First (merge mentions with their closest coreferent mention on the left) (Soon et al., 2001),
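A sketch of the Closest-First strategy mentioned here, under an assumed interface: is_coreferent is a stand-in for the pairwise classifier, and each mention is linked to the nearest preceding mention the classifier accepts.

    def closest_first(mentions, is_coreferent):
        """is_coreferent(m_i, m_j) -> bool is a hypothetical pairwise classifier."""
        cluster_of = list(range(len(mentions)))   # start with singleton clusters

        def find(i):
            while cluster_of[i] != i:
                i = cluster_of[i]
            return i

        for j in range(1, len(mentions)):
            for i in range(j - 1, -1, -1):         # scan antecedents closest-first
                if is_coreferent(mentions[i], mentions[j]):
                    cluster_of[find(j)] = find(i)  # merge with that cluster
                    break                          # stop at the first (closest) link
        clusters = {}
        for idx in range(len(mentions)):
            clusters.setdefault(find(idx), []).append(idx)
        return list(clusters.values())

    mentions = ["Mary", "the CEO", "a report", "she"]
    print(closest_first(mentions, lambda a, b: {a, b} <= {"Mary", "the CEO", "she"}))
    # [[0, 1, 3], [2]]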
coreference is mentioned in 13 sentences in this paper.
Li, Peifeng and Zhu, Qiaoming and Zhou, Guodong
Abstract
To resolve such a problem, this paper proposes a novel global argument inference model that explores specific relationships among relevant event mentions, such as Coreference, Sequence and Parallel, to recover inter-sentence arguments in the sentence, discourse and document layers, which represent the cohesion of an event or a topic.
Inferring Inter-Sentence Arguments on Relevant Event Mentions
In this paper, we divide the relations among relevant event mentions into three categories: Coreference, Sequence and Parallel.
Inferring Inter-Sentence Arguments on Relevant Event Mentions
An event may have more than one mention in a document, and coreferent event mentions refer to the same event, consistent with the definition used in the ACE evaluations.
Inferring Inter-Sentence Arguments on Relevant Event Mentions
Those coreference event mentions always have the same arguments and roles.
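A toy sketch of the intuition in these excerpts: since coreferent event mentions refer to the same event, an argument realized on one mention can be recovered for the others. The structures and names below are illustrative; the paper formulates this as a global inference model, not a simple copy step.

    event_mentions = {
        "E1": {"trigger": "attacked", "arguments": {"Attacker": "the rebels"}},
        "E3": {"trigger": "attack",   "arguments": {"Place": "the capital"}},
    }
    coreferent = [("E1", "E3")]   # hypothetical event coreference output

    def propagate_arguments(mentions, coref_pairs):
        # Coreferent event mentions share arguments, so merge their argument maps.
        for a, b in coref_pairs:
            merged = {**mentions[a]["arguments"], **mentions[b]["arguments"]}
            mentions[a]["arguments"] = dict(merged)
            mentions[b]["arguments"] = dict(merged)
        return mentions

    print(propagate_arguments(event_mentions, coreferent)["E3"]["arguments"])
    # {'Attacker': 'the rebels', 'Place': 'the capital'}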
Introduction
extractor, it is really challenging to recognize these entities as the arguments of its coreferent mention E3, since, to reduce redundancy in a Chinese discourse, later sentences omit many of the entities already mentioned in previous sentences.
coreference is mentioned in 12 sentences in this paper.
Martschat, Sebastian
Abstract
We present an unsupervised model for coreference resolution that casts the problem as a clustering task in a directed labeled weighted multigraph.
Introduction
Coreference resolution is the task of determining which mentions in a text refer to the same entity.
Introduction
Quite recently, however, rule-based approaches regained popularity due to Stanford’s multi-pass sieve approach which exhibits state-of-the-art performance on many standard coreference data sets (Raghunathan et al., 2010) and also won the CoNLL-2011 shared task on coreference resolution (Lee et al., 2011; Pradhan et al., 2011).
Introduction
In this paper we present a graph-based approach for coreference resolution that models a document to be processed as a graph.
Related Work
Graph-based coreference resolution.
Related Work
Nicolae and Nicolae (2006) phrase coreference resolution as a graph clustering problem: they first perform pairwise classification and then construct a graph using the derived confidence values as edge weights.
Related Work
(2010) and Cai and Strube (2010) perform coreference resolution in one step using graph partitioning approaches.
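A sketch of the generic graph-based recipe summarized in this related-work passage: mentions become nodes, pairwise classifier confidences become edge weights, and clustering the graph yields entities. Thresholding plus connected components below is a simple stand-in for the partitioning and clustering methods cited, not the paper's multigraph model.

    from itertools import combinations

    def cluster_mentions(mentions, confidence, threshold=0.5):
        """confidence(m_i, m_j) -> float is a hypothetical pairwise scorer."""
        adj = {m: set() for m in range(len(mentions))}
        for i, j in combinations(range(len(mentions)), 2):
            if confidence(mentions[i], mentions[j]) >= threshold:
                adj[i].add(j)
                adj[j].add(i)
        seen, clusters = set(), []
        for start in range(len(mentions)):        # connected components
            if start in seen:
                continue
            stack, component = [start], []
            while stack:
                node = stack.pop()
                if node in seen:
                    continue
                seen.add(node)
                component.append(node)
                stack.extend(adj[node] - seen)
            clusters.append(sorted(component))
        return clusters

    mentions = ["Angela Merkel", "the chancellor", "Berlin", "she"]
    scores = {frozenset({0, 1}): 0.9, frozenset({1, 3}): 0.7, frozenset({0, 2}): 0.2}
    print(cluster_mentions(mentions, lambda a, b: scores.get(
        frozenset({mentions.index(a), mentions.index(b)}), 0.0)))
    # [[0, 1, 3], [2]]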
coreference is mentioned in 30 sentences in this paper.
Wolfe, Travis and Van Durme, Benjamin and Dredze, Mark and Andrews, Nicholas and Beller, Charley and Callison-Burch, Chris and DeYoung, Jay and Snyder, Justin and Weese, Jonathan and Xu, Tan and Yao, Xuchen
Evaluation
For richer annotations that include lemmatizations, part of speech, NER, and in-doc coreference, we preprocessed each of the datasets using tools similar to those used to create the Annotated Gigaword corpus (Napoles et al., 2012).
Evaluation
Extended Event Coreference Bank: Based on the dataset of Bejan and Harabagiu (2010), Lee et al. (2012) introduced the Extended Event Coreference Bank (EECB) to evaluate cross-document event coreference.
Introduction
Similar to entity coreference resolution, almost all of this work assumes unanchored mentions: predicate argument tuples are grouped together based on coreferent events.
Introduction
The first work on event coreference dates back to Bagga and Baldwin (1999).
PARMA
Predicates are represented as mention spans and arguments are represented as coreference chains (sets of mention spans) provided by in-document coreference resolution systems such as the one included in the Stanford NLP toolkit.
PARMA
For argument coref chains we heuristically choose a canonical mention to represent each chain, and some features only look at this canonical mention.
PARMA
The canonical mention is chosen based on length, information about the head word, and position in the document. In most cases, coref chains that are longer than one are proper nouns and the canonical mention is the first and longest mention (outranking pronominal references and other name shortenings).
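A rough sketch of picking a canonical mention for a coref chain in the spirit of the heuristic described here (prefer non-pronouns, then longer mentions, then earlier ones); the head-word criteria are omitted, and the mention tuples are illustrative.

    PRONOUNS = {"he", "she", "it", "they", "him", "her", "them"}

    def canonical_mention(chain):
        """chain is a list of (start_token, end_token, text) mentions."""
        def key(mention):
            start, end, text = mention
            is_pronoun = text.lower() in PRONOUNS
            # prefer non-pronouns, then longer mentions, then earlier positions
            return (is_pronoun, -(end - start), start)
        return min(chain, key=key)

    chain = [(40, 41, "he"), (3, 6, "President Barack Obama"), (25, 26, "Obama")]
    print(canonical_mention(chain))
    # (3, 6, 'President Barack Obama')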
coreference is mentioned in 10 sentences in this paper.
Laparra, Egoitz and Rigau, German
Conclusions and Future Work
For instance, our system can also profit from additional annotations such as coreference, which has proved useful in previous work.
Evaluation
For each missing argument, the gold-standard includes the whole coreference chain of the filler.
Evaluation
Therefore, the scorer selects from all coreferent mentions the highest Dice value.
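A sketch of what selecting the highest Dice value over the coreference chain amounts to: the predicted filler is compared against every mention in the gold chain and credited with the best token-overlap Dice score. Whitespace tokenization here is a simplifying assumption.

    def dice(tokens_a, tokens_b):
        a, b = set(tokens_a), set(tokens_b)
        if not a and not b:
            return 1.0
        return 2 * len(a & b) / (len(a) + len(b))

    def best_dice(predicted_filler, gold_chain):
        # credit the prediction with its best match anywhere in the gold chain
        return max(dice(predicted_filler.split(), gold.split()) for gold in gold_chain)

    gold_chain = ["the Seattle-based company", "Boeing", "it"]
    print(best_dice("the company", gold_chain))
    # 0.8, from the overlap with the first mention in the chain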
ImpAr algorithm
Filling the implicit arguments of a predicate has been identified as a particular case of coreference, very close to pronoun resolution (Silberer and Frank, 2012).
Related Work
This work applied selectional restrictions together with coreference chains, in a very specific domain.
Related Work
These early works agree that the problem is, in fact, a special case of anaphora or coreference resolution.
Related Work
Silberer and Frank (2012) adapted an entity-based coreference resolution model to extend automatically the training corpus.
coreference is mentioned in 9 sentences in this paper.
Wang, Lu and Raghavan, Hema and Castelli, Vittorio and Florian, Radu and Cardie, Claire
Experimental Setup
Documents are processed by a full NLP pipeline, including token and sentence segmentation, parsing, semantic role labeling, and an information extraction pipeline consisting of mention detection, NP coreference, cross-document resolution, and relation detection (Florian et al., 2004; Luo et al., 2004; Luo and Zitouni, 2005).
The Framework
Finally, the postprocessing stage applies coreference resolution and sentence reordering to build the summary.
The Framework
Then we conduct simple query expansion based on the title of the topic and cross-document coreference resolution.
The Framework
And for each mention in the query, we add other mentions within the set of documents that corefer with this mention.
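A sketch of the expansion step described here: every mention in the document set that corefers with a query mention is added to the expanded query. The chains below stand in for the output of a cross-document coreference system and are purely illustrative.

    def expand_query(query_mentions, coref_chains):
        """coref_chains: iterable of sets of mention strings from a coreference system."""
        expanded = set(query_mentions)
        for chain in coref_chains:
            if expanded & chain:       # the chain touches the query
                expanded |= chain      # add all of its mentions
        return expanded

    coref_chains = [
        {"Hurricane Katrina", "Katrina", "the storm"},
        {"New Orleans", "the city"},
    ]
    print(expand_query({"Katrina"}, coref_chains))
    # {'Hurricane Katrina', 'Katrina', 'the storm'}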
coreference is mentioned in 6 sentences in this paper.
Abend, Omri and Rappoport, Ari
The UCCA Scheme
Unlike common practice in grammatical annotation, linkage relations in UCCA can cross sentence boundaries, as can relations represented in other layers (e.g., coreference).
The UCCA Scheme
Another immediate extension to UCCA’s foundational layer can be the annotation of coreference relations.
The UCCA Scheme
A coreference layer would annotate a relation between “John” and “his” by introducing a new node whose descendants are these two units.
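A toy sketch of the structure described in this excerpt: the coreference layer introduces a new node whose descendants are the two coreferring units. Node and label names are illustrative, not UCCA's actual category inventory.

    class Node:
        def __init__(self, label, children=()):
            self.label = label
            self.children = list(children)

    # Foundational-layer units already exist; the coreference layer adds one
    # node on top of them, dominating both coreferring units.
    john = Node("John")
    his = Node("his")
    coref_node = Node("COREF", children=[john, his])

    print(coref_node.label, "->", [c.label for c in coref_node.children])
    # COREF -> ['John', 'his']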
coreference is mentioned in 4 sentences in this paper.