Index of papers in Proc. ACL that mention
  • coreference resolution
Yang, Xiaofeng and Su, Jian and Lang, Jun and Tan, Chew Lim and Liu, Ting and Li, Sheng
Abstract
The traditional mention-pair model for coreference resolution cannot capture information beyond mention pairs for both learning and testing.
Abstract
To deal with this problem, we present an expressive entity-mention model that performs coreference resolution at an entity level.
Abstract
The evaluation on the ACE data set shows that the ILP based entity-mention model is effective for the coreference resolution task.
Introduction
Coreference resolution is the process of linking multiple mentions that refer to the same entity.
Introduction
Most previous work adopts the mention-pair model, which recasts coreference resolution as a binary classification problem of determining whether or not two mentions in a document are co-referring (e.g.
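As an illustration of the mention-pair recipe described in this excerpt, here is a minimal sketch assuming a hypothetical pairwise scorer (`corefer_prob`) and the common closest-first linking rule; it is not this paper's actual system.

```python
# Sketch of the mention-pair model: score each (antecedent, mention) pair
# independently, then link each mention to its closest positive antecedent.
from typing import Callable, List, Tuple

def closest_first_linking(
    mentions: List[str],
    corefer_prob: Callable[[str, str], float],  # hypothetical pairwise scorer
    threshold: float = 0.5,
) -> List[Tuple[int, int]]:
    """Return (antecedent_index, mention_index) links."""
    links = []
    for j in range(1, len(mentions)):
        # Scan antecedent candidates from nearest to farthest.
        for i in range(j - 1, -1, -1):
            if corefer_prob(mentions[i], mentions[j]) >= threshold:
                links.append((i, j))
                break  # closest-first: stop at the first positive antecedent
    return links

# Toy usage with a crude head-match "classifier".
mentions = ["Barack Obama", "the president", "Obama", "Michelle Obama"]
head_match = lambda a, b: 1.0 if a.split()[-1] == b.split()[-1] else 0.0
print(closest_first_linking(mentions, head_match))  # [(0, 2), (2, 3)]
```

The toy output also illustrates the model's weakness: each pair is judged in isolation, so the head-match heuristic happily links "Michelle Obama" to "Obama".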
Introduction
An alternative learning model that can overcome this problem performs coreference resolution based on entity-mention pairs (Luo et al., 2004; Yang et al., 2004b).
Related Work
There are plenty of learning-based coreference resolution systems that employ the mention-pair model.
Related Work
Luo et al. (2004) propose a system that performs coreference resolution by searching a large space of entities.
Related Work
Yang et al. (2004b) suggest an entity-based coreference resolution system.
coreference resolution is mentioned in 22 sentences in this paper.
Stoyanov, Veselin and Gilbert, Nathan and Cardie, Claire and Riloff, Ellen
Introduction
As is common for many natural language processing problems, the state-of-the-art in noun phrase (NP) coreference resolution is typically quantified based on system performance on manually annotated text corpora.
Introduction
MUC-6 (1995), ACE NIST (2004)) and their use in many formal evaluations, as a field we can make surprisingly few conclusive statements about the state-of-the-art in NP coreference resolution.
Introduction
In particular, it remains difficult to assess the effectiveness of different coreference resolution approaches, even in relative terms.
coreference resolution is mentioned in 48 sentences in this paper.
Martschat, Sebastian
Abstract
We present an unsupervised model for coreference resolution that casts the problem as a clustering task in a directed labeled weighted multigraph.
Introduction
Coreference resolution is the task of determining which mentions in a text refer to the same entity.
Introduction
Quite recently, however, rule-based approaches regained popularity due to Stanford’s multi-pass sieve approach which exhibits state-of-the-art performance on many standard coreference data sets (Raghunathan et al., 2010) and also won the CoNLL-2011 shared task on coreference resolution (Lee et al., 2011; Pradhan et al., 2011).
Introduction
In this paper we present a graph-based approach for coreference resolution that models a document to be processed as a graph.
Related Work
Graph-based coreference resolution.
Related Work
Nicolae and Nicolae (2006) phrase coreference resolution as a graph clustering problem: they first perform pairwise classification and then construct a graph using the derived confidence values as edge weights.
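To make the two-step recipe in this excerpt concrete, here is a rough sketch that scores all mention pairs and then clusters over the thresholded graph; the union-find connected-components step is a simplification standing in for the graph-cut clustering Nicolae and Nicolae actually use.

```python
# Sketch: build a mention graph with pairwise coreference confidences as edge
# weights, then cluster by keeping edges above a threshold (union-find over
# the thresholded graph, i.e. connected components).
from itertools import combinations

def cluster_mentions(mentions, confidence, threshold=0.5):
    parent = list(range(len(mentions)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    for i, j in combinations(range(len(mentions)), 2):
        if confidence(mentions[i], mentions[j]) >= threshold:
            union(i, j)

    clusters = {}
    for i, m in enumerate(mentions):
        clusters.setdefault(find(i), []).append(m)
    return list(clusters.values())

score = lambda a, b: 0.9 if {a, b} <= {"Obama", "he", "the senator"} else 0.1
print(cluster_mentions(["Obama", "he", "the senator", "Clinton"], score))
# -> [['Obama', 'he', 'the senator'], ['Clinton']]
```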
Related Work
(2010) and Cai and Strube (2010) perform coreference resolution in one step using graph partitioning approaches.
coreference resolution is mentioned in 19 sentences in this paper.
Björkelund, Anders and Kuhn, Jonas
Abstract
We investigate different ways of learning structured perceptron models for coreference resolution when using nonlocal features and beam search.
Background
Coreference resolution is the task of grouping referring expressions (or mentions) in a text into disjoint clusters such that all mentions in a cluster refer to the same entity.
Background
In recent years much work on coreference resolution has been devoted to increasing the expressivity of the classical mention-pair model, in which each coreference classification decision is limited to information about two mentions that make up a pair.
Background
Nevertheless, the two best systems in the latest CoNLL Shared Task on coreference resolution (Pradhan et al., 2012) were both variants of the mention-pair model.
Introducing Nonlocal Features
While beam search and early updates have been successfully applied to other NLP applications, our task differs in two important aspects: First, coreference resolution is a much more difficult task, which relies on more (world) knowledge than what is available in the training data.
Introduction
We show that for the task of coreference resolution the straightforward combination of beam search and early update (Collins and Roark, 2004) falls short of more limited feature sets that allow for exact search.
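For readers unfamiliar with the early-update strategy referenced here, the following is a hedged sketch of beam search with early update in a structured perceptron; `expand` and `gold_prefix` are assumed task-specific helpers, and this is not the authors' implementation.

```python
# Sketch of beam search with early update (Collins and Roark, 2004) for a
# structured perceptron. `expand(state)` yields (next_state, feature_dict)
# pairs and `gold_prefix(example, t)` returns (gold_state, gold_feature_dict)
# after t decisions; both are assumed task-specific helpers.

def early_update_step(example, weights, expand, gold_prefix, initial, beam_size=8, steps=10):
    def dot(feats):
        return sum(weights.get(f, 0.0) * v for f, v in feats.items())

    beam = [(0.0, initial, {})]  # (score, state, accumulated features)
    for t in range(1, steps + 1):
        candidates = []
        for score, state, feats in beam:
            for nxt, f in expand(state):
                merged = dict(feats)
                for k, v in f.items():
                    merged[k] = merged.get(k, 0.0) + v
                candidates.append((score + dot(f), nxt, merged))
        if not candidates:
            return weights
        beam = sorted(candidates, key=lambda c: -c[0])[:beam_size]

        gold_state, gold_feats = gold_prefix(example, t)
        if all(state != gold_state for _, state, _ in beam):
            # Gold prefix fell out of the beam: update toward the gold prefix,
            # away from the current best hypothesis, and stop (early update).
            best_feats = beam[0][2]
            for k, v in gold_feats.items():
                weights[k] = weights.get(k, 0.0) + v
            for k, v in best_feats.items():
                weights[k] = weights.get(k, 0.0) - v
            return weights
    return weights
```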
Introduction
This approach provides a powerful boost to the performance of coreference resolvers, but we find that it does not combine well with the LaSO learning strategy.
Related Work
The perceptron has previously been used to train coreference resolvers either by casting the problem as a binary classification problem that considers pairs of mentions in isolation (Bengtson and Roth, 2008; Stoyanov et al., 2009; Chang et al., 2012, inter alia) or in the structured manner, where a clustering for an entire document is predicted in one go (Fernandes et al., 2012).
Results
For English we also compare it to the Berkeley system (Durrett and Klein, 2013), which, to our knowledge, is the best publicly available system for English coreference resolution (denoted D&K).
coreference resolution is mentioned in 13 sentences in this paper.
Guinaudeau, Camille and Strube, Michael
Experiments
We also propose to use a coreference resolution system and consider coreferent entities to be the same discourse entity.
Experiments
As the coreference resolution system is trained on well-formed textual documents and expects a correct sentence ordering, we use in all our experiments only features that do not rely on sentence order (e.g.
Experiments
Second, we want to evaluate the influence of automatically performed coreference resolution in a controlled fashion.
The Entity Grid Model
Finally, they include a heuristic coreference resolution component by linking mentions which share a
coreference resolution is mentioned in 15 sentences in this paper.
Bansal, Mohit and Klein, Dan
Abstract
To address semantic ambiguities in coreference resolution, we use Web n-gram features that capture a range of world knowledge in a diffuse but robust way.
Abstract
When added to a state-of-the-art coreference baseline, our Web features give significant gains on multiple datasets (ACE 2004 and ACE 2005) and metrics (MUC and B3), resulting in the best results reported to date for the end-to-end task of coreference resolution.
Baseline System
Reconcile is one of the best implementations of the mention-pair model (Soon et al., 2001) of coreference resolution.
Experiments
We show results on three popular and comparatively larger coreference resolution data sets — the ACE04, ACE05, and ACE05-ALL datasets from the ACE Program (NIST, 2004).
Introduction
Many of the most difficult ambiguities in coreference resolution are semantic in nature.
Introduction
There have been multiple previous systems that incorporate some form of world knowledge in coreference resolution tasks.
Introduction
There is also work on end-to-end coreference resolution that uses large noun-similarity lists (Daumé III and Marcu, 2005) or structured knowledge bases such as Wikipedia (Yang and Su, 2007; Haghighi and Klein, 2009; Kobdani et al., 2011) and YAGO (Rahman and Ng, 2011).
Semantics via Web Features
Our Web features for coreference resolution are simple and capture a range of diffuse world knowledge.
Semantics via Web Features
datasets for end-to-end coreference resolution (see Section 4.3).
Semantics via Web Features
This keeps the total number of features small, which is important for the relatively small datasets used for coreference resolution.
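To illustrate the flavor of Web feature this section describes, here is a small sketch that turns a co-occurrence count into a binned feature; the count source and the log-count binning are assumptions for illustration, not the authors' exact feature set.

```python
# Sketch: turn a Web n-gram co-occurrence count for two head words into a
# coarse binned feature. `ngram_count` stands in for a real n-gram lookup.
import math

def cooccurrence_feature(head1: str, head2: str, ngram_count) -> str:
    count = ngram_count(f"{head1} {head2}") + ngram_count(f"{head2} {head1}")
    bucket = int(math.log10(count)) if count > 0 else -1
    return f"web_cooc_bin={bucket}"

# Toy usage with hard-coded counts.
toy_counts = {"Obama president": 120000, "president Obama": 450000}
print(cooccurrence_feature("Obama", "president", lambda q: toy_counts.get(q, 0)))
# -> web_cooc_bin=5   (log10(570000) ≈ 5.76, truncated to 5)
```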
coreference resolution is mentioned in 13 sentences in this paper.
Liu, Changsong and She, Lanbo and Fang, Rui and Chai, Joyce Y.
Evaluation and Discussion
Unsurprisingly, coreference resolution performance plays an important role in the final grounding performance (see the grounding performance when using manually annotated coreference in the bottom part of Table 1).
Evaluation and Discussion
Due to the simplicity of our current coreference classifier and the flexibility of the human-human dialogue in the data, the pairwise coreference resolution only achieves 0.74 in precision and 0.43 in recall.
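For orientation, combining those two figures with the standard F1 definition (our arithmetic, not a number reported in the paper) gives:

\[
F_1 = \frac{2PR}{P + R} = \frac{2 \times 0.74 \times 0.43}{0.74 + 0.43} \approx 0.54
\]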
Evaluation and Discussion
The low recall of coreference resolution makes it difficult to link interrelated referring expressions and resolve them jointly.
Probabilistic Labeling for Reference Grounding
Our system first processes the data using automatic semantic parsing and coreference resolution.
Probabilistic Labeling for Reference Grounding
We then perform pairwise coreference resolution on the discourse entities to find out the discourse relations between entities from different utterances.
Probabilistic Labeling for Reference Grounding
Based on the semantic parsing and pairwise coreference resolution results, our system further builds a graph representation to capture the collaborative discourse and formulate referential grounding as a probabilistic labeling problem, as described next.
coreference resolution is mentioned in 9 sentences in this paper.
Dubey, Amit
Abstract
This paper introduces a novel sentence processing model that consists of a parser augmented with a probabilistic logic-based model of coreference resolution, which allows us to simulate how context interacts with syntax in a reading task.
Introduction
There are three main parts of the model: a syntactic processor, a coreference resolution system, and a simple pragmatics processor which computes certain limited forms of discourse coherence.
Introduction
The coreference resolution system is implemented
Model
The model comprises three parts: a parser, a coreference resolution system, and a pragmatics subsystem.
Model
The primary function of the discourse processing module is to perform coreference resolution for each mention in an incrementally processed text.
Model
Note that, unlike Huang et al., we assume an ordering on x and y if Coref(x, y) is true: y must occur earlier in the document than x. The remaining predicates in Table 1 are a subset of features used by other coreference resolution systems (cf.
coreference resolution is mentioned in 9 sentences in this paper.
Mirkin, Shachar and Dagan, Ido and Pado, Sebastian
Background
A number of systems have tried to address the question of coreference in RTE as a preprocessing step prior to inference proper, with most systems using off-the-shelf coreference resolvers such as JavaRap (Qiu et al., 2004) or OpenNLP.
Background
Results were inconclusive, however, with several reports about errors introduced by automatic coreference resolution (Agichtein et al., 2008; Adams et al., 2007).
Background
Specific evaluations of the contribution of coreference resolution yielded both small negative (Bar-Haim et al., 2008) and insignificant positive (Chambers et al., 2007) results.
Conclusions
While semantic knowledge (e.g., from WordNet or Wikipedia) has been used beneficially for coreference resolution (Soon et al., 2001; Ponzetto and Strube, 2006), reference resolution has, to our knowledge, not yet been employed to validate entailment rules’ applicability.
Introduction
E.g., in Example 1 above, knowing that Kennedy was a president can alleviate the need for coreference resolution.
Introduction
Conversely, coreference resolution can often be used to overcome gaps in entailment knowledge.
Motivation and Goals
sented; (2) the off-the-shelf coreference resolution systems, which may not have been robust enough; (3) the limitation to nominal coreference; and (4) overly simple integration of reference information into the inference engines.
Results
Table 2 shows that 77% of all focus terms and 86% of the reference terms were nominal phrases, which justifies their prominent position in work on anaphora and coreference resolution.
Results
This result reaffirms the usefulness of cross-document coreference resolution for inference (Huang et al., 2009).
coreference resolution is mentioned in 9 sentences in this paper.
Kobdani, Hamidreza and Schuetze, Hinrich and Schiehlen, Michael and Kamp, Hans
Abstract
In this paper, we present an unsupervised framework that bootstraps a complete coreference resolution (CoRe) system from word associations mined from a large unlabeled corpus.
Conclusion
In this paper, we have demonstrated the utility of association information for coreference resolution.
Introduction
Coreference resolution (CoRe) is the process of finding markables (noun phrases) referring to the same real world entity or concept.
Introduction
Our experiments are conducted using the MCORE system (“Modular COreference REsolution”). MCORE can operate in three different settings: unsupervised (subsystem A-INF), supervised (subsystem SUCRE (Kobdani and Schutze, 2010)), and self-trained (subsystem UNSEL).
Introduction
SUCRE (“SUpervised Coreference REsolution”) is trained on a labeled corpus (manually or automatically labeled) similar to standard CoRe systems.
Related Work
(2002) used co-training for coreference resolution, a semi-supervised method.
System Architecture
We take a self-training approach to coreference resolution : We first label the corpus using the unsupervised model A-INF and then train the supervised model SUCRE on this automatically labeled training corpus.
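A minimal sketch of that self-training loop, with the unsupervised and supervised subsystems abstracted behind placeholder objects (the method names below are invented for illustration, not MCORE's actual API):

```python
# Sketch of the self-training loop: an unsupervised model labels the raw
# corpus, and a supervised model is then trained on those automatic labels.

def self_train(unlabeled_docs, unsupervised_model, supervised_model):
    # Step 1: label the corpus with the unsupervised subsystem (A-INF in the paper).
    auto_labeled = [(doc, unsupervised_model.predict(doc)) for doc in unlabeled_docs]
    # Step 2: train the supervised subsystem (SUCRE in the paper) on the
    # automatic labels exactly as if they were gold annotations.
    supervised_model.fit(auto_labeled)
    return supervised_model
```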
coreference resolution is mentioned in 8 sentences in this paper.
Bergsma, Shane and Lin, Dekang and Goebel, Randy
Conclusion
A consequence of this research was the creation of It-Bank, a collection of thousands of labelled examples of the pronoun it, which will benefit other coreference resolution researchers.
Conclusion
Another avenue of study will look at the interaction between coreference resolution and machine translation.
Evaluation
Standard coreference resolution data sets annotate all noun phrases that have an antecedent noun phrase in the text.
Introduction
The goal of coreference resolution is to determine which noun phrases in a document refer to the same real-world entity.
Introduction
As part of this task, coreference resolution systems must decide which pronouns refer to preceding noun phrases (called antecedents) and which do not.
Introduction
In sentence (1), it is an anaphoric pronoun referring to some previous noun phrase, like “the sauce” or “an appointment.” In sentence (2), it is part of the idiomatic expression “make it” meaning “succeed.” A coreference resolution system should find an antecedent for the first it but not the second.
Related Work
First of all, research in coreference resolution has shown the benefits of modules for general noun anaphoricity determination (Ng and Cardie, 2002; Denis and Baldridge, 2007).
Results
Notably, the first noun-phrase before the context is the word “software.” There is strong compatibility between the pronoun-parent “install” and the candidate antecedent “software.” In a full coreference resolution system, when the anaphora resolution module has a strong preference to link it to an antecedent (which it should when the pronoun is indeed referential), we can override a weak non-referential probability.
coreference resolution is mentioned in 8 sentences in this paper.
Iida, Ryu and Poesio, Massimo
Introduction
The felicitousness of zero anaphoric reference depends on the referred entity being sufficiently salient, hence this type of data—particularly in Japanese and Italian—played a key role in early work in coreference resolution, e.g., in the development of Centering (Kameyama, 1985; Walker et al., 1994; Di Eugenio, 1998).
Introduction
We integrate the zero anaphora resolver with a coreference resolver and demonstrate that the approach leads to improved results for both Italian and Japanese.
Introduction
In Section 5 we discuss experiments testing that adding our zero anaphora detector and resolver to a full coreference resolver would result in an overall increase in performance.
coreference resolution is mentioned in 6 sentences in this paper.
Lassalle, Emmanuel and Denis, Pascal
Conclusion and perspectives
method to optimize the pairwise model of a coreference resolution system.
Experiments
This is a rather idealized setting but our focus is on comparing various pairwise local models rather than on building a full coreference resolution system.
Introduction
Coreference resolution is the problem of partitioning a sequence of noun phrases (or mentions), as they occur in a natural language text, into a set of referential entities.
Introduction
In this kind of architecture, the performance of the entire coreference system strongly depends on the quality of the local pairwise classifier. Consequently, a lot of research effort on coreference resolution has focused on trying to boost the performance of the pairwise classifier.
Modeling pairs
For instance, some coreference resolution systems process different kinds of anaphors separately, which suggests for example that pairs containing an anaphoric pronoun behave differently from pairs with non-
Modeling pairs
From this formal point of view, the task of coreference resolution consists in fixing of, observing labeled samples {(x_t, y_t)}_{t ∈ TrainSet} and, given partially observed new variables
coreference resolution is mentioned in 6 sentences in this paper.
Mazidi, Karen and Nielsen, Rodney D.
Approach
Coreference resolution, which could help avoid vague question generation, is discussed in Section 5.
Linguistic Challenges
Here we briefly describe three challenges: negation detection, coreference resolution, and verb forms.
Linguistic Challenges
5.2 Coreference Resolution
Linguistic Challenges
Currently, our system does not use any type of coreference resolution.
coreference resolution is mentioned in 5 sentences in this paper.
Andrews, Nicholas and Eisner, Jason and Dredze, Mark
Abstract
In this paper, we propose a model for cross-document coreference resolution that achieves robustness by learning similarity from unlabeled data.
Conclusions
Our primary contribution consists of new modeling ideas, and associated inference techniques, for the problem of cross-document coreference resolution.
Introduction
In this paper, we propose a method for jointly (1) learning similarity between names and (2) clustering name mentions into entities, the two major components of cross-document coreference resolution systems (Baron and Freedman, 2008; Finin et al., 2009; Rao et al., 2010; Singh et al., 2011; Lee et al., 2012; Green et al., 2012).
Overview and Related Work
Cross-document coreference resolution (CDCR) was first introduced by Bagga and Baldwin (1998b).
Overview and Related Work
Name similarity is also an important component of within-document coreference resolution, and efforts in that area bear resemblance to our approach.
coreference resolution is mentioned in 5 sentences in this paper.
Bamman, David and Underwood, Ted and Smith, Noah A.
Data
While previous work uses the Stanford CoreNLP toolkit to identify characters and extract typed dependencies for them, we found this approach to be too slow for the scale of our data (a total of 1.8 billion tokens); in particular, syntactic parsing, with cubic complexity in sentence length, and out-of-the-box coreference resolution (with thousands of potential antecedents) prove to be
Data
3.2 Pronominal Coreference Resolution
Data
While the character clustering stage is essentially performing proper noun coreference resolution, approximately 74% of references to characters in books come in the form of pronouns. To resolve this more difficult class at the scale of an entire book, we train a log-linear discriminative classifier only on the task of resolving pronominal anaphora (i.e., ignoring generic noun phrases such as the paint or the rascal).
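A minimal sketch of resolving a single pronoun with a discriminative scorer over candidate antecedents, in the spirit of the pronominal module described above; the feature function and weights below are illustrative assumptions, not the authors' model.

```python
# Sketch: rank candidate antecedents for one pronoun with a log-linear score
# and return the best candidate plus a softmax confidence.
import math
from typing import Callable, Dict, List, Tuple

def resolve_pronoun(
    pronoun: str,
    candidates: List[str],
    featurize: Callable[[str, str], Dict[str, float]],  # assumed feature function
    weights: Dict[str, float],
) -> Tuple[str, float]:
    scores = [
        sum(weights.get(name, 0.0) * value
            for name, value in featurize(pronoun, cand).items())
        for cand in candidates
    ]
    z = sum(math.exp(s) for s in scores)  # softmax normalizer for a confidence
    best = max(range(len(candidates)), key=scores.__getitem__)
    return candidates[best], math.exp(scores[best]) / z

# Toy usage with a single gender-agreement feature.
weights = {"gender_match": 2.0}
feats = lambda p, c: {"gender_match": 1.0 if (p, c) == ("she", "Elizabeth") else 0.0}
print(resolve_pronoun("she", ["Elizabeth", "Darcy"], feats, weights))
# -> ('Elizabeth', 0.88...)
```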
Introduction
Bamman et al. (2013) explicitly learn character types (or “personas”) in a dataset of Wikipedia movie plot summaries; and entity-centric models form one dominant approach in coreference resolution (Durrett et al., 2013; Haghighi and Klein, 2010).
coreference resolution is mentioned in 4 sentences in this paper.
Luo, Xiaoqiang and Pradhan, Sameer and Recasens, Marta and Hovy, Eduard
Introduction
Coreference resolution aims at identifying natural language expressions (or mentions) that refer to the same entity.
Introduction
A critically important problem is how to measure the quality of a coreference resolution system.
Introduction
Therefore, the identical-mention-set assumption limits BLANC-gold’s applicability when gold mentions are not available, or when one wants to have a single score measuring both the quality of mention detection and coreference resolution.
Original BLANC
When T_k = T_r, Rand Index can be applied directly since coreference resolution reduces to a clustering problem where mentions are partitioned into clusters (entities):
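For reference, the textbook pairwise Rand Index over two partitions of the same mention set can be computed as below; this is only the plain Rand Index, not the BLANC extension the paper goes on to develop.

```python
# Sketch: pairwise Rand Index between a key partition and a response partition
# of the same mention set, i.e. the fraction of mention pairs on which the two
# partitions agree about being in the same cluster or not.
from itertools import combinations

def rand_index(key_clusters, response_clusters):
    key_id = {m: i for i, c in enumerate(key_clusters) for m in c}
    resp_id = {m: i for i, c in enumerate(response_clusters) for m in c}
    agree, total = 0, 0
    for a, b in combinations(sorted(key_id), 2):
        total += 1
        agree += (key_id[a] == key_id[b]) == (resp_id[a] == resp_id[b])
    return agree / total

print(rand_index([{"m1", "m2"}, {"m3"}], [{"m1"}, {"m2", "m3"}]))  # ≈ 0.333
```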
coreference resolution is mentioned in 4 sentences in this paper.
Wang, Lu and Raghavan, Hema and Castelli, Vittorio and Florian, Radu and Cardie, Claire
The Framework
Finally, the postprocessing stage applies coreference resolution and sentence reordering to build the summary.
The Framework
Then we conduct simple query expansion based on the title of the topic and cross-document coreference resolution.
The Framework
Cross-document coreference resolution, semantic role labeling and relation extraction are accomplished via the methods described in Section 5.
coreference resolution is mentioned in 4 sentences in this paper.
Krishnamurthy, Jayant and Mitchell, Tom
Prior Work
Concept discovery is also related to coreference resolution (Ng, 2008; Poon and Domingos, 2008).
Prior Work
The difference between the two problems is that coreference resolution finds noun phrases that refer to the same concept within a specific document.
Prior Work
We think the concepts produced by a system like ConceptResolver could be used to improve coreference resolution by providing prior knowledge about noun phrases that can refer to the same concept.
coreference resolution is mentioned in 4 sentences in this paper.
Sammons, Mark and Vydiswaran, V.G.Vinod and Roth, Dan
Annotation Proposal and Pilot Study
The results confirmed our initial intuition about some phenomena: for example, that coreference resolution is central to RTE, and that detecting the connecting structure is crucial in discerning negative from positive examples.
Introduction
Tasks such as Named Entity and coreference resolution, syntactic and shallow semantic parsing, and information and relation extraction have been identified as worthwhile tasks and pursued by numerous researchers.
NLP Insights from Textual Entailment
ported by their designers were the use of structured representations of shallow semantic content (such as augmented dependency parse trees and semantic role labels); the application of NLP resources such as Named Entity recognizers, syntactic and dependency parsers, and coreference resolvers; and the use of special-purpose ad-hoc modules designed to address specific entailment phenomena the researchers had identified, such as the need for numeric reasoning.
NLP Insights from Textual Entailment
As the example in figure 1 illustrates, most RTE examples require a number of phenomena to be correctly resolved in order to reliably determine the correct label (the Interaction problem); a perfect coreference resolver might as a result yield little improvement on the standard RTE evaluation, even though coreference resolution is clearly required by human readers in a significant percentage of RTE examples.
coreference resolution is mentioned in 4 sentences in this paper.
Raghavan, Preethi and Fosler-Lussier, Eric and Elhadad, Noémie and Lai, Albert M.
Problem Description
Thus, in order to align event sequences, we need to compute scores corresponding to cross-narrative medical event coreference resolution and cross-narrative temporal relations.
Problem Description
4 Cross-Narrative Coreference Resolution and Temporal Relation Learning
Problem Description
Coreference resolution achieves 71.5% precision and 82.3% recall.
coreference resolution is mentioned in 3 sentences in this paper.
Durrett, Greg and Hall, David and Klein, Dan
Abstract
Efficiently incorporating entity-level information is a challenge for coreference resolution systems due to the difficulty of exact inference over partitions.
Conclusion
Our transitive system is more effective at using properties than a pairwise system and a previous entity-level system, and it achieves performance comparable to that of the Stanford coreference resolution system, the winner of the CoNLL 2011 shared task.
Introduction
The inclusion of entity-level features has been a driving force behind the development of many coreference resolution systems (Luo et al., 2004; Rahman and Ng, 2009; Haghighi and Klein, 2010; Lee et al., 2011).
coreference resolution is mentioned in 3 sentences in this paper.
Cheung, Jackie Chi Kit and Penn, Gerald
Introduction
We are not aware of similarly well-tested, publicly available coreference resolution systems that handle all types of anaphora for German.
Introduction
We considered adapting the BART coreference resolution toolkit (Versley et al., 2008) to German, but a number of language-dependent decisions regarding preprocessing, feature engineering, and the learning paradigm would need to be made in order to achieve reasonable performance comparable to state-of-the-art English coreference resolution systems.
Introduction
The model also shows promise for other discourse-related tasks such as coreference resolution and discourse parsing.
coreference resolution is mentioned in 3 sentences in this paper.
Ji, Heng and Grishman, Ralph
Conclusion and Future Work
The aggregation approach described here can be easily extended to improve relation detection and coreference resolution (two argument mentions referring to the same role of related events are likely to corefer).
Related Work
Almost all the current event extraction systems focus on processing single documents and, except for coreference resolution, operate a sentence at a time (Grishman et al., 2005; Ahn, 2006; Hardy et al., 2006).
Task and Baseline System
In this paper we don’t consider event mention coreference resolution and so don’t distinguish event mentions and events.
coreference resolution is mentioned in 3 sentences in this paper.