Abstract | We present an ILP-based model of zero anaphora detection and resolution that builds on the model for joint determination of anaphoricity and coreference proposed by Denis and Baldridge (2007), but revises and extends it into a three-way ILP problem that also incorporates subject detection. |
Introduction | The felicitousness of zero anaphoric reference depends on the referred entity being sufficiently salient; hence this type of data, particularly in Japanese and Italian, played a key role in early work in coreference resolution, e.g., in the development of Centering (Kameyama, 1985; Walker et al., 1994; Di Eugenio, 1998). |
Introduction | (2010)), and their use in competitions such as SemEval-2010 Task 1 on Multilingual Coreference (Recasens et al., 2010), is leading to a renewed interest in zero anaphora resolution, particularly in light of the mediocre results obtained on zero anaphors by most systems participating in SemEval. |
Introduction | We integrate the zero anaphora resolver with a coreference resolver and demonstrate that the approach leads to improved results for both Italian and Japanese. |
Abstract | In this paper, we present an unsupervised framework that bootstraps a complete coreference resolution (CoRe) system from word associations mined from a large unlabeled corpus. |
Abstract | We show that word associations are useful for CoRe: e.g., the strong association between Obama and President is an indicator of likely coreference.
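One way such an association can be made concrete is pointwise mutual information (PMI) over co-occurrence counts from an unlabeled corpus. The sketch below is illustrative only: the counts are toy numbers, and the choice of PMI is an assumption, not a detail taken from the paper.

```python
from math import log

def pmi(count_pair, count_a, count_b, total):
    """Pointwise mutual information: log( P(a,b) / (P(a) * P(b)) )."""
    return log((count_pair / total) / ((count_a / total) * (count_b / total)))

# Toy counts (hypothetical): "Obama" and "President" co-occur far more
# often than chance would predict, so the pair scores well above zero,
# flagging it as a likely-coreferent pair.
score = pmi(count_pair=50, count_a=100, count_b=200, total=100_000)
```

A score near zero would indicate co-occurrence at chance level; strongly negative scores indicate words that avoid each other.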
Introduction | Coreference resolution (CoRe) is the process of finding markables (noun phrases) referring to the same real world entity or concept. |
Introduction | Until recently, most approaches tried to solve the problem by binary classification, where the probability of a pair of markables being coreferent is estimated from labeled data. |
Introduction | Alternatively, a model that determines whether a markable is coreferent with a preceding cluster can be used. |
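The pairwise setup described above can be sketched as follows: a classifier scores antecedent-anaphor pairs, and a greedy closest-first pass links each mention to the nearest preceding mention whose score clears a threshold. The scoring function here is a hand-coded stand-in for a trained classifier, and the feature, scores, and threshold are all illustrative assumptions.

```python
def mention_pair_score(antecedent, anaphor):
    # Stand-in for a learned probability estimate; a real system would use
    # many features (string match, distance, gender/number agreement, ...).
    return 0.9 if antecedent["head"] == anaphor["head"] else 0.1

def resolve(mentions, threshold=0.5):
    """Greedy closest-first linking: scan antecedents right-to-left and
    attach each mention to the first one scoring above the threshold."""
    links = {}
    for j in range(len(mentions)):
        for i in range(j - 1, -1, -1):
            if mention_pair_score(mentions[i], mentions[j]) > threshold:
                links[j] = i
                break
    return links

mentions = [{"head": "Obama"}, {"head": "president"}, {"head": "Obama"}]
links = resolve(mentions)  # the third mention links back to the first
```

The cluster-based alternative mentioned above would instead score the anaphor against each preceding cluster as a whole, letting features aggregate over all of a cluster's mentions.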
Related Work | We use the term semi-supervised for approaches that use some amount of human-labeled coreference pairs. |
Related Work | (2002) used co-training for coreference resolution, a semi-supervised method. |
Abstract | Cross-document coreference, the task of grouping all the mentions of each entity in a document collection, arises in information extraction and automated knowledge base construction.
Abstract | To solve the problem we propose two ideas: (a) a distributed inference technique that uses parallelism to enable large scale processing, and (b) a hierarchical model of coreference that represents uncertainty over multiple granularities of entities to facilitate more effective approximate inference.
Introduction | Given a collection of mentions of entities extracted from a body of text, coreference or entity resolution consists of clustering the mentions such that two mentions belong to the same cluster if and only if they refer to the same entity. |
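The "same cluster if and only if same entity" formulation implies transitivity, so pairwise coreference decisions must be merged into consistent clusters. A minimal sketch using union-find (a standard device for this, not necessarily the authors' inference procedure):

```python
def cluster(n_mentions, coref_pairs):
    """Merge pairwise coreference decisions into entity clusters via
    union-find, enforcing transitivity (if i~j and j~k, then i~k)."""
    parent = list(range(n_mentions))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for a, b in coref_pairs:
        parent[find(a)] = find(b)

    groups = {}
    for m in range(n_mentions):
        groups.setdefault(find(m), []).append(m)
    return sorted(groups.values())

# Mentions 0 and 2 corefer, and 2 and 4 corefer, so all three are merged.
entities = cluster(5, [(0, 2), (2, 4)])
```

Each returned list is one entity; singleton lists are mentions with no coreferent partner.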
Introduction | While significant progress has been made in within-document coreference (Ng, 2005; Culotta et al., 2007; Haghighi and Klein, 2007; Bengtson and Roth, 2008; Haghighi and Klein, 2009; Haghighi and Klein, 2010), the larger problem of cross-document coreference has not received as much attention.
Learning Templates from Raw Text | This paper extends this intuition by introducing a new vector-based approach to coreference similarity. |
Learning Templates from Raw Text | In the sentence "he ran and then he fell", the subjects of run and fall corefer, and so they likely belong to the same scenario-specific semantic role.
Learning Templates from Raw Text | For instance, arguments of the relation go_off:s were seen coreferring with mentions in plant:o, set_off:o, and injure:s. We represent go_off:s as a vector of these relation counts, calling this its coref vector representation.
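The coref vector idea can be sketched as follows: count, for each relation slot, how often its arguments corefer with arguments of every other slot, then compare slots by cosine similarity. The observations below are toy data using the slot names from the example above, and cosine is one plausible similarity choice, not necessarily the paper's.

```python
from collections import Counter
from math import sqrt

def coref_vectors(observations):
    """For each relation slot, count the slots its arguments were
    observed coreferring with; each Counter is one coref vector."""
    vecs = {}
    for slot_a, slot_b in observations:
        vecs.setdefault(slot_a, Counter())[slot_b] += 1
        vecs.setdefault(slot_b, Counter())[slot_a] += 1
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(c * v[k] for k, c in u.items())  # Counter: missing keys -> 0
    norm = lambda w: sqrt(sum(c * c for c in w.values()))
    return dot / (norm(u) * norm(v))

obs = [("go_off:s", "plant:o"), ("go_off:s", "set_off:o"),
       ("explode:s", "plant:o")]  # toy coreference observations
vecs = coref_vectors(obs)
sim = cosine(vecs["go_off:s"], vecs["explode:s"])
```

Slots that corefer with similar sets of other slots end up with similar vectors, so high cosine similarity suggests they fill the same scenario-specific role.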
Prior Work | Concept discovery is also related to coreference resolution (Ng, 2008; Poon and Domingos, 2008). |
Prior Work | The difference between the two problems is that coreference resolution finds noun phrases that refer to the same concept within a specific document. |
Prior Work | We think the concepts produced by a system like ConceptResolver could be used to improve coreference resolution by providing prior knowledge about noun phrases that can refer to the same concept. |
Experiments | In that work, we also highlight that ACE annotators rarely duplicate a relation link for coreferent mentions. |
Experiments | For instance, assume mentions m_i, m_j, and m_k are in the same sentence, mentions m_i and m_j are coreferent, and the annotators tag the mention pair (m_j, m_k) with a particular relation r. The annotators will rarely duplicate the same (implicit) relation r for the pair (m_i, m_k).
Experiments | Of course, using this scoring method requires coreference information, which is available in the ACE data. |
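One plausible reading of such a scoring method (a sketch under assumptions, not necessarily the authors' exact procedure) is to propagate each annotated relation across gold coreference clusters before scoring, so that a relation predicted on any coreferent mention of an argument counts as a match:

```python
def expand_relations(relations, clusters):
    """Propagate each (arg1, arg2, relation) triple to all mentions
    coreferent with its arguments, using gold coreference clusters."""
    cluster_of = {}
    for cl in clusters:
        for m in cl:
            cluster_of[m] = tuple(cl)
    expanded = set()
    for a, b, rel in relations:
        for a2 in cluster_of.get(a, (a,)):
            for b2 in cluster_of.get(b, (b,)):
                expanded.add((a2, b2, rel))
    return expanded

# The annotators tagged only (m_j, m_k, r), but m_i and m_j corefer,
# so the implicit (m_i, m_k, r) is added to the gold set as well.
gold = expand_relations([("mj", "mk", "r")], [["mi", "mj"]])
```

Scoring predictions against the expanded set avoids penalizing a system for attaching the relation to a different, but coreferent, mention than the one the annotators happened to tag.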