Introduction | The authors remark that extracted sentences with VFs that are referentially related to previous context (e.g., they contain a coreferential noun phrase or a discourse relation like “therefore”) are reinserted at higher accuracies.
Introduction | The main focus of that work, however, was to adapt the model for use in a low-resource situation when perfect coreference information is not available. |
Introduction | Table 3: Accuracy of automatic annotations of noun phrases with coreferents.
Abstract | This paper introduces a novel sentence processing model that consists of a parser augmented with a probabilistic logic-based model of coreference resolution, which allows us to simulate how context interacts with syntax in a reading task. |
Introduction | To our knowledge, this is the first broad-coverage sentence processing model that takes the effects of coreference and discourse into account.
Introduction | There are three main parts of the model: a syntactic processor, a coreference resolution system, and a simple pragmatics processor which computes certain limited forms of discourse coherence. |
Introduction | The coreference resolution system is implemented |
Model | The model comprises three parts: a parser, a coreference resolution system, and a pragmatics subsystem. |
Model | However, as the coreference processor takes trees as input, we must unpack parses before resolving referential ambiguity.
Model | the agent), get the -LGS label; (iv) non-recursive NPs are renamed NPbase (the coreference system treats each NPbase as a markable). |
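The NPbase renaming in step (iv) can be sketched as a simple tree rewrite. The nested-list tree encoding and helper names below are illustrative assumptions, not from the paper; the rule illustrated is that an NP is renamed NPbase exactly when it dominates no other NP, so the coreference system can treat it as a markable.

```python
# Trees are nested lists of the form [label, child1, child2, ...];
# leaves are plain strings. Encoding and names are illustrative.

def contains_np(tree):
    """True if any proper descendant of `tree` is labeled NP."""
    if isinstance(tree, str):
        return False
    return any(
        (not isinstance(c, str) and (c[0] == "NP" or contains_np(c)))
        for c in tree[1:]
    )

def mark_np_base(tree):
    """Rename every NP that dominates no other NP to NPbase (in place)."""
    if isinstance(tree, str):
        return tree
    if tree[0] == "NP" and not contains_np(tree):
        tree[0] = "NPbase"
    for child in tree[1:]:
        mark_np_base(child)
    return tree

tree = ["S",
        ["NP", ["DT", "the"], ["NN", "company"]],
        ["VP", ["VBD", "sold"],
               ["NP", ["NP", ["PRP$", "its"], ["NN", "unit"]],
                      ["PP", ["IN", "in"], ["NP", ["NNP", "Milan"]]]]]]
mark_np_base(tree)
# The three innermost NPs become NPbase; the recursive NP keeps its label.
```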
Conclusions and future work | Although we consistently observed development gains from using automatic coreference resolution, this process creates errors that need to be studied more closely. |
Discussion | First, the system identified coreferent mentions of Olivetti that participated in exporting and supplying events (not shown). |
Implicit argument identification | A candidate constituent c will often form a coreference chain with other constituents in the discourse. |
Implicit argument identification | When determining whether 0 is the iargn of investment, one can draw evidence from other mentions in 0’s coreference chain.
Implicit argument identification | Thus, the unit of classification for a candidate constituent c is the three-tuple (p, iargn, c′), where c′ is a coreference chain comprising 0 and its coreferent constituents.3 We defined a binary classification function Pr(+ | (p, iargn, c′)) that predicts the probability that the entity referred to by c fills the missing argument position iargn of predicate instance p. In the remainder of this paper, we will refer to c as the primary filler, differentiating it from other mentions in the coreference chain c′.
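The classification unit described above can be sketched as follows. The feature function, lookup table, and weights are hypothetical stand-ins (the actual model uses a far richer feature set); the point illustrated is that Pr(+ | (p, iargn, c′)) aggregates evidence over every mention in the coreference chain, not just the primary filler.

```python
import math

def mention_features(predicate, arg_pos, mention):
    # Toy feature: does the mention head match known fillers for this
    # predicate/position? (KNOWN_FILLERS is a hypothetical lookup table.)
    KNOWN_FILLERS = {("investment", "iarg0"): {"fund", "firm", "investor"}}
    head = mention.split()[-1].lower()
    return [1.0 if head in KNOWN_FILLERS.get((predicate, arg_pos), set()) else 0.0]

def score(predicate, arg_pos, chain, weights=(2.0,), bias=-1.0):
    """Pr(+ | (p, iargn, c')): aggregate per-mention evidence over the chain."""
    best = max(mention_features(predicate, arg_pos, m)[0] for m in chain)
    z = bias + weights[0] * best
    return 1.0 / (1.0 + math.exp(-z))  # logistic link

chain = ["it", "the troubled firm"]   # primary filler plus a coreferent mention
p_fill = score("investment", "iarg0", chain)
# The pronoun alone carries no evidence, but its coreferent mention does.
```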
Related work | (2005) suggested approaches to implicit argument identification based on observed coreference patterns; however, the authors did not implement and evaluate such methods. |
Related work | analysis of naturally occurring coreference patterns to aid implicit argument identification. |
Abstract | Discourse references, notably coreference and bridging, play an important role in many text understanding applications, but their impact on textual entailment is yet to be systematically understood. |
Background | The simplest form of information that discourse provides is coreference, i.e., information that two linguistic expressions refer to the same entity or event.
Background | Coreference is particularly important for processing pronouns and other anaphoric expressions, such as he in Example 1. |
Background | While coreference indicates equivalence, bridging points to the existence of a salient semantic relation between two distinct entities or events. |
Introduction | The detection and resolution of discourse references such as coreference and bridging anaphora play an important role in text understanding applications, like question answering and information extraction. |
Introduction | The understanding that the second sentence of the text entails the hypothesis draws on two coreference relationships, namely that he is Oswald, and |
Introduction | However, the utilization of discourse information for such inferences has so far been limited mainly to the substitution of nominal coreferents, while many aspects of the interface between discourse and semantic inference remain unexplored.
Abstract | Focus, coherence and referential clarity are best evaluated by a class of features measuring local coherence on the basis of cosine similarity between sentences, coreference information, and summarization specific features. |
Indicators of linguistic quality | This class of linguistic quality indicators is a combination of factors related to coreference, adjacent sentence similarity, and summary-specific context of surface cohesive devices.
Indicators of linguistic quality | Coreference: Steinberger et al. (2007) compare the coreference chains in input documents and in summaries in order to locate potential problems.
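One way this chain comparison can be operationalized, as a rough sketch with an illustrative encoding: flag summary pronouns whose document chain contributes no nominal mention to the summary, since such pronouns are likely dangling references.

```python
# Chains are lists of mention strings; the summary is a set of the mentions
# that survived extraction. Both encodings are illustrative assumptions.

PRONOUNS = {"he", "she", "it", "they", "his", "her", "its", "their"}

def dangling_pronouns(doc_chains, summary_mentions):
    """Return summary pronouns whose chain has no nominal mention in the summary."""
    problems = []
    for chain in doc_chains:
        in_summary = [m for m in chain if m in summary_mentions]
        nominal = [m for m in in_summary if m.lower() not in PRONOUNS]
        for m in in_summary:
            if m.lower() in PRONOUNS and not nominal:
                problems.append(m)
    return problems

chains = [["Olivetti", "the company", "it"], ["the ministry", "it"]]
summary = {"it", "the company"}
# "it" is covered by "the company" in the first chain; the ministry's
# chain contributes no nominal antecedent, so its pronoun is flagged.
```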
Results and discussion | For all four other questions, the best feature set is Continuity, which is a combination of summarization-specific features, coreference features, and cosine similarity of adjacent sentences.
Results and discussion | We now investigate to what extent each of its components—summary-specific features, coreference , and cosine similarity between adjacent sentences—contribute to performance. |
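The cosine-similarity component can be sketched as below; the bag-of-words representation and regex tokenization are deliberate simplifications of whatever sentence representation the system actually uses.

```python
import math
import re
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two sparse count vectors (Counters)."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def adjacent_similarities(sentences):
    """Local coherence proxy: similarity of each adjacent sentence pair."""
    vecs = [Counter(re.findall(r"\w+", s.lower())) for s in sentences]
    return [cosine(vecs[i], vecs[i + 1]) for i in range(len(vecs) - 1)]

sims = adjacent_similarities([
    "The company reported strong earnings.",
    "Earnings grew despite weak demand.",
    "Penguins live in Antarctica.",
])
# The first pair shares vocabulary; the second pair shares none.
```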
Results and discussion | However, the coreference features do not seem to contribute much towards predicting summary linguistic quality. |
Annotation Proposal and Pilot Study | From the tables it is apparent that good performance on a range of phenomena in our inference model is likely to have a significant effect on RTE results, with coreference being deemed essential to the inference process for 35% of examples; a number of other phenomena are sufficiently well represented to merit near-future attention (assuming that RTE systems do not already handle these phenomena, a question we address in section 4).
Annotation Proposal and Pilot Study |
Phenomenon            Occurrence   Agreement
coreference             35.00%       0.698
simple rewrite rule     32.62%       0.580
lexical relation        25.00%       0.738
implicit relation       23.33%       0.633
factoid                 15.00%       0.412
parent-sibling          11.67%       0.500
genitive relation        9.29%       0.608
nominalization           8.33%       0.514
event chain              6.67%       0.589
coerced relation         6.43%       0.540
passive-active           5.24%       0.583
numeric reasoning        4.05%       0.847
spatial reasoning        3.57%       0.720
Annotation Proposal and Pilot Study | The results confirmed our initial intuition about some phenomena: for example, that coreference resolution is central to RTE, and that detecting the connecting structure is crucial in discerning negative from positive examples. |
Introduction | Tasks such as Named Entity and coreference resolution, syntactic and shallow semantic parsing, and information and relation extraction have been identified as worthwhile tasks and pursued by numerous researchers. |
Introduction | relevant NLP tasks such as NER, Coreference, parsing, data acquisition and application, and others.
NLP Insights from Textual Entailment | reported by their designers were the use of structured representations of shallow semantic content (such as augmented dependency parse trees and semantic role labels); the application of NLP resources such as Named Entity recognizers, syntactic and dependency parsers, and coreference resolvers; and the use of special-purpose ad-hoc modules designed to address specific entailment phenomena the researchers had identified, such as the need for numeric reasoning.
NLP Insights from Textual Entailment | As the example in figure 1 illustrates, most RTE examples require a number of phenomena to be correctly resolved in order to reliably determine the correct label (the Interaction problem); a perfect coreference resolver might as a result yield little improvement on the standard RTE evaluation, even though coreference resolution is clearly required by human readers in a significant percentage of RTE examples. |
Extracting Conversational Networks from Literature | We then clustered the noun phrases into coreferents for the same entity (person or organization). |
Extracting Conversational Networks from Literature | For each named entity, we generate variations on the name that we would expect to see in a coreferent.
Extracting Conversational Networks from Literature | For each named entity, we compile a list of other named entities that may be coreferents, either because they are identical or because one is an expected variation on the other.
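The variation step can be sketched as below; the specific variation rules (first name alone, last name alone, title plus last name) and the title list are illustrative assumptions rather than the paper's exact inventory.

```python
# Hypothetical name-variation rules for matching coreferent named entities.
TITLES = ["Mr.", "Mrs.", "Miss", "Dr."]

def name_variations(full_name):
    """Surface forms a coreferent mention of this name might plausibly take."""
    parts = full_name.split()
    variations = {full_name}
    if len(parts) >= 2:
        first, last = parts[0], parts[-1]
        variations.update({first, last})
        variations.update(f"{t} {last}" for t in TITLES)
    return variations

def possible_coreferents(name_a, name_b):
    """Two names may corefer if identical or one is a variation of the other."""
    return name_b in name_variations(name_a) or name_a in name_variations(name_b)
```

For example, "Miss Bennet" and "Elizabeth" would both be admitted as possible coreferents of "Elizabeth Bennet", while "Mr. Darcy" would not.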
Cross-event Approach | For every event, we collect its trigger and event type; for every argument, we use coreference information and record every entity and its role(s) in events of a certain type. |
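The bookkeeping described above can be sketched as a role profile per coreference-resolved entity; the event encoding, entity IDs, and type names below are illustrative assumptions.

```python
from collections import defaultdict

def build_role_profiles(events):
    """events: dicts with 'type', 'trigger', and 'args' = [(entity_id, role), ...].
    Returns {entity_id: {event_type: {roles}}}, the cross-event role record."""
    profiles = defaultdict(lambda: defaultdict(set))
    for ev in events:
        for entity_id, role in ev["args"]:
            profiles[entity_id][ev["type"]].add(role)
    return profiles

events = [
    {"type": "Transaction.Transfer-Ownership", "trigger": "sold",
     "args": [("E1", "Seller"), ("E2", "Buyer")]},
    {"type": "Transaction.Transfer-Ownership", "trigger": "acquired",
     "args": [("E2", "Buyer")]},
]
profiles = build_role_profiles(events)
# Entity E2 consistently plays Buyer across events of this type.
```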
Task Description | (coreferential) entity mentions.
Task Description | Event extraction depends on previous phases: entity mention classification and coreference.
Task Description | Note that entity mentions that share the same EntityID are coreferential and treated as the same object. |
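Under this convention, mentions can be grouped into entities directly; the (EntityID, surface form) encoding below is an illustrative assumption.

```python
from collections import defaultdict

def group_by_entity(mentions):
    """mentions: (entity_id, surface_form) pairs -> {entity_id: [forms]}.
    Mentions sharing an EntityID are coreferential and become one object."""
    entities = defaultdict(list)
    for entity_id, form in mentions:
        entities[entity_id].append(form)
    return dict(entities)

mentions = [("E7", "Barry Diller"), ("E7", "he"),
            ("E9", "Vivendi"), ("E7", "Diller")]
entities = group_by_entity(mentions)
# All three E7 mentions are treated as the same object.
```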
Introduction | The task of identifying reference relations, including anaphora and coreference, within texts has received a great deal of attention in natural language processing, from both theoretical and empirical perspectives.
Introduction | In these data sets, coreference relations are defined as a limited version of typical coreference; this generally means that only relations where expressions refer to the same named entities are addressed, because this makes the coreference resolution task more information extraction-oriented.
Introduction | In other words, the coreference task as defined by MUC and ACE is geared toward only identifying coreference relations anchored to an entity within the text. |
Reference Resolution using Extra-linguistic Information | These features have been examined by approaches to anaphora or coreference resolution (Soon et al., 2001; Ng and Cardie, 2002, etc.) |