Abstract | In this paper, we propose a model for cross-document coreference resolution that achieves robustness by learning similarity from unlabeled data. |
Introduction | even identical—do not necessarily corefer.
Introduction | In this paper, we propose a method for jointly (1) learning similarity between names and (2) clustering name mentions into entities, the two major components of cross-document coreference resolution systems (Baron and Freedman, 2008; Finin et al., 2009; Rao et al., 2010; Singh et al., 2011; Lee et al., 2012; Green et al., 2012). |
Introduction | Such creative spellings are especially common on Twitter and other social media; we give more examples of coreferents learned by our model in Section 8.4. |
Overview and Related Work | Cross-document coreference resolution (CDCR) was first introduced by Bagga and Baldwin (1998b). |
Overview and Related Work | Most approaches since then are based on the intuitions that coreferent names tend to have “similar” spellings and tend to appear in “similar” contexts. |
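These two intuitions can be made concrete with a toy illustration (not any of the cited systems' actual models): a character-level matcher for spelling similarity and token overlap for context similarity. The names and contexts below are invented examples.

```python
# Toy illustration of the two similarity signals; the names and contexts
# below are invented examples, not data from any cited paper.
from difflib import SequenceMatcher

def spelling_sim(a, b):
    # Character-level similarity as a stand-in for "similar spellings".
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def context_sim(ctx_a, ctx_b):
    # Jaccard token overlap as a stand-in for "similar contexts".
    wa, wb = set(ctx_a.lower().split()), set(ctx_b.lower().split())
    return len(wa & wb) / len(wa | wb)

print(round(spelling_sim("Obama", "0bama"), 2))  # creative spelling scores high
print(round(context_sim("met with the president", "spoke to the president"), 2))
```

Real systems replace both stand-ins with learned similarity functions, but the decomposition into a spelling signal and a context signal is the same.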
Overview and Related Work | We adopt a “phylogenetic” generative model of coreference.
Abstract | We investigate different ways of learning structured perceptron models for coreference resolution when using nonlocal features and beam search. |
Background | Coreference resolution is the task of grouping referring expressions (or mentions) in a text into disjoint clusters such that all mentions in a cluster refer to the same entity. |
Background | In recent years, much work on coreference resolution has been devoted to increasing the expressivity of the classical mention-pair model, in which each coreference classification decision is limited to information about the two mentions that make up a pair.
Background | This shortcoming has been addressed by entity-mention models, which relate a candidate mention to the full cluster of mentions predicted to be coreferent so far (for more discussion on the model types, see, e.g., (Ng, 2010)). |
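The contrast between the two model types can be sketched as follows; the scorers and the greedy left-to-right clustering loop are invented stand-ins, not any cited system's implementation:

```python
# Toy contrast between the mention-pair and entity-mention model types;
# the scorers and greedy clustering loop are invented stand-ins.

def mention_pair_score(m1, m2):
    # Mention-pair model: the decision sees only two mentions at a time.
    return 1.0 if m1["head"] == m2["head"] else 0.0

def entity_mention_score(mention, cluster):
    # Entity-mention model: the decision sees the full cluster so far.
    return sum(mention_pair_score(mention, m) for m in cluster) / len(cluster)

mentions = [{"head": "Obama"}, {"head": "he"}, {"head": "Obama"}]
clusters = []
for m in mentions:  # greedy left-to-right clustering
    best = max(clusters, key=lambda c: entity_mention_score(m, c), default=None)
    if best is not None and entity_mention_score(m, best) > 0.5:
        best.append(m)
    else:
        clusters.append([m])
print(len(clusters))
```

The key difference is only the second argument of the scorer: a single candidate antecedent versus the whole predicted cluster.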
Introduction | We show that for the task of coreference resolution the straightforward combination of beam search and early update (Collins and Roark, 2004) falls short of more limited feature sets that allow for exact search. |
Introduction | Coreferent mentions in a document are usually annotated as sets of mentions, where all mentions in a set are coreferent.
Introduction | This approach provides a powerful boost to the performance of coreference resolvers, but we find that it does not combine well with the LaSO learning strategy. |
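A minimal sketch of the early-update strategy (Collins and Roark, 2004) with beam search for a structured perceptron follows; the binary decisions and the (item, label) feature map are toy stand-ins, not the paper's coreference features:

```python
# Hedged sketch of beam search with early update (Collins and Roark, 2004)
# for a structured perceptron; the binary decisions and (item, label)
# feature map are toy stand-ins, not the paper's coreference features.

def beam_search_early_update(items, gold, weights, beam_size=2):
    """One training sequence; mutates and returns the weight dict."""
    beam = [((), 0.0)]
    for i, x in enumerate(items):
        cands = []
        for seq, score in beam:
            for y in (0, 1):
                cands.append((seq + (y,), score + weights.get((x, y), 0.0)))
        cands.sort(key=lambda c: -c[1])
        beam = cands[:beam_size]
        gold_prefix = tuple(gold[: i + 1])
        if all(seq != gold_prefix for seq, _ in beam):
            # Gold fell off the beam: update on the violated prefix and stop.
            pred = beam[0][0]
            for j, xj in enumerate(items[: i + 1]):
                weights[(xj, gold[j])] = weights.get((xj, gold[j]), 0.0) + 1.0
                weights[(xj, pred[j])] = weights.get((xj, pred[j]), 0.0) - 1.0
            return weights
    return weights
```

Stopping at the first violation is what distinguishes early update from a standard perceptron update on the full sequence; LaSO-style learning instead repairs the beam and continues.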
Introduction | From a system-to-system perspective, wikification has demonstrated its usefulness in a variety of applications, including coreference resolution (Ratinov and Roth, 2012) and classification (Vitale et al., 2012). |
Principles and Approach Overview | Principle 2 (Coreference): Two coreferential mentions should be linked to the same concept. |
Principles and Approach Overview | For example, if we know “nc” and “North Carolina” are coreferential, then they should both be linked to North Carolina.
Relational Graph Construction | In this subsection, we introduce the concept of a meta path, which will be used to detect coreference (Section 4.3) and semantic relatedness relations (Section 4.4).
Relational Graph Construction | 4.3 Coreference |
Relational Graph Construction | A coreference relation (Principle 2) usually occurs across multiple tweets due to the highly redundant information in Twitter. |
Evaluation and Discussion | We first applied the semantic parser and coreference classifier as described in Section 4.1 to process each dialogue, and then built a graph representation based on the automatic processing results at the end of the dialogue. |
Probabilistic Labeling for Reference Grounding | Our system first processes the data using automatic semantic parsing and coreference resolution. |
Probabilistic Labeling for Reference Grounding | We then perform pairwise coreference resolution on the discourse entities to identify discourse relations between entities from different utterances.
Probabilistic Labeling for Reference Grounding | Based on the semantic parsing and pairwise coreference resolution results, our system further builds a graph representation to capture the collaborative discourse and formulate referential grounding as a probabilistic labeling problem, as described next. |
Abstract | BLANC is a link-based coreference evaluation metric for measuring the quality of coreference systems on gold mentions. |
Introduction | Coreference resolution aims at identifying natural language expressions (or mentions) that refer to the same entity. |
Introduction | A critically important problem is how to measure the quality of a coreference resolution system. |
Introduction | In particular, MUC measures the degree of agreement between key coreference links (i.e., links among mentions within entities) and response coreference links, while non-coreference links (i.e., links formed by mentions from different entities) are not explicitly taken into account. |
Notations | Let Ck and Cr be the sets of coreference links formed by mentions in Tk and Tr:
Notations | Note that when an entity consists of a single mention, its coreference link set is empty. |
Original BLANC | When Tk = Tr, the Rand Index can be applied directly, since coreference resolution reduces to a clustering problem where mentions are partitioned into clusters (entities):
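A small illustration of these coreference link sets over gold mentions (the mention names and entity partitions are invented):

```python
# Small illustration of coreference link sets over gold mentions; mention
# names and partitions are invented. Note that the singleton entity
# contributes an empty coreference link set, as stated above.
from itertools import combinations

def coref_links(entities):
    # All pairs of mentions within the same entity.
    links = set()
    for entity in entities:
        links |= {frozenset(p) for p in combinations(sorted(entity), 2)}
    return links

key = [{"m1", "m2", "m3"}, {"m4"}]       # key entities
response = [{"m1", "m2"}, {"m3", "m4"}]  # response entities
Ck, Cr = coref_links(key), coref_links(response)
print(len(Ck), len(Cr), len(Ck & Cr))
```

The intersection Ck ∩ Cr is the set of correctly predicted coreference links; BLANC averages an F-score over these coreference links with one over the non-coreference links.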
Abstract | The cross-narrative coreference and temporal relation weights used in both these approaches are learned from a corpus of clinical narratives. |
Introduction | These cross-narrative coreferences act as important anchors for reasoning with information across narratives. |
Introduction | We leverage cross-narrative coreference information along with confident cross-narrative temporal relation predictions and learn to align and temporally order medical event sequences across longitudinal clinical narratives. |
Introduction | The cross-narrative coreference and temporal relation scores used in both these approaches are learned from a corpus of patient narratives from The Ohio State University Wexner Medical Center. |
Problem Description | e1start = e2start and e1stop = e2stop, when e1 and e2 corefer.
Problem Description | Thus, in order to align event sequences, we need to compute scores corresponding to cross-narrative medical event coreference resolution and cross-narrative temporal relations. |
Problem Description | 4 Cross-Narrative Coreference Resolution and Temporal Relation Learning |
Related Work | We use dynamic programming to compute the best alignment, given the temporal and coreference information between medical events across these sequences. |
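The alignment step can be sketched as Needleman-Wunsch-style dynamic programming; the scoring function below is a toy stand-in for the learned coreference and temporal scores, and the event names are invented:

```python
# Sketch of dynamic-programming alignment of two medical event sequences;
# the score function is a toy stand-in for the learned coreference and
# temporal scores, and the event names are invented.

def align(seq_a, seq_b, score, gap=-1.0):
    n, m = len(seq_a), len(seq_b)
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * gap
    for j in range(1, m + 1):
        dp[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = max(
                dp[i - 1][j - 1] + score(seq_a[i - 1], seq_b[j - 1]),  # match
                dp[i - 1][j] + gap,  # event only in narrative A
                dp[i][j - 1] + gap,  # event only in narrative B
            )
    return dp[n][m]

toy_score = lambda a, b: 2.0 if a == b else -2.0
print(align(["admit", "ct", "discharge"], ["admit", "discharge"], toy_score))
```

Backpointers (omitted here for brevity) would recover the aligned event pairs rather than just the best alignment score.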
Approach | Opinion Coreference. Sentences in a discourse can be linked by many types of coherence relations (Jurafsky et al., 2000).
Approach | Coreference is one of the commonly used relations in written text. |
Approach | In this work, we explore coreference in the context of sentence-level sentiment analysis. |
Introduction | (2008) defines coreference relations on opinion targets and applies them to constrain the polarity of sentences. |
Data | While previous work uses the Stanford CoreNLP toolkit to identify characters and extract typed dependencies for them, we found this approach to be too slow for the scale of our data (a total of 1.8 billion tokens); in particular, syntactic parsing, with cubic complexity in sentence length, and out-of-the-box coreference resolution (with thousands of potential antecedents) prove to be prohibitively expensive.
Data | It includes the following components for clustering character name mentions, resolving pronominal coreference, and reducing vocabulary dimensionality.
Data | 3.2 Pronominal Coreference Resolution |
Introduction | (2013) explicitly learn character types (or “personas”) in a dataset of Wikipedia movie plot summaries; and entity-centric models form one dominant approach in coreference resolution (Durrett et al., 2013; Haghighi and Klein, 2010). |
Approach | Coreference resolution, which could help avoid vague question generation, is discussed in Section 5. |
Linguistic Challenges | Here we briefly describe three challenges: negation detection, coreference resolution, and verb forms. |
Linguistic Challenges | 5.2 Coreference Resolution |
Linguistic Challenges | Currently, our system does not use any type of coreference resolution. |
Background | Throughout this paper we refer to a relation mention as a relation, since we do not consider relation mention coreference.
Experiments | Roth (2011), we excluded the DISC relation type, and removed relations in the system output which are implicitly correct via coreference links for fair comparison.
Features | Coreference consistency. Coreferential entity mentions should be assigned the same entity type.
Features | We determine high-recall coreference links between two segments in the same sentence using some simple heuristic rules: |
Features | Then we encode a global feature to check whether two coreferential segments share the same entity type. |
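The heuristic rules themselves are not listed in this excerpt; the sketch below uses two common high-recall stand-ins (exact string match and acronym match) together with a toy version of the global consistency feature, all invented for illustration:

```python
# The heuristic rules themselves are not listed in this excerpt; the two
# below (exact string match, acronym match) are invented stand-ins, as is
# the toy global consistency feature.

def heuristic_coref(seg1, seg2):
    # High-recall within-sentence link: identical strings, or one segment
    # is the initial-letter acronym of the other.
    s1, s2 = seg1.lower(), seg2.lower()
    if s1 == s2:
        return True
    acr1 = "".join(w[0] for w in s1.split())
    acr2 = "".join(w[0] for w in s2.split())
    return s1 == acr2 or s2 == acr1

def type_inconsistency(seg1, type1, seg2, type2):
    # Global feature: fires when two coreferential segments disagree on type.
    return heuristic_coref(seg1, seg2) and type1 != type2

print(heuristic_coref("IBM", "International Business Machines"))
print(type_inconsistency("IBM", "ORG", "International Business Machines", "PER"))
```

Because the links are meant to be high-recall, precision errors are tolerable: the global feature only penalizes type disagreement, it does not force a merge.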
Generating On-the-fly Knowledge | For a T/H pair, apply dependency parsing and coreference resolution.
The Idea | DCS trees can be extended to represent linguistic phenomena such as quantification and coreference , with additional markers introducing additional operations on tables. |
The Idea | Coreference. We use Stanford CoreNLP to resolve coreferences (Raghunathan et al., 2010); coreference is implemented as a special type of selection.
Summarizing Within the Hierarchy | An edge from sentence si to sj with positive weight indicates that sj may follow si in a coherent summary, e.g., continued mention of an event or entity, or a coreference link between si and sj.
Summarizing Within the Hierarchy | A negative edge indicates an unfulfilled discourse cue or coreference mention. |
Summarizing Within the Hierarchy | These are coreference mentions or discourse cues where none of the sentences read before (either in an ancestor summary or in the current summary) contain an antecedent:
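The antecedent condition can be sketched as a toy check; the sentence and entity representations below are invented, not the system's actual ones:

```python
# Toy check (invented sentence/entity representation) for the antecedent
# condition: a pronoun in a candidate sentence needs an antecedent among
# the sentences already read in ancestor summaries or the current summary.

PRONOUNS = {"he", "she", "it", "they"}

def has_unresolved_mention(sentence, sentences_read, candidate_antecedents):
    tokens = set(sentence.lower().split())
    if not PRONOUNS & tokens:
        return False  # no pronoun, nothing to resolve
    read_text = " ".join(sentences_read).lower()
    # Unresolved if no candidate antecedent appears in what was read.
    return not any(e.lower() in read_text for e in candidate_antecedents)

read = ["The court issued a ruling."]
print(has_unresolved_mention("She appealed the decision.", read, ["Smith"]))
```

A real system would use the output of a coreference resolver and discourse-cue lexicon here; the point is only that the check is local to the path of summaries read so far.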