Abstract | Two main challenges of this task are the dearth of information in a single tweet and the rich entity mention variations. |
Conclusions and Future work | Second, we want to integrate the entity mention normalization techniques as introduced by Liu et al. |
Introduction | In this work, we study the entity linking task for tweets, which maps each entity mention in a tweet to a unique entity, i.e., an entry ID of a knowledge base like Wikipedia. |
Introduction | This means an entity mention often occurs in many tweets, which allows us to aggregate all related tweets to compute mention-mention and mention-entity similarities.
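The aggregation idea above can be sketched as follows: pool the words of every tweet containing a mention into one context vector, then compare two mentions by cosine similarity. This is a minimal illustration, not the paper's actual similarity function; the mention strings and tweets are invented.

```python
from collections import Counter
from math import sqrt

def aggregate_context(tweets_by_mention, mention):
    # Pool the word counts of every tweet that contains the mention.
    ctx = Counter()
    for tweet in tweets_by_mention[mention]:
        ctx.update(tweet.lower().split())
    return ctx

def cosine(a, b):
    # Cosine similarity between two bag-of-words Counters.
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical tweets mentioning two surface variants of the same entity.
tweets_by_mention = {
    "svp": ["svp just unveiled the new phone", "svp stock is up today"],
    "silicon valley phones": ["silicon valley phones unveiled a new phone"],
}
ctx_a = aggregate_context(tweets_by_mention, "svp")
ctx_b = aggregate_context(tweets_by_mention, "silicon valley phones")
sim = cosine(ctx_a, ctx_b)
```

Aggregating across tweets is what compensates for the dearth of information in any single tweet: the pooled context is far richer than one 140-character message.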
Related Work | (2012) propose LIEGE, a framework to link the entities in Web lists to a knowledge base, under the assumption that the entities mentioned in a Web list tend to be a collection of entities of the same conceptual type.
Related Work | They propose a machine learning based approach using n-gram features, concept features, and tweet features, to identify concepts semantically related to a tweet, and for every entity mention to generate links to its corresponding Wikipedia article. |
Event Extraction Task | Event argument: an entity mention, temporal expression, or value (e.g.
Joint Framework for Event Extraction | For example, if the nearest entity mention is “Company”, the current token is likely to be a Personnel trigger, whether End-Position or Start-Position.
Joint Framework for Event Extraction | In this example, an entity mention is the Victim argument of the Die event and the Target argument of the Attack event, and the two event triggers are connected by the typed dependency advcl.
Joint Framework for Event Extraction | If a partial configuration mistakenly classifies more than one entity mention as Place arguments for the same trigger, then it will be penalized. |
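The constraint above can be sketched as a simple penalty function over a partial configuration: count the Place arguments attached to each trigger and penalize any trigger with more than one. The trigger identifiers and the penalty value are illustrative, not from the paper.

```python
from collections import Counter

def place_penalty(arguments, penalty=1.0):
    # arguments: (trigger_id, role) pairs in a partial configuration.
    # Penalize every trigger that has more than one Place argument.
    counts = Counter(t for t, role in arguments if role == "Place")
    return sum(penalty for c in counts.values() if c > 1)

# A configuration that wrongly gives the same trigger two Place arguments.
cfg = [("attack-1", "Place"), ("attack-1", "Place"), ("attack-1", "Target")]
p = place_penalty(cfg)
```

In beam search over configurations, such a penalty lowers the score of structurally invalid hypotheses so valid ones are preferred.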
Headline generation | In the end, for each entity mentioned in the document we have a unique identifier, a list of all its mentions in the document, and a list of class labels from Freebase.
Headline generation | GETRELEVANTENTITIES: For each news collection N we collect the set E of the entities mentioned most often within the collection. |
Headline generation | We invoke INFERENCE again, now using simultaneously all the patterns extracted for every subset g ⊆ E. This computes a probability distribution w over all patterns involving any admissible subset of the entities mentioned in the collection.
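Normalizing pattern scores into the distribution w can be sketched as below. The paper does not specify the normalization used; a softmax over raw pattern scores is an assumption here, and the pattern strings are invented placeholders.

```python
from math import exp

def pattern_distribution(scores):
    # Turn raw pattern scores into a probability distribution w via softmax
    # (the normalization choice is an assumption, not the paper's method).
    m = max(scores.values())            # subtract the max for numerical stability
    exps = {p: exp(s - m) for p, s in scores.items()}
    z = sum(exps.values())
    return {p: v / z for p, v in exps.items()}

w = pattern_distribution({"<ENT0> wins <ENT1>": 2.0, "<ENT0> visits <ENT1>": 0.5})
```

The resulting w sums to one, so the highest-probability pattern can be read off directly when generating the headline.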
Baseline | The arguments are the entity mentions involved in an event mention with a specific role, i.e., the relation of an argument to the event in which it participates.
Inferring Inter-Sentence Arguments on Relevant Event Mentions | T_{i,j,k} is the kth event mention in sentence S_{i,j}; A_{i,j,k,l} is the lth candidate argument in event mention T_{i,j,k}; Z is used to denote ⟨i,j,k,l⟩; f_AI(E_Z) is the score of AI identifying entity mention E_Z as an argument, where E_Z is the lth entity of the kth event mention of the jth sentence of the ith discourse in document D; f_RD(E_Z, R_m) is the score of RD assigning role R_m to argument E_Z.
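One plausible way to combine the two scores above is to add, for each candidate argument, its identification score f_AI(E_Z) to the score of its best role f_RD(E_Z, R_m). This is a sketch under that assumption; the entity names, roles, and numbers are illustrative, not from the paper.

```python
def joint_score(f_ai, f_rd):
    # f_ai: entity -> identification score f_AI(E_Z)
    # f_rd: entity -> {role: score} for f_RD(E_Z, R_m)
    total = 0.0
    for ez, ident in f_ai.items():
        # Pick the best-scoring role for each identified argument.
        best_role_score = max(f_rd[ez].values())
        total += ident + best_role_score
    return total

# Illustrative scores for two candidate arguments.
f_ai = {"E1": 0.9, "E2": 0.4}
f_rd = {"E1": {"Victim": 0.8, "Place": 0.1}, "E2": {"Target": 0.6}}
score = joint_score(f_ai, f_rd)  # (0.9 + 0.8) + (0.4 + 0.6)
```

Summing per-argument scores like this lets inference compare alternative argument assignments across sentences by a single number.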
Inferring Inter-Sentence Arguments on Relevant Event Mentions | same entity mention.