Index of papers in Proc. ACL 2014 that mention
  • NER
Tibshirani, Julie and Manning, Christopher D.
Experiments
We now consider the problem of named entity recognition (NER) to evaluate how our model performs in a large-scale prediction task.
Experiments
In traditional NER, the goal is to determine whether each word is a person, organization, location, or not a named entity (‘other’).
Experiments
For training, we use a large, noisy NER dataset collected by Jenny Finkel.
Introduction
In experiments on a large, noisy NER dataset, we find that this method can provide an improvement over standard logistic regression when annotation errors are present.
NER is mentioned in 6 sentences in this paper.
Zhou, Deyu and Chen, Liangyu and He, Yulan
Experiments
Table 2: Comparison of the performance of event extraction using different NER methods.
Experiments
We experimented with two approaches for named entity recognition (NER) in preprocessing.
Experiments
One is to use the NER tool trained specifically on the Twitter data (Ritter et al., 2011), denoted as “TW-NER” in Table 2.
Methodology
Named entity recognition (NER) is a crucial step since its results directly impact the final extracted 4-tuple (y, d, l, …). It is not easy to accurately identify named entities in Twitter data, since tweets contain many misspellings and abbreviations.
Methodology
First, a traditional NER tool such as the Stanford Named Entity Recognizer is used to identify named entities from the news articles crawled from BBC and CNN during the same period that the tweets were published.
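The snippet above describes a two-step idea: entities recognized in clean news text can help match entities in noisy tweets. A minimal toy sketch of that idea follows; it is not the authors' pipeline, and the gazetteer matcher stands in for a real NER tool such as the Stanford Named Entity Recognizer.

```python
# Toy sketch: entities extracted from news articles (here, a hard-coded
# list standing in for Stanford NER output) serve as a gazetteer for
# matching entity mentions in noisy tweet text.

def build_gazetteer(news_entities):
    """Normalize entity strings extracted from news articles."""
    return {e.lower() for e in news_entities}

def match_entities(tweet, gazetteer):
    """Return gazetteer entities that appear as substrings of a noisy tweet."""
    text = tweet.lower()
    return sorted(e for e in gazetteer if e in text)

gazetteer = build_gazetteer(["Boston", "Red Sox", "CNN"])
print(match_entities("red sox win again!! #boston", gazetteer))
# prints ['boston', 'red sox']
```

A real system would also handle misspellings and abbreviations (e.g. with fuzzy matching), which simple substring lookup does not.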
NER is mentioned in 6 sentences in this paper.
Topics mentioned in this paper: