Index of papers in Proc. ACL that mention
  • NER
Kazama, Jun'ichi and Torisawa, Kentaro
Abstract
We demonstrated with the IREX dataset for Japanese NER that using the constructed clusters as a gazetteer (cluster gazetteer) is an effective way of improving the accuracy of NER .
Abstract
Moreover, we demonstrate that the combination of the cluster gazetteer and a gazetteer extracted from Wikipedia, which is also useful for NER , can further improve the accuracy in several cases.
Gazetteer Induction 2.1 Induction by MN Clustering
Kazama and Torisawa (2007) extracted hyponymy relations from the first sentences (i.e., defining sentences) of Wikipedia articles and then used them as a gazetteer for NER .
Gazetteer Induction 2.1 Induction by MN Clustering
Although this Wikipedia gazetteer is much smaller than the English version used by Kazama and Torisawa (2007) that has over 2,000,000 entries, it is the largest gazetteer that can be freely used for Japanese NER .
Gazetteer Induction 2.1 Induction by MN Clustering
Our experimental results show that this Wikipedia gazetteer can be used to improve the accuracy of Japanese NER .
Introduction
Gazetteers, or entity dictionaries, are important for performing named entity recognition ( NER ) accurately.
Introduction
Most studies using gazetteers for NER are based on the assumption that a gazetteer is a mapping
Introduction
For instance, Kazama and Torisawa (2007) used the hyponymy relations extracted from Wikipedia for the English NER , and reported improved accuracies with such a gazetteer.
Using Gazetteers as Features of NER
Since Japanese has no spaces between words, there are several choices for the token unit used in NER .
Using Gazetteers as Features of NER
The NER task is then treated as a tagging task, which assigns IOB tags to each character in a sentence. We use Conditional Random Fields (CRFs) (Lafferty et al., 2001) to perform this tagging.
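As an illustration of the character-level IOB encoding described here, the following sketch converts a sentence and a list of entity spans into per-character tags. The example span and label name are invented for illustration, not taken from the paper.

```python
# Minimal sketch of character-level IOB encoding for Japanese NER.
# The entity span and label below are illustrative, not from the paper.
def to_iob(sentence, entities):
    """Assign an IOB tag to each character; entities are (start, end, type)."""
    tags = ["O"] * len(sentence)
    for start, end, etype in entities:  # end is exclusive
        tags[start] = "B-" + etype
        for i in range(start + 1, end):
            tags[i] = "I-" + etype
    return list(zip(sentence, tags))

# Characters 0-1 ("Tokyo") tagged as LOCATION; everything else is O.
print(to_iob("東京は晴れ", [(0, 2, "LOCATION")]))
```

A CRF would then be trained over these per-character tag sequences, with gazetteer matches supplying additional features.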
NER is mentioned in 21 sentences in this paper.
Topics mentioned in this paper:
Darwish, Kareem
Abstract
Some languages lack large knowledge bases and good discriminative features for Named Entity Recognition ( NER ) that can generalize to previously unseen named entities.
Introduction
Named Entity Recognition ( NER ) is essential for a variety of Natural Language Processing (NLP) applications such as information extraction.
Introduction
There has been a fair amount of work on NER for a variety of languages including Arabic.
Introduction
To train an NER system, some of the following feature types are typically used (Benajiba and Rosso, 2008; Nadeau and Sekine, 2009):
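The cited feature inventory is not reproduced in this excerpt, but a hedged sketch of surface features commonly used for this purpose (word shape, affixes, context words, gazetteer membership) might look as follows; all names and the gazetteer are illustrative.

```python
# Hedged sketch of typical surface features for a token-level NER
# classifier. Feature names and the gazetteer are illustrative,
# not taken from the cited feature inventories.
def token_features(tokens, i, gazetteer):
    w = tokens[i]
    return {
        "word": w.lower(),
        "is_capitalized": w[:1].isupper(),
        "prefix3": w[:3],
        "suffix3": w[-3:],
        "in_gazetteer": w.lower() in gazetteer,
        "prev_word": tokens[i - 1].lower() if i > 0 else "<s>",
        "next_word": tokens[i + 1].lower() if i < len(tokens) - 1 else "</s>",
    }

gaz = {"cairo", "egypt"}
print(token_features(["He", "visited", "Cairo", "."], 2, gaz))
```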
NER is mentioned in 31 sentences in this paper.
Topics mentioned in this paper:
Liu, Xiaohua and Zhou, Ming and Zhou, Xiangyang and Fu, Zhongyang and Wei, Furu
Abstract
Two main challenges are the errors propagated from named entity recognition ( NER ) and the dearth of information in a single tweet.
Abstract
We evaluate our method on a manually annotated data set, and show that our method outperforms the baseline that handles these two tasks separately, boosting the F1 from 80.2% to 83.6% for NER , and the Accuracy from 79.4% to 82.6% for NEN, respectively.
Introduction
As a result, the task of named entity recognition ( NER ) for tweets, which aims to identify mentions of rigid designators from tweets belonging to named-entity types such as persons, organizations and locations (Nadeau and Sekine, 2007), has attracted increasing research interest.
Introduction
Traditionally, NEN is regarded as a separate task, which takes the output of NER as its input (Li et al., 2002; Cohen, 2005; Jijkoun et al., 2008; Dai et al., 2011).
Introduction
One limitation of this cascaded approach is that errors propagate from NER to NEN and there is no feedback from NEN to NER .
NER is mentioned in 40 sentences in this paper.
Topics mentioned in this paper:
Rüd, Stefan and Ciaramita, Massimiliano and Müller, Jens and Schütze, Hinrich
Abstract
We use search engine results to address a particularly difficult cross-domain language processing task, the adaptation of named entity recognition ( NER ) from news text to web queries.
Abstract
We achieve strong gains in NER performance on news, in-domain and out-of-domain, and on web queries.
Introduction
In this paper, we use piggyback features to address a particularly hard cross-domain problem, the application of an NER system trained on news to web queries.
Introduction
Thus, applying NER systems trained on news to web queries requires a robust cross-domain approach.
Introduction
The lack of context and capitalization, and the noisiness of real-world web queries (tokenization irregularities and misspellings) all make NER hard.
NER is mentioned in 40 sentences in this paper.
Topics mentioned in this paper:
LIU, Xiaohua and ZHANG, Shaodian and WEI, Furu and ZHOU, Ming
Abstract
The challenges of Named Entity Recognition ( NER ) for tweets lie in the insufficient information in a tweet and the unavailability of training data.
Introduction
Named Entity Recognition ( NER ) is generally understood as the task of identifying mentions of rigid designators from text belonging to named-entity types such as persons, organizations and locations (Nadeau and Sekine, 2007).
Introduction
Proposed solutions to NER fall into three categories: 1) the rule-based (Krupka and Hausman, 1998); 2) the machine learning based (Finkel and Manning, 2009; Singh et al., 2010); and 3) hybrid methods (Jansche and Abney, 2002).
Introduction
However, current NER mainly focuses on formal text such as news articles (McCallum and Li, 2003; Etzioni et al., 2005).
Related Work
Related work can be roughly divided into three categories: NER on tweets, NER on non-tweets (e.g., news, biological medicine, and clinical notes), and semi-supervised learning for NER .
Related Work
2.1 NER on Tweets
NER is mentioned in 32 sentences in this paper.
Topics mentioned in this paper:
Turian, Joseph and Ratinov, Lev-Arie and Bengio, Yoshua
Abstract
We evaluate Brown clusters, Collobert and Weston (2008) embeddings, and HLBL (Mnih & Hinton, 2009) embeddings of words on both NER and chunking.
Clustering-based word representations
Brown clusters have been used successfully in a variety of NLP applications: NER (Miller et al., 2004; Liang, 2005; Ratinov & Roth, 2009), PCFG parsing (Candito & Crabbe, 2009), dependency parsing (Koo et al., 2008; Suzuki et al., 2009), and semantic dependency parsing (Zhao et al., 2009).
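As a sketch of how Brown clusters typically enter an NER system as features: each word maps to a bit-string path in the cluster hierarchy, and path prefixes of several lengths become features at different granularities. The paths below are invented for illustration; they are not from any of the cited systems.

```python
# Sketch of the standard use of Brown clusters as NER features:
# prefixes of a word's bit-string path give coarse-to-fine features.
# The cluster paths are invented for illustration.
BROWN_PATHS = {"london": "0110100", "paris": "0110101", "ran": "111010"}

def cluster_features(word, prefix_lengths=(2, 4, 6)):
    path = BROWN_PATHS.get(word.lower())
    if path is None:
        return {}
    return {f"brown_prefix_{n}": path[:n] for n in prefix_lengths}

# "london" and "paris" share path prefixes, so they fire the same
# coarse features even if only one of them appears in training data.
print(cluster_features("London"))
print(cluster_features("Paris"))
```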
Distributional representations
It is not well-understood what settings are appropriate to induce distributional word representations for structured prediction tasks (like parsing and MT) and sequence labeling tasks (like chunking and NER ).
Introduction
In this work, we compare different techniques for inducing word representations, evaluating them on the tasks of named entity recognition ( NER ) and chunking.
Supervised evaluation tasks
Lin and Wu (2009) find that the representations that are good for NER are poor for search query classification, and vice versa.
Supervised evaluation tasks
We apply clustering and distributed representations to NER and chunking, which allows us to compare our semi-supervised models to those of Ando and Zhang (2005) and Suzuki and Isozaki (2008).
Supervised evaluation tasks
NER is typically treated as a sequence prediction problem.
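A minimal sketch of the final step when NER is cast as sequence prediction: decoding entity spans back out of a predicted BIO tag sequence. The tags are illustrative; stray I- tags without a preceding B- are simply ignored here.

```python
# Sketch of reading entity spans out of a predicted BIO tag sequence,
# the usual decoding step when NER is cast as sequence prediction.
def bio_to_spans(tags):
    spans, start = [], None
    for i, tag in enumerate(tags + ["O"]):  # sentinel flushes the last span
        if start is not None and not tag.startswith("I-"):
            spans.append((start, i, tags[start][2:]))
            start = None
        if tag.startswith("B-"):
            start = i
    return spans

print(bio_to_spans(["B-PER", "I-PER", "O", "B-LOC"]))
# -> [(0, 2, 'PER'), (3, 4, 'LOC')]
```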
Unlabeled Data
For this reason, NER results that use RCV1 word representations are a form of transductive learning.
Unlabeled Data
(b) NER results.
NER is mentioned in 24 sentences in this paper.
Topics mentioned in this paper:
Wang, Mengqiu and Che, Wanxiang and Manning, Christopher D.
Abstract
Translated bi-texts contain complementary language cues, and previous work on Named Entity Recognition ( NER ) has demonstrated improvements in performance over monolingual taggers by promoting agreement of tagging decisions between the two languages.
Abstract
We observe that NER label information can be used to correct alignment mistakes, and present a graphical model that performs bilingual NER tagging jointly with word alignment, by combining two monolingual tagging models with two unidirectional alignment models.
Abstract
We design a dual decomposition inference algorithm to perform joint decoding over the combined alignment and NER output space.
Bilingual NER by Agreement
We assume access to two monolingual linear-chain CRF-based NER models that are already trained.
Introduction
We study the problem of Named Entity Recognition ( NER ) in a bilingual context, where the goal is to annotate parallel bi-texts with named entity tags.
Introduction
(2012) have also demonstrated that bi-texts annotated with NER tags can provide useful additional training sources for improving the performance of standalone monolingual taggers.
Introduction
In this work, we first develop a bilingual NER model (denoted as BI-NER) by embedding two monolingual CRF-based NER models into a larger undirected graphical model, and introduce additional edge factors based on word alignment (WA).
NER is mentioned in 41 sentences in this paper.
Topics mentioned in this paper:
Finkel, Jenny Rose and Manning, Christopher D.
Base Models
Because we use a tree representation, it is easy to ensure that the features used in the NER model are identical to those in the joint parsing and named entity model, because the joint model (which we will discuss in Section 4.3) is also based on a tree representation where each entity corresponds to a single node in the tree.
Base Models
The joint model shares the NER and parse features with the respective single-task models.
Experiments and Discussion
We did not run this experiment on the CNN portion of the data, because the CNN data was already being used as the extra NER data.
Experiments and Discussion
Looking at the smaller corpora (NBC and MNB) we see the largest gains, with both parse and NER performance improving by about 8% F1.
Experiments and Discussion
Our one negative result is in the PRI portion: parsing improves slightly, but NER performance decreases by almost 2%.
Hierarchical Joint Learning
There are separate base models for just parsing, just NER, and joint parsing and NER .
Introduction
These high-level systems typically combine the outputs from many low-level systems, such as parsing, named entity recognition ( NER ) and coreference resolution.
NER is mentioned in 14 sentences in this paper.
Topics mentioned in this paper:
Yao, Xuchen and Van Durme, Benjamin and Clark, Peter
Background
Compared to shallow (POS, NER ) structured retrieval, deep structures need more processing power and smoothing, but might also be more precise.
Introduction
This will be our off-the-shelf QA system, which recognizes the association between question type and expected answer types through various features based on e.g., part-of-speech tagging (POS) and named entity recognition ( NER ).
Introduction
For instance, line 2 in Table 1 says that if there is a when question, and the current token’s NER label is DATE, then it is likely that this token is tagged as ANS.
Introduction
IR can easily make use of this knowledge: for a when question, IR retrieves sentences with tokens labeled as DATE by NER , or POS tagged as CD.
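A toy version of the retrieval rule described here, assuming sentences come pre-annotated as (token, POS, NER) triples; the data structures are illustrative and are not Indri's actual query interface.

```python
# Sketch of the when-question heuristic: keep a sentence only if it
# contains a token whose NER label is DATE or whose POS tag is CD.
# The triple representation is an assumption for illustration.
def matches_when_question(sentence):
    """sentence: list of (token, pos, ner) triples."""
    return any(ner == "DATE" or pos == "CD" for _, pos, ner in sentence)

sent = [("Rome", "NNP", "GPE"), ("fell", "VBD", "O"), ("in", "IN", "O"),
        ("476", "CD", "DATE")]
print(matches_when_question(sent))  # True
```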
Method
We let the trained QA system guide the query formulation when performing coupled retrieval with Indri (Strohman et al., 2005), given a corpus already annotated with POS tags and NER labels.
Method
For instance, the NER tagger we used divides location into two categories: GPE (geo locations) and LOC
Method
Take the previous where question: besides NER[0]=GPE and NER[0]=LOC, we also found, oddly, NER[0]=PERSON to be an important feature, because the NER tool sometimes mistakes PERSON for LOC.
NER is mentioned in 15 sentences in this paper.
Topics mentioned in this paper:
Vadas, David and Curran, James R.
Abstract
We also implement novel NER features that generalise the lexical information needed to parse NPs and provide important semantic information.
Experiments
Our experiments are run with the C&C CCG parser (Clark and Curran, 2007b), and will evaluate the changes made to CCGbank, as well as the effectiveness of the NER features.
Experiments
Table 5: Parsing results with NER features
Experiments
5.3 NER features results
Introduction
In particular, we implement new features using NER tags from the BBN Entity Type Corpus (Weischedel and Brunstein, 2005).
Introduction
Applying the NER features results in a total increase of 1.51%.
NER features
Named entity recognition ( NER ) provides information that is particularly relevant for NP parsing, simply because entities are nouns.
NER features
There has also been recent work combining NER and parsing in the biomedical field.
NER features
Lewin (2007) experiments with detecting base-NPs using NER information, while Buyko et al.
NER is mentioned in 19 sentences in this paper.
Topics mentioned in this paper:
Saha, Sujan Kumar and Mitra, Pabitra and Sarkar, Sudeshna
Introduction
Named Entity Recognition ( NER ) involves locating and classifying the names in a text.
Introduction
NER is an important task, having applications in information extraction, question answering, machine translation and in most other Natural Language Processing (NLP) applications.
Introduction
NER systems have been developed for English and a few other languages with high accuracy.
Maximum Entropy Based Model for Hindi NER
MaxEnt computes the probability p(o|h) for any o from the space of all possible outcomes O, and for every h from the space of all possible histories H. In NER , the history can be viewed as all information derivable from the training corpus relative to the current token.
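For reference, the standard conditional maximum-entropy form behind p(o|h) is the textbook one below (reconstructed from the usual definition, not quoted from the paper), with feature functions f_j pairing a history with an outcome:

```latex
% Standard conditional MaxEnt model (textbook form):
p(o \mid h) = \frac{1}{Z(h)} \exp\Big(\sum_j \lambda_j f_j(h, o)\Big),
\qquad
Z(h) = \sum_{o' \in O} \exp\Big(\sum_j \lambda_j f_j(h, o')\Big)
```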
Maximum Entropy Based Model for Hindi NER
The training data for the Hindi NER task is composed of about 243K words collected from the popular daily Hindi newspaper “Dainik Jagaran”.
NER is mentioned in 21 sentences in this paper.
Topics mentioned in this paper:
Lin, Dekang and Wu, Xiaoyun
Abstract
Our NER system achieves the best current result on the widely used CoNLL benchmark.
Conclusions
Our system achieved the best current result on the CoNLL NER data set.
Named Entity Recognition
Named entity recognition ( NER ) is one of the first steps in many applications of information extraction, information retrieval, question answering and other applications of NLP.
Named Entity Recognition
2001) is one of the most competitive NER algorithms.
Named Entity Recognition
The CoNLL 2003 Shared Task (Tjong Kim Sang and Meulder 2003) offered a standard experimental platform for NER .
NER is mentioned in 10 sentences in this paper.
Topics mentioned in this paper:
Tomanek, Katrin and Hahn, Udo
Active Learning for Sequence Labeling
This might, in particular, apply to NER where larger stretches of sentences do not contain any entity mention at all, or merely trivial instances of an entity class easily predictable by the current model.
Conditional Random Fields for Sequence Labeling
Many NLP tasks, such as POS tagging, chunking, or NER , are sequence labeling problems, where a sequence of class labels y = (y1, ..., yn) is assigned to a sequence of input units.
Experiments and Results
This coincides with the assumption that SeSAL works especially well for labeling tasks where some classes occur predominantly and can, in most cases, easily be discriminated from the other classes, as is the case in the NER scenario.
Introduction
recognition ( NER ), the examples selected by AL are sequences of text, typically sentences.
Introduction
In the NER scenario, e.g., large portions of the text do not contain any target entity mention at all.
Introduction
Our experiments are laid out in Section 5 where we compare fully and semi-supervised AL for NER on two corpora, the newspaper selection of MUC7 and PENNBIOIE, a biological abstracts corpus.
Summary and Discussion
Our experiments in the context of the NER scenario render evidence to the hypothesis that the proposed approach to semi-supervised AL (SeSAL) for sequence labeling indeed strongly reduces the amount of tokens to be manually annotated — in terms of numbers, about 60% compared to its fully supervised counterpart (FuSAL), and over 80% compared to a totally passive learning scheme based on random selection.
Summary and Discussion
In our experiments on the NER scenario, those regions were mentions of entity names or linguistic units which had a surface appearance similar to entity mentions but could not yet be correctly distinguished by the model.
Summary and Discussion
Future research is needed to empirically investigate into this area and quantify the savings in terms of the time achievable with SeSAL in the NER scenario.
NER is mentioned in 9 sentences in this paper.
Topics mentioned in this paper:
Yang, Fan and Zhao, Jun and Liu, Kang
Asymmetric Alignment Method for Equivalent Extraction
1) The traditional alignment method needs the NER process on both sides, but the NER process may often bring in some mistakes.
Asymmetric Alignment Method for Equivalent Extraction
The NER process is not necessary because we align the Chinese ON with English sentences directly.
Experiments
The asymmetric alignment method can avoid the mistakes made in the NER process and give an explicit alignment matching.
Experiments
Our method can overcome the mistakes introduced in the NER process.
Introduction
However, the named entity recognition ( NER ) will always introduce some mistakes.
Introduction
In order to avoid NER mistakes, we propose an asymmetric alignment method which aligns the Chinese ON with an English sentence directly and then extracts the English fragment with the largest alignment score as the equivalent.
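A brute-force sketch of the asymmetric alignment idea: enumerate contiguous English fragments, score each against the Chinese organization name (ON), and keep the best-scoring one. The translation-score table is invented for illustration; the authors' actual alignment model is not reproduced here.

```python
# Brute-force sketch of asymmetric alignment: score every contiguous
# English fragment against the Chinese ON and keep the best. The
# translation-score table is invented; shorter spans win score ties.
TRANS_SCORE = {("北京", "beijing"): 1.0, ("大学", "university"): 1.0}

def score(cn_words, en_span):
    return sum(TRANS_SCORE.get((c, e.lower()), 0.0)
               for c in cn_words for e in en_span)

def best_fragment(cn_words, en_words, max_len=6):
    spans = [en_words[i:j] for i in range(len(en_words))
             for j in range(i + 1, min(i + 1 + max_len, len(en_words) + 1))]
    return max(spans, key=lambda s: (score(cn_words, s), -len(s)))

print(best_fragment(["北京", "大学"], "he studies at Beijing University".split()))
# -> ['Beijing', 'University']
```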
Introduction
The asymmetric alignment method can avoid the influence of improper results of NER and generate an explicit matching between the source and the target phrases which can guarantee the precision of alignment.
NER is mentioned in 8 sentences in this paper.
Topics mentioned in this paper:
Speriosu, Michael and Baldridge, Jason
Data
Table 1 gives statistics for both corpora, including the number and ambiguity of gold standard toponyms for both as well as NER-identified toponyms for TR-CONLL.
Data
We use the pre-trained English NER from the OpenNLP project.
Evaluation
False positives occur when the NER incorrectly predicts a toponym, and false negatives occur when it fails to predict a toponym identified by the annotator.
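The evaluation described here is the usual exact-match precision/recall computation over predicted versus gold toponym spans; a minimal sketch, with illustrative span data:

```python
# Sketch of exact-match precision, recall, and F1 over predicted vs.
# gold toponym spans. Example spans are illustrative.
def prf(pred, gold):
    pred, gold = set(pred), set(gold)
    tp = len(pred & gold)                 # correctly predicted toponyms
    p = tp / len(pred) if pred else 0.0   # penalized by false positives
    r = tp / len(gold) if gold else 0.0   # penalized by false negatives
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

print(prf(pred=[(0, 1), (5, 6)], gold=[(0, 1), (9, 10)]))
# one true positive, one false positive, one false negative -> (0.5, 0.5, 0.5)
```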
Results
In this case, the ORACLE results are less than 100% due to the limitations of the NER, and represent the best possible results given the NER we used.
Results
Results on TR-CONLL indicate much higher performance than the resolvers presented by Leidner (2008), whose F-scores do not exceed 36.5% with either gold or NER toponyms. TRC-TEST is a subset of the documents Leidner uses (he did not split development and test data), but the results still come from overlapping data.
Results
However, our evaluation is more penalized since SPIDER loses precision for NER’s false positives (Jack London as a location) while Leidner only evaluated on actual locations.
Toponym Resolvers
Given a set of toponyms provided via annotations or identified using NER , a resolver must select a candidate location for each toponym (or, in some cases, a resolver may abstain).
Toponym Resolvers
States and countries are not annotated in CWAR, so we do not evaluate end-to-end using NER plus toponym resolution for it as there are many (falsely) false positives.
NER is mentioned in 8 sentences in this paper.
Topics mentioned in this paper:
Richman, Alexander E. and Schone, Patrick
Abstract
In this paper, we describe a system by which the multilingual characteristics of Wikipedia can be utilized to annotate a large corpus of text with Named Entity Recognition ( NER ) tags requiring minimal human intervention and no linguistic expertise.
Abstract
language data can be used to bootstrap the NER process in other languages.
Introduction
Named Entity Recognition ( NER ) has long been a major task of natural language processing.
Training Data Generation
Our approach to multilingual NER is to pull back the decision-making process to English whenever possible, so that we could apply some level of linguistic expertise.
Wikipedia 2.1 Structure
The authors noted that their results would need to pass a manual supervision step before being useful for the NER task, and thus did not evaluate their results in the context of a full NER system.
Wikipedia 2.1 Structure
phrases to the classical NER tags (PERSON, LOCATION, etc.)
Wikipedia 2.1 Structure
For example, they used the sentence “Franz Fischler is an Austrian politician” to associate the label “politician” with the surface form “Franz Fischler.” They proceeded to show that the dictionaries generated by their method are useful when integrated into an NER system.
NER is mentioned in 8 sentences in this paper.
Topics mentioned in this paper:
Celikyilmaz, Asli and Thint, Marcus and Huang, Zhiheng
Experiments
line represents a converted question; in order to extract the question-type feature, we use a matching NER type between the headline and the candidate sentence to set the question-type NER match feature.
Experiments
From Section 2.2, QTCF represents the question-type NER match feature, LexSem is the bundle of lexico-semantic features, and QComp comprises the matching features of subject, head, object, and three complements.
Feature Extraction for Entailment
Named-Entity Recognizer ( NER ): This component identifies and classifies basic entities such as proper names of person, organization, product, location; time and numerical expressions such as year, day, month; various measurements such as weight, money, percentage; contact information like address, webpage, phone-number, etc.
Feature Extraction for Entailment
The NER module is based on a combination of user defined rules based on Lesk word disambiguation (Lesk, 1988), WordNet (Miller, 1995) lookups, and many user-defined dictionary lookups, e.g.
Feature Extraction for Entailment
During the NER extraction, we also employ phrase analysis based on our phrase utility extraction method using the Stanford dependency parser (Klein and Manning, 2003).
NER is mentioned in 7 sentences in this paper.
Topics mentioned in this paper:
Chambers, Nathanael
Experiments and Results
MaxEnt Time is the discriminative model with rich time features (but not NER) as described in Section 3.3.2 (Time+NER includes NER ).
Experiments and Results
7%, and adding NER by another 6%.
Timestamp Classifiers
However, we instead propose using NER labels to extract what may have counted as collocations in their data.
Timestamp Classifiers
We compare the NER features against the Unigram and Filtered NLLR models in our final experiments.
Timestamp Classifiers
We use the freely available Stanford Parser and NER system to generate the syntactic interpretation for these features.
NER is mentioned in 7 sentences in this paper.
Topics mentioned in this paper:
Minkov, Einat and Zettlemoyer, Luke
Corporate Acquisitions
Table 5 further shows results on NER , the task of recovering the sets of named entity mentions pertaining to each target field.
Related Work
Interestingly, several researchers have attempted to model label consistency and high-level relational constraints using state-of-the-art sequential models of named entity recognition ( NER ).
Related Work
We will show that this approach yields better performance on the CMU seminar announcement dataset when evaluated in terms of NER .
Related Work
Our approach is complementary to NER methods, as it can consolidate noisy overlapping predictions from multiple systems into coherent sets.
Seminar Extraction Task
We used a set of rules to extract candidate named entities per the types specified in Figure 2. The rules encode information typically used in NER , including content and contextual patterns, as well as lookups in available dictionaries (Finkel et al., 2005; Minkov et al., 2005).
Seminar Extraction Task
Lexical features of this form are commonly used in NER (Finkel et al., 2005; Minkov et al., 2005).
Seminar Extraction Task
(2005), applied sequential models to perform NER on this dataset, identifying named entities that pertain to the template slots.
NER is mentioned in 7 sentences in this paper.
Topics mentioned in this paper:
Faruqui, Manaal and Dyer, Chris
Abstract
To evaluate our method, we use the word clusters in an NER system and demonstrate a statistically significant improvement in F1 score when using bilingual word clusters instead of monolingual clusters.
Conclusions
We have shown that improvement in clustering can be obtained across a range of language pairs, evaluated in terms of their value as features in an extrinsic NER task.
Experiments
Our evaluation task is the German corpus with NER annotation that was created for the shared task at CoNLL-2003.
Experiments
Table 1 shows the performance of NER when the word clusters are obtained using only the bilingual information for different language pairs.
Experiments
We varied the weight of the bilingual objective (β) from 0.05 to 0.9 and observed the effect on NER performance for the English-German language pair.
NER is mentioned in 7 sentences in this paper.
Topics mentioned in this paper:
Tibshirani, Julie and Manning, Christopher D.
Experiments
We now consider the problem of named entity recognition ( NER ) to evaluate how our model performs in a large-scale prediction task.
Experiments
In traditional NER , the goal is to determine whether each word is a person, organization, location, or not a named entity (‘other’).
Experiments
For training, we use a large, noisy NER dataset collected by Jenny Finkel.
Introduction
In experiments on a large, noisy NER dataset, we find that this method can provide an improvement over standard logistic regression when annotation errors are present.
NER is mentioned in 6 sentences in this paper.
Topics mentioned in this paper:
Tian, Zhenhua and Xiang, Hengheng and Liu, Ziqi and Zheng, Qinghua
RSP: A Random Walk Model for SP
Random confounder (RND) is closest to the realistic case, while nearest confounder ( NER ) is reproducible and avoids frequency bias (Chambers and Jurafsky, 2010).
RSP: A Random Walk Model for SP
In this work, we employ both RND and NER confounders: 1) for RND, we randomly select
RSP: A Random Walk Model for SP
2) for NER , we first sort the arguments by their frequency.
NER is mentioned in 6 sentences in this paper.
Topics mentioned in this paper:
Zhou, Deyu and Chen, Liangyu and He, Yulan
Experiments
Table 2: Comparison of the performance of event extraction using different NER methods.
Experiments
We experimented with two approaches for named entity recognition ( NER ) in preprocessing.
Experiments
One is to use the NER tool trained specifically on the Twitter data (Ritter et al., 2011), denoted as “TW-NER” in Table 2.
Methodology
Named entity recognition ( NER ) is a crucial step since the results would directly impact the final extracted 4-tuple (y, d, l, ...). It is not easy to accurately identify named entities in the Twitter data since tweets contain a lot of misspellings and abbreviations.
Methodology
First, a traditional NER tool such as the Stanford Named Entity Recognizer is used to identify named entities from the news articles crawled from BBC and CNN during the same period that the tweets were published.
NER is mentioned in 6 sentences in this paper.
Topics mentioned in this paper:
Kim, Sungchul and Toutanova, Kristina and Yu, Hwanjo
Data and task
The Figure also shows the results of the Stanford NER tagger for English (Finkel et al., 2005) (we used the MUC-7 classifier).
Introduction
Named Entity Recognition ( NER ) is a frequently needed technology in NLP applications.
Introduction
State-of-the-art statistical models for NER typically require a large amount of training data and linguistic expertise to be sufficiently accurate, which makes it nearly impossible to build high-accuracy models for a large number of languages.
Introduction
Recently, there have been two lines of work which have offered hope for creating NER analyzers in many languages.
NER is mentioned in 6 sentences in this paper.
Topics mentioned in this paper:
Huang, Ruihong and Riloff, Ellen
Introduction
Semantic class tagging has been the subject of previous research, primarily under the guises of named entity recognition ( NER ) and mention detection.
Related Work
Semantic class tagging is most closely related to named entity recognition ( NER ), mention detection, and semantic lexicon induction.
Related Work
NER systems (e.g., Bikel et al., 1997; Collins and Singer, 1999; Cucerzan and Yarowsky, 1999; Fleischman and Hovy, 2002) identify proper named entities, such as people, organizations, and locations.
Related Work
Several bootstrapping methods for NER have been previously developed (e.g., Collins and Singer, 1999; Niu et al., 2003).
NER is mentioned in 6 sentences in this paper.
Topics mentioned in this paper:
Abu-Jbara, Amjad and Dasigi, Pradeep and Diab, Mona and Radev, Dragomir
Approach
Table 3: Some of the entities identified using NER and NP Chunking in a discussion thread about the US 2012 elections
Approach
In addition to this shallow parsing method, we also use named entity recognition ( NER ) to identify more entities.
Approach
Now, both mentions of Obama will be recognized by the Stanford NER system and will be identified as one entity.
Evaluation
Although using both named entity recognition ( NER ) and noun phrase chunking achieves better results, it can also be noted from the results that NER contributes more to the system performance.
NER is mentioned in 5 sentences in this paper.
Topics mentioned in this paper:
Durrett, Greg and Hall, David and Klein, Dan
Experiments
We use the standard automatic parses and NER tags for each document.
Introduction
We evaluate our system on the dataset from the CoNLL 2011 shared task using three different types of properties: synthetic oracle properties, entity phi features (number, gender, animacy, and NER type), and properties derived from unsupervised clusters targeting semantic type information.
Models
Agreement features: Gender, number, animacy, and NER type of the current mention and the antecedent (separately and conjoined).
Models
Each mention i has been augmented with a single property node p_i ∈ {1, ...}. The unary factors encode prior knowledge about the setting of each p_i; these factors may be hard (I will not refer to a plural entity), soft (such as a distribution over named entity types output by an NER tagger), or practically uniform (e.g., the last name Smith does not specify a particular gender).
NER is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Guo, Weiwei and Li, Hao and Ji, Heng and Diab, Mona
Creating Text-to-text Relations via Twitter/News Features
Directly applying Named Entity Recognition ( NER ) tools on news titles or
Creating Text-to-text Relations via Twitter/News Features
Accordingly, we first apply the NER tool on news summaries, then label named entities in the tweets in the same way as labeling the hashtags: if there is a string in the tweet that matches a named entity from the summaries, then it is labeled as a named entity in the tweet.
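A minimal sketch of the matching step as described: any string in the tweet that matches a named entity taken from the news summaries is labeled as a named entity in the tweet. Matching is done case-insensitively here for illustration; the example data is invented.

```python
# Sketch of labeling tweet named entities by exact string match
# against entities found in news summaries. Longer entities are
# tried first so they are not shadowed by shorter matches.
def label_tweet(tweet, summary_entities):
    found = []
    for entity in sorted(summary_entities, key=len, reverse=True):
        start = tweet.lower().find(entity.lower())
        if start != -1:
            found.append((entity, start, start + len(entity)))
    return found

ents = {"Barack Obama", "White House"}
print(label_tweet("obama speaks at the white house today", ents))
# -> [('White House', 20, 31)]
```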
Creating Text-to-text Relations via Twitter/News Features
The noise introduced during automatic NER accumulates much faster given the large number of named entities in news data.
Introduction
Named entities acquired from a news document, typically with high accuracy using Named Entity Recognition ( NER ) tools, may be particularly informative.
NER is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Scheible, Christian and Schütze, Hinrich
Distant Supervision
Generally, the quality of NER is crucial in this task.
Features
As standard named entity recognition ( NER ) systems do not capture categories that are relevant to the movie domain, we opt for a lexicon-based approach similar to (Zhuang et al., 2006).
Features
Many entries are unsuitable for NER , e.g., dog is frequently listed as a character.
Features
This rule has precedence over NER, so if a name matches a labeled entity, we do not attempt to label it through NER .
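A toy rendering of this precedence rule, with both the domain lexicon and the fallback NER tagger as stand-ins; the labels and data are illustrative, not the paper's implementation.

```python
# Sketch of lexicon-precedence labeling: a token matching the domain
# lexicon keeps its lexicon label, and only unmatched tokens fall
# through to the generic NER tagger. Both labelers are stand-ins.
def label(tokens, lexicon, ner_tag):
    labels = []
    for tok in tokens:
        if tok.lower() in lexicon:        # lexicon has precedence
            labels.append(lexicon[tok.lower()])
        else:
            labels.append(ner_tag(tok))   # fall back to NER
    return labels

lexicon = {"gandalf": "CHARACTER"}
ner_tag = lambda t: "PERSON" if t.istitle() else "O"  # stand-in tagger
print(label(["Gandalf", "met", "Frodo"], lexicon, ner_tag))
# -> ['CHARACTER', 'O', 'PERSON']
```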
NER is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Chiticariu, Laura and Krishnamurthy, Rajasekar and Li, Yunyao and Raghavan, Sriram and Reiss, Frederick and Vaithyanathan, Shivakumar
Experimental Evaluation
• NER : named-entity recognition for Person, Organization, Location, Address, PhoneNumber, EmailAddress, URL and DateTime.
Experimental Evaluation
We chose NER primarily because named-entity recognition is a well-studied problem and standard datasets are available for evaluation.
Experimental Evaluation
To the best of our knowledge, ANNIE (Cunningham et al., 2002) is the only publicly available NER library implemented in a grammar-based system (JAPE in GATE).
NER is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Szpektor, Idan and Dagan, Ido and Bar-Haim, Roy and Goldberger, Jacob
Contextual Preferences Models
We identify entity types using the default Lingpipe Named-Entity Recognizer ( NER ), which recognizes the types Location, Person and Organization.
Contextual Preferences Models
To construct cpv;n(r), we currently use a simple approach where each individual term in cpv;e(r) is analyzed by the NER system, and its type (if any) is added to cpv;n(r).
Experimental Settings
The Contextual Preferences for h were constructed manually: the named-entity types for cpv;n(h) were set by adapting the entity types given in the guidelines to the types supported by the Lingpipe NER (described in Section 3.2).
NER is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Arnold, Andrew and Nallapati, Ramesh and Cohen, William W.
Conclusions, related & future work
Thus hierarchical priors seem a natural, effective and robust choice for transferring learning across NER datasets and tasks.
Introduction
Consider the task of named entity recognition ( NER ).
Introduction
In many NER problems, features are often constructed as a series of transformations of the input training data, performed in sequence.
NER is mentioned in 3 sentences in this paper.
Topics mentioned in this paper: