Conclusion | Thus, the MAP goes up from 29.6% (best result on the balanced corpora) to 42.3% (best result on the unbalanced corpora) in the breast cancer domain, and from 16.5% to 26.0% in the diabetes domain. |
Conclusion | Here, the MAP goes up from 42.3% (best result on the unbalanced corpora) to 46.9% (best result on the unbalanced corpora with prediction) in the breast cancer domain, and from 26.0% to 29.8% in the diabetes domain. |
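The MAP figures quoted above follow the standard definition of mean average precision: per-query average precision, averaged over all queries. A minimal sketch of that computation, with illustrative function names not taken from the papers:

```python
def average_precision(ranked, relevant):
    """AP for one query: mean of the precision values at each rank
    where a relevant item appears in the ranked list."""
    hits = 0
    precision_sum = 0.0
    for i, doc in enumerate(ranked, start=1):
        if doc in relevant:
            hits += 1
            precision_sum += hits / i
    return precision_sum / len(relevant) if relevant else 0.0

def mean_average_precision(runs):
    """MAP: average of per-query AP.
    runs: list of (ranked_list, relevant_set) pairs, one per query."""
    return sum(average_precision(r, rel) for r, rel in runs) / len(runs)
```

For example, a query ranked `['a', 'b', 'c']` with relevant set `{'a', 'c'}` gets AP = (1/1 + 2/3) / 2 ≈ 0.833.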
Experiments and Results | We chose the balanced corpora where the standard approach has shown the best results in the previous experiment, namely [breast cancer corpus 12] and [diabetes corpus 7]. |
Experiments and Results | We can see that the best results are obtained by the Sourcepred approach for both comparable corpora. |
Experiments and Results | We can also notice that the Balanced + Prediction approach slightly outperforms the baseline, while the Unbalanced + Prediction approach gives the best results.
Abstract | Our model obtains the best results to date on recent shared task data for Arabic, Chinese, and English. |
Conclusion | We evaluated our system on all three languages from the CoNLL 2012 Shared Task and present the best results to date on these data sets. |
Introduction | We obtain the best results to date on these data sets.
Results | In Table 2 we compare the results of the nonlocal system (This paper) to the best results from the CoNLL 2012 Shared Task. Specifically, this includes Fernandes et al.’s (2012) system for Arabic and English (denoted Fernandes), and Chen and Ng’s (2012) system for Chinese (denoted C&N).
Extensions | Best result in each column in bold. |
Extensions | Best result in each column in bold. |
Extensions | Best result in each column in bold. |
Experiments and Results | The three best results and the median from TAC-KBP 2012 systems are shown in the remaining columns for the sake of comparison. |
Experiments and Results | We observe that the complete algorithm (co-references, named entity labels and MDP) provides the best results on PER NE links. |
Experiments and Results | On GPE and ORG entities, the simple application of MDP without prior corrections obtains the best results.
Abstract | In our best result (on Assamese), our approach can predict 29% of the token-based out-of-vocabulary items with a small amount of unlabeled training data.
Evaluation | The best results (again, except for Pashto) are achieved using one of the three reranking methods (reranking by trigraph probabilities or morpheme boundaries) as opposed to doing no reranking. |
Introduction | In our best result (on Assamese), we show that our approach can predict 29% of the token-based out-of-vocabulary items with a small amount of unlabeled training data.
Experiments | The best result is achieved by combining DT and PSQ with DK and VW. |
Experiments | Borda voting gives the best result under MAP which is probably due to the adjustment of the interpolation parameter for MAP on the development set. |
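Borda voting here refers to the standard rank-fusion scheme: an item at position p in a ranked list of length n receives n − p points, and items are re-ranked by their total across systems. A minimal sketch (the function name is illustrative, not from the paper):

```python
from collections import defaultdict

def borda_fuse(rankings):
    """Fuse several ranked lists by Borda count: position p in a list
    of length n earns n - p points; re-rank items by total score."""
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for pos, item in enumerate(ranking):
            scores[item] += n - pos
    # Descending score; break ties alphabetically for determinism.
    return sorted(scores, key=lambda item: (-scores[item], item))
```

For instance, fusing `['a','b','c']`, `['b','a','c']`, and `['a','c','b']` scores a=8, b=6, c=4, giving the fused ranking `['a', 'b', 'c']`.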
Experiments | Under NDCG and PRES, LinLearn achieves the best results, showing the advantage of automatically learning combination weights, which leads to stable results across various metrics.
Discussion and conclusion | Automated configuration selection yielded positive results, yet the system with context size one and an L2 language model component often produced the best results.
Experiments & Results | Here we observe that a context width of one yields the best results.
Experiments & Results | This combination of a classifier with context size one and trigram-based language model proves to be most effective and reaches the best results so far. |
Experiments | Best result on |
Experiments | We experimented with various discriminative learners on DEV, including logistic regression, perceptron and SVM, and found L1-regularized logistic regression to give the best result.
Experiments | The final F1 of 42.0% is a relative improvement of one third over the previous best result of 31.4% (Berant et al., 2013).
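The "one third" relative improvement can be verified with a quick calculation (variable names below are illustrative):

```python
# Previous best F1 (Berant et al., 2013) and the reported final F1.
previous_best = 31.4
final_f1 = 42.0

# Relative improvement: absolute gain divided by the previous score.
relative_improvement = (final_f1 - previous_best) / previous_best
print(f"relative improvement: {relative_improvement:.1%}")  # about 33.8%, i.e. roughly one third
```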