Evaluation of NE Recognition | From the table we observe that the best result is obtained when k is 100.
Evaluation of NE Recognition | Similarly, when we deal with all the words in the corpus (17,465 words), we obtain the best results when the words are clustered into 1,100 clusters.
Evaluation of NE Recognition | The best result is obtained when important words for the two preceding and two following positions (defined in Section 4.3) are selected.
Maximum Entropy Based Model for Hindi NER | While experimenting with static word features, we observed that a window of the previous two and next two words (wi-2 ... wi+2) gives the best result (69.09) using the word features only.
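The +/-2 window of static word features described above can be sketched as follows; this is a minimal illustration, not the paper's implementation, and the feature-name scheme and `<PAD>` boundary token are assumptions.

```python
# Sketch: extract context-word features wi-2 ... wi+2 around position i,
# as would be fed to a maximum-entropy (log-linear) NER classifier.
# Padding token and feature-name format are illustrative choices.

def window_features(tokens, i, size=2):
    """Return the words at offsets -size..+size from position i."""
    feats = {}
    for offset in range(-size, size + 1):
        j = i + offset
        word = tokens[j] if 0 <= j < len(tokens) else "<PAD>"
        feats[f"w[{offset:+d}]"] = word
    return feats

tokens = ["shri", "atal", "bihari", "vajpayee", "ne"]
print(window_features(tokens, 1))
```

Each token position thus yields five word features; a maximum-entropy model then learns a weight per (feature, tag) pair over such feature dictionaries.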
Word Clustering | The value of k (the number of clusters) was varied until the best result was obtained.
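The selection loop this line describes can be sketched as below; the `cluster_and_evaluate` callback is a stand-in (an assumption, not the paper's code) for clustering the vocabulary into k clusters and re-running the recognizer to get a downstream score.

```python
# Sketch: vary k and keep the value that maximizes the downstream
# NER score. cluster_and_evaluate(k) is a placeholder for
# "cluster words into k clusters, retrain, return F-score".

def pick_best_k(candidate_ks, cluster_and_evaluate):
    best_k, best_score = None, float("-inf")
    for k in candidate_ks:
        score = cluster_and_evaluate(k)
        if score > best_score:
            best_k, best_score = k, score
    return best_k, best_score

# Toy stand-in scores, illustrating a peak at k = 100.
scores = {50: 0.61, 100: 0.66, 200: 0.63}
print(pick_best_k(scores, scores.get))
```

This is a plain grid search over k; in practice one would also guard against overfitting the development set when the candidate list is long.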
Discussion | This means that the best configuration for PP-attachment does not always produce the best results for parsing.
Results | The SFU representation produces the best results for Bikel (F-score 0.010 above baseline), while for Charniak the best performance is obtained with word+SF (F-score 0.007 above baseline). |
Results | For both parsers the best results are achieved with SFU, which was also the best configuration for parsing with Bikel. |
Results | Comparing the semantic representations, the best results are achieved with SFU, as we saw in the gold-standard PP-attachment case. |
Conclusions, Summary and Future Work | the achievements of four different standard ML methods, toward the goal of achieving the best results, as opposed to the other systems that each mainly focused on one ML method.
Experiments | Specifically, the AlWC_osWC feature variant achieves the best result, with 87.75% accuracy.
Experiments | Table 2 shows that SVM achieved the best result, with 96.09% accuracy.