Abstract | Finally, we compare our proposal with state-of-the-art estimators (both parametric and nonparametric) on large standard corpora; apart from showing the favorable performance of our estimator, we also see that the classical Good-Turing estimator consistently underestimates the vocabulary size. |
Conclusion | We then compared the performance of the proposed estimator with that of state-of-the-art estimators on large corpora. |
Experiments | In this study we consider state-of-the-art parametric estimators, as surveyed by Baroni and Evert (2005). |
Introduction | When compared with other vocabulary size estimates, we see that our estimator performs at least as well as some of the state-of-the-art estimators. |
Previous Work | A good survey of the state of the art is available in Gandolfi and Sastri (2004). |
Results and Discussion | From Figure 1, we see that our estimator compares quite favorably with the best of the state-of-the-art estimators. |
Results and Discussion | The best state-of-the-art estimator is a parametric one (ZM), while ours is a nonparametric estimator. |
Results and Discussion | Further, it compares very favorably to state-of-the-art estimators (both parametric and nonparametric). |
Abstract | Experiments show that our approach improves the state of the art. |
Conclusions and Future Work | Our empirical results improve the state of the art. |
Introduction | Experiments show that our approach improves the state of the art. |