Index of papers in Proc. ACL 2014 that mention
  • “significantly outperforms”
Li, Zhenghua and Zhang, Min and Chen, Wenliang
Abstract
Experimental results on benchmark data show that our method significantly outperforms the baseline supervised parser and other entire-tree based semi-supervised methods, such as self-training, co-training and tri-training.
Experiments and Analysis
Using unlabeled data with the results of ZPar (“Unlabeled ← Z”) significantly outperforms the baseline GParser by 0.30% (93.15-92.85) on English.
Experiments and Analysis
However, we find that although the parser significantly outperforms the supervised GParser on English, it does not gain significant improvement over co-training with ZPar (“Unlabeled ← Z”) on either English or Chinese.
Experiments and Analysis
… (2012) and Bohnet and Nivre (2012) use joint models for POS tagging and dependency parsing, significantly outperforming their pipeline counterparts.
“significantly outperforms” is mentioned in 4 sentences in this paper.
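The entire-tree semi-supervised methods this paper compares against (self-training, co-training, tri-training) share one loop: parse unlabeled data with one or more base models, keep the confident automatic parses, and retrain. A minimal self-training sketch, with a scikit-learn classifier standing in for the base parser; the data, the 0.9 threshold, and all variable names are illustrative assumptions, not from the paper:

```python
# Minimal self-training loop: the generic "parse, keep confident outputs,
# retrain" recipe. A classifier stands in for the base parser; the data
# and the 0.9 confidence threshold are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
labeled_X = rng.normal(size=(100, 5))
labeled_y = rng.integers(0, 2, size=100)
unlabeled_X = rng.normal(size=(1000, 5))

model = LogisticRegression().fit(labeled_X, labeled_y)
for _ in range(3):                                # a few self-training rounds
    probs = model.predict_proba(unlabeled_X)
    confident = probs.max(axis=1) > 0.9           # keep high-confidence outputs
    pseudo_y = probs.argmax(axis=1)[confident]    # automatic annotations
    X = np.vstack([labeled_X, unlabeled_X[confident]])
    y = np.concatenate([labeled_y, pseudo_y])
    model = LogisticRegression().fit(X, y)        # retrain on gold + pseudo data
```

Co-training (the “Unlabeled ← Z” setting above) differs only in that the pseudo-labels come from a second, different model (ZPar) rather than from the model being retrained.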
Liu, Changsong and She, Lanbo and Fang, Rui and Chai, Joyce Y.
Abstract
Our empirical results have shown the probabilistic labeling approach significantly outperforms a previous graph-matching approach for referential grounding.
Evaluation and Discussion
… significantly outperforms state-space search (S.S.S.) …
Evaluation and Discussion
Although probabilistic labeling significantly outperforms the state-space search, the grounding performance is still rather poor (less than 50%).
Introduction
Our empirical results have shown that the probabilistic labeling approach significantly outperforms the state-space search approach in both grounding accuracy and efficiency.
“significantly outperforms” is mentioned in 4 sentences in this paper.
Wang, William Yang and Hua, Zhenhao
Abstract
In experiments, we show that our model significantly outperforms strong linear and nonlinear discriminative baselines on three datasets under various settings.
Conclusion
Focusing on the three financial crisis-related datasets, the proposed model significantly outperforms the standard linear regression method in statistics and strong discriminative support vector regression baselines.
Introduction
By varying different experimental settings on three datasets concerning different periods of the Great Recession from 2006-2013, we empirically show that our approach significantly outperforms the baselines by a wide margin.
Introduction
Our results significantly outperform standard linear regression and strong SVM baselines.
“significantly outperforms” is mentioned in 4 sentences in this paper.
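For context on the baselines named above, a minimal sketch contrasting ordinary least squares with support vector regression in scikit-learn; the synthetic sine data stands in for the paper's financial datasets and everything here is illustrative:

```python
# Compare a linear regression baseline against an RBF-kernel SVR on
# synthetic data with a nonlinear target. Purely illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for name, model in [("linear regression", LinearRegression()),
                    ("SVR (RBF)", SVR(kernel="rbf", C=1.0))]:
    model.fit(X_tr, y_tr)
    print(name, "test MSE:", mean_squared_error(y_te, model.predict(X_te)))
```

On a nonlinear target like this, the RBF-kernel SVR typically attains a much lower test MSE than the linear fit, which is the shape of the comparison the excerpts above report.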
Bollegala, Danushka and Weir, David and Carroll, John
Abstract
In both tasks, our method significantly outperforms competitive baselines and returns results that are statistically comparable to current state-of-the-art methods, while requiring no task-specific customisations.
Experiments and Results
Except for the DE setting, in which the Proposed method significantly outperforms both SFA and SCL, the performance of the Proposed method is not statistically significantly different from that of SFA or SCL.
Introduction
Without requiring any task-specific customisations, systems based on our distribution prediction method significantly outperform competitive baselines in both tasks.
“significantly outperforms” is mentioned in 3 sentences in this paper.
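Claims of the form “significantly outperforms” versus “not statistically significantly different”, here and throughout this index, typically rest on a paired significance test over per-example scores. A minimal paired bootstrap sketch; the score arrays are invented placeholders, and the papers indexed here do not all state which test they used:

```python
# Paired bootstrap significance test: resample test examples with
# replacement and count how often system B fails to beat system A.
import numpy as np

def paired_bootstrap(scores_a, scores_b, n_samples=10_000, seed=0):
    rng = np.random.default_rng(seed)
    a, b = np.asarray(scores_a), np.asarray(scores_b)
    n = len(a)
    wins = 0
    for _ in range(n_samples):
        idx = rng.integers(0, n, n)          # resample example indices
        if b[idx].mean() > a[idx].mean():    # does B still beat A?
            wins += 1
    return 1.0 - wins / n_samples            # approximate p-value

# Illustrative per-example scores for a baseline and a proposed system.
rng = np.random.default_rng(1)
baseline = rng.normal(0.90, 0.05, size=200)
proposed = baseline + rng.normal(0.01, 0.05, size=200)
print("p ≈", paired_bootstrap(baseline, proposed))
```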
Li, Qi and Ji, Heng
Abstract
Experiments on Automatic Content Extraction (ACE) corpora demonstrate that our joint model significantly outperforms a strong pipelined baseline, which attains better performance than the best-reported end-to-end system.
Conclusions and Future Work
Experiments demonstrated that our approach significantly outperformed pipelined approaches for both tasks and dramatically advanced the state of the art.
Experiments
We can see that our approach significantly outperforms the pipelined approach for both tasks.
“significantly outperforms” is mentioned in 3 sentences in this paper.
Mehdad, Yashar and Carenini, Giuseppe and Ng, Raymond T.
Abstract
Automatic and manual evaluation results over meeting, chat and email conversations show that our approach significantly outperforms baselines and previous extractive models.
Experimental Setup
Results indicate that our system significantly outperforms baselines in overall quality and responsiveness, for both meeting and email datasets.
Introduction
Automatic evaluation on the chat dataset and manual evaluation over the meetings and emails show that our system uniformly and statistically significantly outperforms baseline systems, as well as a state-of-the-art query-based extractive summarization system.
“significantly outperforms” is mentioned in 3 sentences in this paper.
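“Automatic evaluation” for summarization, as in the excerpts above, usually means ROUGE. A minimal sketch assuming Google's rouge_score package (pip install rouge-score), which postdates this paper, with toy strings in place of the meeting/chat/email data:

```python
# Score one system summary against a reference with ROUGE-1/ROUGE-L,
# the usual automatic metrics behind results like those above.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
reference = "the team agreed to ship the remote control redesign next month"
system = "the team will ship the redesigned remote control next month"
for metric, score in scorer.score(reference, system).items():
    print(metric, f"F1={score.fmeasure:.3f}")
```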
Morin, Emmanuel and Hazem, Amir
Experiments and Results
We can see that the Unbalanced approach significantly outperforms the baseline (Balanced).
Experiments and Results
We can also notice that the prediction model applied to the balanced corpus (Balanced + Prediction) slightly outperforms the baseline, while the Unbalanced + Prediction approach significantly outperforms the three other approaches (moreover, the variation observed with the Unbalanced approach is smaller than with the Unbalanced + Prediction approach).
Experiments and Results
As for the previous experiment, we can see that the Unbalanced approach significantly outperforms the Balanced approach.
“significantly outperforms” is mentioned in 3 sentences in this paper.
Yang, Bishan and Cardie, Claire
Experiments
… that PR significantly outperforms all other baselines in both the CR dataset and the MD dataset (average accuracy across domains is reported).
Experiments
In contrast, both PR-lex and PR significantly outperform CRF, which implies that incorporating lexical and discourse constraints as posterior constraints is much more effective.
Experiments
We can see that both PR and PR-lex significantly outperform all other baselines in all domains.
“significantly outperforms” is mentioned in 3 sentences in this paper.
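The “posterior constraints” mentioned above come from the posterior regularization framework (Ganchev et al., 2010): model posteriors are projected onto a constraint set by minimizing KL divergence, which for an expectation constraint yields an exponentiated reweighting governed by a dual variable. A toy single-factor sketch of that projection step; the labels, feature, and bound are invented, and this is not the paper's actual model:

```python
# Toy posterior-regularization step: project a base posterior p onto
# {q : E_q[f] >= b} by solving q ∝ p * exp(lam * f), finding the dual
# variable lam by bisection. All numbers here are illustrative.
import numpy as np

def project_posterior(p, f, b, hi=50.0, iters=60):
    """Smallest-KL q of the form q ∝ p * exp(lam * f) with E_q[f] >= b."""
    def expect(lam):
        q = p * np.exp(lam * f)
        q = q / q.sum()
        return q, q @ f
    q, e = expect(0.0)
    if e >= b:                       # constraint already satisfied
        return q
    lo = 0.0
    for _ in range(iters):           # bisection: E_q[f] increases with lam
        mid = (lo + hi) / 2.0
        _, e = expect(mid)
        if e < b:
            lo = mid
        else:
            hi = mid
    return expect(hi)[0]

# Base CRF-style posterior over (neg, neu, pos) for one sentence, plus a
# lexical constraint "P(pos) >= 0.6" triggered by a positive lexicon hit.
p = np.array([0.5, 0.3, 0.2])
f = np.array([0.0, 0.0, 1.0])        # indicator feature for the "pos" label
print(project_posterior(p, f, b=0.6))
```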