Index of papers in Proc. ACL 2014 that mention
  • “significant improvements”
Lu, Shixiang and Chen, Zhenbiao and Xu, Bo
Abstract
On two Chinese-English tasks, our semi-supervised DAE features obtain statistically significant improvements of 1.34/2.45 (IWSLT) and 0.82/1.52 (NIST) BLEU points over the unsupervised DBN features and the baseline features, respectively.
Conclusions
The results also demonstrate that DNN (DAE and HCDAE) features are complementary to the original features for SMT, and adding them together obtains statistically significant improvements of 3.16 (IWSLT) and 2.06 (NIST) BLEU points over the baseline features.
Experiments and Results
Adding new DNN features as extra features significantly improves translation accuracy (rows 2–17 vs. 1), with the highest increase of 2.45 (IWSLT) and 1.52 (NIST) BLEU points over the baseline features (row 14 vs. 1).
Experiments and Results
Also, adding more input features (X vs. X1) not only significantly improves the performance of DAE feature learning, but also slightly improves the performance of DBN feature learning.
Introduction
To address the first shortcoming, we adapt and extend some simple but effective phrase features as the input features for new DNN feature learning; these features have been shown to yield significant improvements for SMT, such as phrase pair similarity (Zhao et al., 2004), phrase frequency, phrase length (Hopkins and May, 2011), and phrase generative probability (Foster et al., 2010), and they also show further improvement for new phrase feature learning in our experiments.
Introduction
Our semi-supervised DAE features significantly outperform the unsupervised DBN features and the baseline features, and our introduced input phrase features significantly improve the performance of DAE feature learning.
“significant improvements” is mentioned in 6 sentences in this paper.
Duan, Manjuan and White, Michael
Background
To improve word ordering decisions, White & Rajkumar (2012) demonstrated that incorporating a feature into the ranker inspired by Gibson’s (2000) dependency locality theory can deliver statistically significant improvements in automatic evaluation scores, better match the distributional characteristics of sentence orderings, and significantly reduce the number of serious ordering errors (some involving vicious ambiguities) as confirmed by a targeted human evaluation.
Introduction
With the SVM reranker, we obtain a significant improvement in BLEU scores over the perceptron model.
Reranking with SVMs 4.1 Methods
…features and the n-best parse features contributed to achieving a significant improvement compared to the perceptron model.
Reranking with SVMs 4.1 Methods
The complete model, BBS+dep+nbest, achieved a BLEU score of 88.73, significantly improving upon the perceptron model (p < 0.02).
Simple Reranking
However, as shown in Table 2, none of the parsers yielded significant improvements on top of the perceptron model.
“significant improvements” is mentioned in 5 sentences in this paper.
Salameh, Mohammad and Cherry, Colin and Kondrak, Grzegorz
Abstract
We investigate this technique in the context of English-to-Arabic and English-to-Finnish translation, showing significant improvements in translation quality over desegmentation of 1-best decoder outputs.
Conclusion
We have also applied our approach to English-to-Finnish translation, and although segmentation in general does not currently help, we are able to show significant improvements over a 1-best desegmentation baseline.
Introduction
We demonstrate that significant improvements in translation quality can be achieved by training a linear model to re-rank this transformed translation space.
Results
In fact, even with our lattice desegmenter providing a boost, we are unable to see a significant improvement over the unsegmented model.
Results
Nonetheless, the 1000-best and lattice desegmenters both produce significant improvements over the 1-best desegmentation baseline, with Lattice Deseg achieving a 1-point improvement in TER.
“significant improvements” is mentioned in 5 sentences in this paper.
Cui, Lei and Zhang, Dongdong and Liu, Shujie and Chen, Qiming and Li, Mu and Zhou, Ming and Yang, Muyun
Abstract
Experimental results show that our method significantly improves translation accuracy in the NIST Chinese-to-English translation task compared to a state-of-the-art baseline.
Conclusion and Future Work
It is a significant improvement over the state-of-the-art Hiero system, as well as a conventional LDA-based method.
Introduction
Experimental results demonstrate that our model significantly improves translation accuracy.
Related Work
They incorporated the bilingual topic information into language model adaptation and lexicon translation model adaptation, achieving significant improvements in the large-scale evaluation.
“significant improvements” is mentioned in 4 sentences in this paper.
Li, Junhui and Marton, Yuval and Resnik, Philip and Daumé III, Hal
Abstract
Experiments on Chinese-English translation show that the reordering approach can significantly improve a state-of-the-art hierarchical phrase-based translation system.
Conclusion and Future Work
Experiments on Chinese-English translation show that the reordering approach can significantly improve a state-of-the-art hierarchical phrase-based translation system.
Discussion
We clearly see that using gold syntactic reordering types significantly improves the performance (e.g., 34.9 vs. 33.4 on average), and there is still some room for improvement by building a better maximum entropy classifier (e.g., 34.9 vs. 34.3).
“significant improvements” is mentioned in 3 sentences in this paper.
Li, Zhenghua and Zhang, Min and Chen, Wenliang
Experiments and Analysis
Using unlabeled data with the results of Berkeley Parser (“Unlabeled ← B”) significantly improves parsing accuracy by 0.55% (93.40-92.85) on English and 1.06% (83.34-82.28) on Chinese.
Experiments and Analysis
However, we find that although the parser significantly outperforms the supervised GParser on English, it does not gain a significant improvement over co-training with ZPar (“Unlabeled ← Z”) on either English or Chinese.
Introduction
All of the above work leads to significant improvements in parsing accuracy.
“significant improvements” is mentioned in 3 sentences in this paper.
Ng, Jun-Ping and Chen, Yan and Kan, Min-Yen and Li, Zhoujun
Experiments and Results
A statistically significant improvement of 4.1% is obtained with the use of all three features over SWING.
Experiments and Results
…to guide the use of timelines such that significant improvements in R-2 over SWING are obtained.
Introduction
Compared to a competitive baseline, significant improvements of up to 4.1% are obtained.
“significant improvements” is mentioned in 3 sentences in this paper.
Shen, Mo and Liu, Hongxiao and Kawahara, Daisuke and Kurohashi, Sadao
Abstract
Through experiments, we demonstrate that by introducing character-level POS information, the performance of a baseline morphological analyzer can be significantly improved.
Evaluation
The results show that, while the differences between the baseline model and the proposed model in word segmentation accuracies are small, the proposed model achieves a significant improvement in the experiment of joint segmentation.
Introduction
Through experiments, we demonstrate that by introducing character-level POS information, the performance of a baseline morphological analyzer can be significantly improved.
“significant improvements” is mentioned in 3 sentences in this paper.
Tu, Mei and Zhou, Yu and Zong, Chengqing
Abstract
The experimental results show that significant improvements are achieved on various test data, while the translations are more cohesive and smooth.
Conclusion
The experimental results show that significant improvements have been achieved on various test data; meanwhile, the translations are more cohesive and smooth, which together demonstrate the effectiveness of our proposed models.
Related Work
To the best of our knowledge, our work is the first attempt to exploit the source functional relationship to generate the target transitional expressions for grammatical cohesion, and we have successfully incorporated the proposed models into an SMT system with significant improvements in BLEU.
“significant improvements” is mentioned in 3 sentences in this paper.
van Schijndel, Marten and Elsner, Micha
Discussion
Training significantly improves role labelling in the case of object-extractions, which improves the overall accuracy of the model.
Evaluation
This deficit, slight though significant in Eve, is counterbalanced by a very substantial and significant improvement in object-extraction labelling accuracy.
Evaluation
Similarly, training confers a large and significant improvement for role assignment in wh-relative constructions, but it yields less of an improvement for that-relative constructions.
“significant improvements” is mentioned in 3 sentences in this paper.