Abstract | Our results show that RL significantly outperforms Supervised Learning when interacting in simulation as well as for interactions with real users.
Conclusion | Our results show that RL significantly outperforms SL in simulation as well as in interactions with real users.
Simulated Learning Environment | For learning presentation modality, both classifiers significantly outperform the baseline. |
Simulated Learning Environment | The results show that simulation-based RL with an environment bootstrapped from WOZ data allows learning of robust strategies which significantly outperform the strategies contained in the initial data set. |
Abstract | Experimental results on the NIST MT-2005 Chinese-English translation task show that our method statistically significantly outperforms the baseline systems. |
Experiments | 1) Our tree sequence-based model significantly outperforms (p < 0.01) previous phrase-based and linguistically syntax-based methods. |
Introduction | Experiment results on the NIST MT-2005 Chinese-English translation task show that our method significantly outperforms Moses (Koehn et al., 2007), a state-of-the-art phrase-based SMT system, and other linguistically syntax-based methods, such as SCFG-based and STSG-based methods (Zhang et al., 2007). |
Abstract | The evaluation results show that: (1) The pivot approach is effective in extracting paraphrase patterns, which significantly outperforms the conventional method DIRT.
Conclusion | In addition, the log-linear model with the proposed feature functions significantly outperforms the conventional models. |
Introduction | Our experiments show that the pivot approach significantly outperforms conventional methods. |