Index of papers in Proc. ACL 2014 that mention
  • baseline system
Saluja, Avneesh and Hassan, Hany and Toutanova, Kristina and Quirk, Chris
Evaluation
Further examination of the differences between the two systems revealed that most of the improvements are due to better bigrams and trigrams, as indicated by the breakdown of BLEU precision per n-gram, and that the approach primarily leverages higher-quality candidates generated by the baseline system.
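The per-n-gram precision breakdown mentioned above refers to the clipped n-gram precision that BLEU computes separately for each order. The sketch below is a minimal illustrative implementation of that per-order component only (not the paper's code); it omits BLEU's brevity penalty and the geometric mean over orders.

```python
from collections import Counter

def ngram_precision(candidate, reference, n):
    """Clipped n-gram precision: the per-order component of BLEU."""
    cand_ngrams = Counter(tuple(candidate[i:i + n])
                          for i in range(len(candidate) - n + 1))
    ref_ngrams = Counter(tuple(reference[i:i + n])
                         for i in range(len(reference) - n + 1))
    # Clip each candidate n-gram count by its count in the reference.
    matched = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
    total = sum(cand_ngrams.values())
    return matched / total if total else 0.0

cand = "the cat sat on the mat".split()
ref = "the cat is on the mat".split()
print([round(ngram_precision(cand, ref, n), 2) for n in (1, 2, 3)])
# → [0.83, 0.6, 0.25]
```

Comparing these per-order values between two systems is exactly the kind of diagnostic the excerpt describes: a gain concentrated at n = 2 and n = 3 points to better local phrase choices.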
Evaluation
We experimented with two extreme setups that differed in the data assumed parallel, from which we built our baseline system, and the data treated as monolingual, from which we built our source and target graphs.
Evaluation
In the second setup, we train a baseline system using the data in Table 2, augmented with the noisy parallel text:
Generation & Propagation
Instead, by intelligently expanding the target space using linguistic information such as morphology (Toutanova et al., 2008; Chahuneau et al., 2013), or relying on the baseline system to generate candidates similar to self-training (McClosky et al., 2006), we can tractably propose novel translation candidates (white nodes in Fig.
Generation & Propagation
To generate new translation candidates using the baseline system, we decode each unlabeled source bigram to generate its m-best translations.
Generation & Propagation
The generated candidates for the unlabeled phrase — the ones from the baseline system’s
baseline system is mentioned in 13 sentences in this paper.
Topics mentioned in this paper:
Jia, Zhongye and Zhao, Hai
Experiments
4.3 Baseline System without Typo Correction
Experiments
First, we build a baseline system without typo correction, which is a pipeline of pinyin syllable segmentation and PTC conversion.
Experiments
The baseline system takes a pinyin input sequence, segments it into syllables, and then converts it into a Chinese character sequence.
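A minimal sketch of such a two-stage pipeline is below. The toy syllable inventory, the one-character-per-syllable lexicon, and the greedy longest-match strategy are all illustrative assumptions, not the paper's actual components (a real system would use a full syllable table and statistical conversion).

```python
# Toy syllable inventory and conversion table (illustrative only).
SYLLABLES = {"ni", "hao", "shi", "jie"}
LEXICON = {"ni": "你", "hao": "好", "shi": "世", "jie": "界"}

def segment(pinyin):
    """Stage 1: greedy longest-match segmentation of an unspaced pinyin string."""
    out, i = [], 0
    while i < len(pinyin):
        for j in range(len(pinyin), i, -1):   # try the longest span first
            if pinyin[i:j] in SYLLABLES:
                out.append(pinyin[i:j])
                i = j
                break
        else:
            raise ValueError(f"no known syllable starting at position {i}")
    return out

def convert(pinyin):
    """Stage 2: map each syllable to a character via the toy lexicon."""
    return "".join(LEXICON[s] for s in segment(pinyin))

print(segment("nihao"))   # → ['ni', 'hao']
print(convert("nihao"))   # → 你好
```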
baseline system is mentioned in 8 sentences in this paper.
Topics mentioned in this paper:
Liu, Le and Hong, Yu and Liu, Hao and Wang, Xing and Yao, Jianmin
Experiments
4.4 Baseline Systems
Experiments
As described above, by using the NiuTrans toolkit, we have built two baseline systems to fulfill the “863” SLT task in our experiments.
Experiments
These two baseline systems are equipped with the same language model which is trained on large-scale monolingual target language corpus.
baseline system is mentioned in 6 sentences in this paper.
Topics mentioned in this paper:
Ma, Xuezhe and Xia, Fei
Experiments
Table 3 and Table 4 show the parsing results of our approach, together with the results of the baseline systems and the oracle, on version 1.0 and version 2.0 of the Google Universal Treebanks, respectively.
Experiments
Our approaches significantly outperform all the baseline systems across all seven target languages.
Experiments
to those five baseline systems and the oracle (OR).
baseline system is mentioned in 5 sentences in this paper.
Topics mentioned in this paper:
Salloum, Wael and Elfardy, Heba and Alamir-Salloum, Linda and Habash, Nizar and Diab, Mona
MT System Selection
For baseline system selection, we use the classification decision of Elfardy and Diab (2013)’s sentence-level dialect identification system to decide on the target MT system.
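The selection step described above routes each sentence to an MT system based on a sentence-level dialect label. The sketch below is a hypothetical stand-in: `classify_dialect` is a stub keyed on a toy marker word, and the two systems are placeholder functions, not the paper's actual classifier or MT engines.

```python
def classify_dialect(sentence):
    """Stub dialect identifier: flags dialectal input by a toy marker word."""
    return "dialect" if "mish" in sentence.split() else "msa"

# Placeholder MT systems, one per dialect label.
SYSTEMS = {
    "msa": lambda s: f"[MSA-system] {s}",
    "dialect": lambda s: f"[DA-system] {s}",
}

def translate(sentence):
    """Route the sentence to the MT system chosen by the classifier."""
    return SYSTEMS[classify_dialect(sentence)](sentence)

print(translate("ana mish fahem"))   # routed to the dialect system
print(translate("hal fahimta"))      # routed to the MSA system
```

The oracle combination mentioned later in the excerpts corresponds to replacing `classify_dialect` with a perfect per-sentence choice of system.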
MT System Selection
baseline systems.
MT System Selection
The first part of Table 2 repeats the best baseline system and the four-system oracle combination from Table 1 for convenience.
Machine Translation Experiments
In this section, we present our MT experimental setup and the four baseline systems we built, and we evaluate their performance and the potential of their combination.
baseline system is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Xiao, Tong and Zhu, Jingbo and Zhang, Chunliang
A Skeleton-based Approach to MT 2.1 Skeleton Identification
For language modeling, lm is the standard n-gram language model adopted in the baseline system.
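A standard n-gram language model scores a sentence by the product of conditional n-gram probabilities. The bigram sketch below uses add-one smoothing on a two-sentence toy corpus; this is a deliberate simplification (real SMT systems use larger orders and smoothing such as Kneser-Ney), shown only to illustrate the lm feature.

```python
import math
from collections import Counter

# Toy training corpus with sentence-boundary markers.
corpus = [["<s>", "the", "cat", "sat", "</s>"],
          ["<s>", "the", "dog", "sat", "</s>"]]

unigrams = Counter(w for s in corpus for w in s)
bigrams = Counter((s[i], s[i + 1]) for s in corpus for i in range(len(s) - 1))
vocab = len(unigrams)

def logprob(sentence):
    """Sum of add-one-smoothed bigram log-probabilities."""
    return sum(math.log((bigrams[(sentence[i], sentence[i + 1])] + 1) /
                        (unigrams[sentence[i]] + vocab))
               for i in range(len(sentence) - 1))

# A sentence built from seen bigrams scores higher than a shuffled one.
print(logprob(["<s>", "the", "cat", "sat", "</s>"]))
print(logprob(["<s>", "cat", "the", "sat", "</s>"]))
```

In a decoder, this log-probability is one feature among many; the excerpts above describe adding skeleton-based translation and language models alongside it.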
Evaluation
Row s-space of Table 1 shows the BLEU and TER results of restricting the baseline system to the space of skeleton-consistent derivations, i.e., we remove both the skeleton-based translation model and language model from the SBMT system.
Evaluation
We see that the limited search space is a little harmful to the baseline system.
Evaluation
Further, we regarded skeleton-consistent derivations as an indicator feature and introduced it into the baseline system .
baseline system is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Liu, Shujie and Yang, Nan and Li, Mu and Zhou, Ming
Experiments and Results
We compare our phrase pair embedding methods and our proposed RZNN with the baseline system in Table 2.
Experiments and Results
We can see that our RZNN models with WEPPE and TCBPPE are both better than the baseline system.
Introduction
We conduct experiments on a Chinese-to-English translation task to test our proposed methods, and we obtain an improvement of about 1.5 BLEU points over a state-of-the-art baseline system.
baseline system is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Qian, Longhua and Hui, Haotian and Hu, Ya'nan and Zhou, Guodong and Zhu, Qiaoming
Abstract
Section 2 reviews the previous work on relation extraction while Section 3 describes our baseline systems.
Abstract
3 Baseline Systems
Abstract
In particular, SL-MO is used as the baseline system against which deficiency scores for other methods are computed.
baseline system is mentioned in 3 sentences in this paper.
Topics mentioned in this paper: