Index of papers in Proc. ACL 2009 that mention
  • model training
Li, Mu and Duan, Nan and Zhang, Dongdong and Li, Chi-Ho and Zhou, Ming
Collaborative Decoding
Model training.
Collaborative Decoding
2.5 Model Training
Collaborative Decoding
Model training for co-decoding
Experiments
The language model used for all models (including the decoding models and the system combination models described in Section 2.6) is a 5-gram model trained on the English part of the bilingual data and the Xinhua portion of the LDC English Gigaword corpus version 3.
Experiments
We parsed the language model training data with the Berkeley parser, and then trained a dependency language model on the parsing output.
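These two excerpts name the pipeline (a 5-gram language model over the English side of the bitext plus the Xinhua Gigaword, and a dependency language model trained on the parser output) without implementation detail. As a rough illustration of the dependency-LM step only, here is a minimal Python sketch that estimates head-to-modifier relative frequencies from parsed sentences; the input format, the <ROOT> token, and the unsmoothed estimate are assumptions for illustration, not the authors' implementation.

from collections import defaultdict

def train_dependency_lm(parsed_sentences):
    """Estimate a simple head -> modifier relative-frequency model.

    Each sentence is assumed to be a list of (word, head_index) pairs with
    head_index == -1 for the root -- a stand-in for whatever format the
    parser output was converted to, not the paper's actual representation.
    """
    pair_counts = defaultdict(int)
    head_counts = defaultdict(int)
    for sentence in parsed_sentences:
        words = [word for word, _ in sentence]
        for word, head_idx in sentence:
            head = "<ROOT>" if head_idx < 0 else words[head_idx]
            pair_counts[(head, word)] += 1
            head_counts[head] += 1
    # Relative-frequency estimate of P(modifier | head); a real system would smooth.
    return {pair: count / head_counts[pair[0]] for pair, count in pair_counts.items()}

# Toy usage: "the cat sleeps", with "sleeps" as the root and "cat" heading "the".
toy = [[("the", 1), ("cat", 2), ("sleeps", -1)]]
print(train_dependency_lm(toy)[("cat", "the")])  # 1.0 in this toy corpus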
model training is mentioned in 5 sentences in this paper.
Topics mentioned in this paper:
Wu, Hua and Wang, Haifeng
Conclusion
We used about 70k sentence pairs for CE model training, while Wang et al.
Conclusion
(2008) used about 100k sentence pairs, a CE translation dictionary, and more monolingual corpora for model training.
Experiments
Table 2 describes the data used for model training in this paper, including the BTEC (Basic Travel Expression Corpus) Chinese-English (CE) corpus and the BTEC English-Spanish (ES) corpus provided by the IWSLT 2008 organizers, the HIT Olympic CE corpus (2004-863-008), and the Europarl ES corpus.
Experiments
Here, we used the synthetic CE Olympic corpus to train a model, which was interpolated with the CE model trained with both the BTEC CE1 corpus and the synthetic BTEC corpus to obtain an interpolated CE translation model.
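The interpolation mentioned in these excerpts is, in generic form, a weighted combination of two translation tables. The Python sketch below shows such a linear interpolation over phrase pairs; the dictionary representation and the weight are hypothetical stand-ins for illustration and are not taken from the paper.

def interpolate_translation_models(model_a, model_b, weight_a=0.5):
    """Linearly interpolate two phrase translation tables.

    Each model maps (source_phrase, target_phrase) -> probability; weight_a is
    an arbitrary illustration value, not a weight reported in the paper.
    """
    combined = {}
    for key in set(model_a) | set(model_b):
        combined[key] = weight_a * model_a.get(key, 0.0) + (1.0 - weight_a) * model_b.get(key, 0.0)
    return combined

# Toy usage with two tiny phrase tables.
synthetic_model = {("你好", "hello"): 0.7, ("你好", "hi"): 0.3}
btec_model = {("你好", "hello"): 0.9, ("你好", "hi"): 0.1}
print(interpolate_translation_models(synthetic_model, btec_model, weight_a=0.4))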
model training is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Jiang, Wenbin and Huang, Liang and Liu, Qun
Experiments
In the subtable below, JST F1 is also undefined, since the model trained on PD gives a POS set different from that of CTB.
Experiments
We also see that for both segmentation and Joint S&T, the performance sharply declines when a model trained on PD is tested on CTB (row 2 in each subtable).
Experiments
This obviously falls behind those of the models trained on CTB itself (row 3 in each subtable), about 97% F1, which are used as the baselines for the following annotation adaptation experiments.
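The roughly 97% F1 figures cited here are standard span-based scores. For reference only, the following Python sketch computes word-segmentation F1 over character spans; it is a generic illustration of the metric, not the authors' evaluation script.

def segmentation_f1(gold_sentences, predicted_sentences):
    """Span-based word-segmentation F1 (generic illustration).

    Both arguments are lists of segmented sentences, each a list of words over
    the same underlying character sequence.
    """
    def spans(words):
        out, start = set(), 0
        for word in words:
            out.add((start, start + len(word)))
            start += len(word)
        return out

    tp = fp = fn = 0
    for gold, pred in zip(gold_sentences, predicted_sentences):
        gold_spans, pred_spans = spans(gold), spans(pred)
        tp += len(gold_spans & pred_spans)
        fp += len(pred_spans - gold_spans)
        fn += len(gold_spans - pred_spans)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Toy usage: one sentence, gold segmentation vs. an under-segmented prediction.
print(segmentation_f1([["他", "喜欢", "书"]], [["他", "喜欢书"]]))  # 0.4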
model training is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Niu, Zheng-Yu and Wang, Haifeng and Wu, Hua
Experiments of Parsing
Models  Training data  LR (%)  LP (%)  F (%)
GP      CTB            79.9    82.2    81.0
RP      CTB            82.0    84.6    83.3
Experiments of Parsing
All the sentences (table header fragment): Models, Training data, LR (%), LP (%), F (%)
Experiments of Parsing
Table header fragment: Models, Training data, LR (%), LP (%), F (%)
model training is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Pervouchine, Vladimir and Li, Haizhou and Lin, Bo
Experiments
Figure 6: Mean reciprocal rank on the Xinhua test set vs. alignment entropy and F-score for models trained with different affinity alignments.
Experiments
Figure 7: Mean reciprocal rank on the Xinhua test set vs. alignment entropy and F-score for models trained with different phonological alignments.
Related Work
Although the direct orthographic mapping approach advocates a direct transfer of graphemes at run-time, we still need to establish the grapheme correspondence at the model training stage, when phoneme-level alignment can help.
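Establishing grapheme correspondences from aligned training data can be illustrated, in simplified form, by relative-frequency counts over alignment links. The sketch below assumes pre-aligned (source grapheme group, target grapheme) links; this toy representation is a hypothetical simplification for illustration, not the alignment models actually compared in the paper.

from collections import Counter

def grapheme_correspondence_table(aligned_pairs):
    """Estimate P(target grapheme | source grapheme group) from aligned pairs.

    aligned_pairs is a list of alignments, each a list of
    (source_grapheme_group, target_grapheme) links.
    """
    link_counts = Counter()
    source_counts = Counter()
    for alignment in aligned_pairs:
        for source, target in alignment:
            link_counts[(source, target)] += 1
            source_counts[source] += 1
    return {link: count / source_counts[link[0]] for link, count in link_counts.items()}

# Toy usage: two aligned transliteration pairs.
toy_alignments = [
    [("hai", "海"), ("zhou", "州")],
    [("hai", "海"), ("feng", "峰")],
]
print(grapheme_correspondence_table(toy_alignments)[("hai", "海")])  # 1.0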
model training is mentioned in 3 sentences in this paper.
Topics mentioned in this paper: