Index of papers in Proc. ACL 2011 that mention
  • model trained
Cai, Peng and Gao, Wei and Zhou, Aoying and Wong, Kam-Fai
Evaluation
Adaptation takes place when ranking models trained on the domains for which they were originally defined are used to rank documents in other domains.
Introduction
This motivated the popular domain adaptation solution based on instance weighting, which assigns larger weights to those transferable instances so that the model trained on the source domain can adapt more effectively to the target domain (Jiang and Zhai, 2007).
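As a minimal illustration of the instance-weighting idea referenced here (a toy cosine-similarity weighting with a scikit-learn classifier, not the scheme of Cai et al. or of Jiang and Zhai, 2007; the helper names are hypothetical):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def transferability_weights(X_src, X_tgt):
        # Toy heuristic: cosine similarity of each source example to the
        # target-domain centroid; more target-like examples get larger weights.
        centroid = X_tgt.mean(axis=0)
        sims = X_src @ centroid / (
            np.linalg.norm(X_src, axis=1) * np.linalg.norm(centroid) + 1e-12
        )
        return np.clip(sims, 0.0, None) + 1e-3  # keep every weight positive

    def train_weighted(X_src, y_src, X_tgt):
        # Fit the source-domain model with per-instance weights so that the
        # more transferable instances dominate the training objective.
        weights = transferability_weights(X_src, X_tgt)
        model = LogisticRegression(max_iter=1000)
        model.fit(X_src, y_src, sample_weight=weights)
        return model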
Related Work
In (Geng et al., 2009; Chen et al., 2008b), the parameters of the ranking model trained on the source domain were adjusted with the small set of labeled data in the target domain.
model trained is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Chen, David and Dolan, William
Experiments
Overall, all the trained models produce reasonable paraphrase systems, even the model trained on just 28K single parallel sentences.
Experiments
Examples of the outputs produced by the models trained on single parallel sentences and on all parallel sentences are shown in Table 2.
Experiments
We randomly selected 200 source sentences and generated 2 paraphrases for each, representing the two extremes: one paraphrase produced by the model trained with single parallel sentences, and the other by the model trained with all parallel sentences.
model trained is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Khapra, Mitesh M. and Joshi, Salil and Chatterjee, Arindam and Bhattacharyya, Pushpak
Abstract
We then use bilingual bootstrapping, wherein a model trained using the seed annotated data of L1 is used to annotate the untagged data of L2 and vice versa, using parameter projection.
Bilingual Bootstrapping
repeat: θ1 ← model trained using LD1; θ2 ← model trained using LD2
Bilingual Bootstrapping
repeat: θ1 ← model trained using LD1; θ2 ← model trained using LD2; for all u1 ∈ UD1 do: s ← sense assigned by θ1 to u1; if confidence(s) > ε then LD1 := LD1 + u1; UD1 := UD1 − u1; end if; end for
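A schematic Python rendering of the bootstrapping loop excerpted above; train_model, predict_sense, and the threshold value are placeholders, and the cross-lingual parameter projection described in the abstract is not shown:

    EPSILON = 0.6  # assumed confidence threshold, standing in for the paper's ε

    def bootstrap(LD1, UD1, LD2, UD2, train_model, predict_sense, max_iters=10):
        # predict_sense(theta, u) is assumed to return (sense, confidence).
        for _ in range(max_iters):
            theta1 = train_model(LD1)   # model trained using LD1
            theta2 = train_model(LD2)   # model trained using LD2
            moved = False
            for u1 in list(UD1):
                sense, conf = predict_sense(theta1, u1)
                if conf > EPSILON:      # confident tag: move u1 from UD1 into LD1
                    LD1.append((u1, sense))
                    UD1.remove(u1)
                    moved = True
            for u2 in list(UD2):
                sense, conf = predict_sense(theta2, u2)
                if conf > EPSILON:      # and symmetrically for the second language
                    LD2.append((u2, sense))
                    UD2.remove(u2)
                    moved = True
            if not moved:               # stop once nothing crosses the threshold
                break
        return LD1, LD2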
model trained is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Lu, Bin and Tan, Chenhao and Cardie, Claire and Tsou, Benjamin K.
A Joint Model with Unlabeled Parallel Text
When λ is 0, the algorithm ignores the unlabeled data and degenerates to two MaxEnt models trained on only the labeled data.
A Joint Model with Unlabeled Parallel Text
Train two initial monolingual models: train and initialize θ1(0) and θ2(0) on the labeled data.
Results and Analysis
When λ is set to 0, the joint model degenerates to two MaxEnt models trained with only the labeled data.
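To make the degenerate case concrete, a toy sketch of such an objective (the squared-difference agreement term over parallel sentences is an invented stand-in for the paper's coupling term, not the authors' model):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def maxent_loglik(theta, X, y):
        # Binary MaxEnt (logistic regression) log-likelihood on labeled data.
        p = sigmoid(X @ theta)
        return np.sum(y * np.log(p + 1e-12) + (1.0 - y) * np.log(1.0 - p + 1e-12))

    def joint_objective(theta1, theta2, X1, y1, X2, y2, U1, U2, lam):
        # Two monolingual MaxEnt terms plus an agreement term on unlabeled
        # parallel sentences; with lam = 0 the agreement term vanishes and
        # the objective splits into two independent monolingual models.
        obj = maxent_loglik(theta1, X1, y1) + maxent_loglik(theta2, X2, y2)
        if lam > 0.0:
            p1, p2 = sigmoid(U1 @ theta1), sigmoid(U2 @ theta2)
            obj -= lam * np.sum((p1 - p2) ** 2)
        return obj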
model trained is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Tan, Ming and Zhou, Wenli and Zheng, Lei and Wang, Shaojun
Experimental results
For the composite 5-gram/PLSA model trained on the 1.3-billion-token corpus, 400 cores have to be used to keep the top 5 most likely topics.
Experimental results
…gram/PLSA model trained on the 44M-token corpus, the computation time increases drastically with less than 5% perplexity improvement.
Experimental results
Its decoder uses a trigram language model trained with modified Kneser-Ney smoothing (Kneser and Ney, 1995) on a 200-million-token corpus.
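For reference, the interpolated Kneser-Ney trigram probability in its single-discount form (the modified variant replaces the single discount D with count-dependent discounts D_1, D_2, D_{3+}, but the recursion has the same shape):

    P_{\mathrm{KN}}(w_i \mid w_{i-2}, w_{i-1})
      = \frac{\max\{c(w_{i-2} w_{i-1} w_i) - D,\; 0\}}{c(w_{i-2} w_{i-1})}
      + \lambda(w_{i-2} w_{i-1})\, P_{\mathrm{KN}}(w_i \mid w_{i-1}),
    \qquad
    \lambda(w_{i-2} w_{i-1}) = \frac{D \cdot N_{1+}(w_{i-2} w_{i-1}\,\bullet)}{c(w_{i-2} w_{i-1})}

where N_{1+}(w_{i-2} w_{i-1} •) is the number of distinct words observed after that bigram, and the lower-order distributions are estimated from continuation counts rather than raw counts.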
model trained is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Zarriess, Sina and Cahill, Aoife and Kuhn, Jonas
Abstract
We show that with an appropriately underspecified input, a linguistically informed realisation model trained to regenerate strings from the underlying semantic representation achieves 91.5% accuracy (over a baseline of 82.5%) in the prediction of the original voice.
Experiments
In Table 4, we report the performance of ranking models trained on the different feature subsets introduced in Section 4.
Experiments
The union of the features corresponds to the model trained on SEMh in Experiment 1.
model trained is mentioned in 3 sentences in this paper.
Topics mentioned in this paper: