Index of papers in Proc. ACL 2013 that mention
  • proposed models
Tamura, Akihiro and Watanabe, Taro and Sumita, Eiichiro and Takamura, Hiroya and Okumura, Manabu
Bilingual Infinite Tree Model
Specifically, the proposed model introduces bilingual observations by embedding the aligned target words in the source-side dependency trees.
Bilingual Infinite Tree Model
Note that POSs of target words are assigned by a POS tagger in the target language and are not inferred in the proposed model.
Discussion
Table 2 shows the number of the IPA POS tags used in the experiments and the POS tags induced by the proposed models.
Discussion
These examples show that the proposed models can disambiguate POS tags that have different functions in English, whereas the IPA POS tagset treats them jointly.
Experiment
We tested our proposed models under the NTCIR-9 Japanese-to-English patent translation task (Goto et al., 2011), consisting of approximately 3.2 million bilingual sentences.
Experiment
The results show that the proposed models can generate more favorable POS tagsets for SMT than an existing POS tagset.
Related Work
In the following, we overview the infinite tree model, which is the basis of our proposed model.
Related Work
model (Finkel et al., 2007), where children are dependent only on their parents, used in our proposed model.
"proposed models" is mentioned in 8 sentences in this paper.
Wang, Kun and Zong, Chengqing and Su, Keh-Yih
Abstract
Unlike previous multistage pipeline approaches, which directly merge the TM result into the final output, the proposed models refer to the corresponding TM information associated with each phrase during SMT decoding.
Abstract
Moreover, the proposed models also significantly outperform previous approaches.
Conclusion and Future Work
In addition, all possible TM target phrases are kept, and the proposed models select the best one during decoding by referring to SMT information.
Experiments
To estimate the probabilities of the proposed models, the corresponding phrase segmentations for bilingual sentences are required.
Experiments
In order to compare our proposed models with previous work, we re-implement two XML-Markup approaches, (Koehn and Senellart, 2010) and (Ma et al., 2011), which are denoted as Koehn-10 and Ma-11, respectively.
Experiments
More importantly, the proposed models achieve a much better TER score than the TM system does at interval [0.9, 1.0), whereas Koehn-10 does not even exceed the TM system at this interval.
Introduction
Furthermore, the proposed models significantly outperform previous pipeline approaches.
"proposed models" is mentioned in 8 sentences in this paper.
Feng, Minwei and Peter, Jan-Thorsten and Ney, Hermann
Conclusion
By adding an unaligned word tag, the unaligned word phenomenon is automatically implanted in the proposed model.
Conclusion
We also show that the proposed model is able to improve a very strong baseline system.
Experiments
For the proposed model, significance testing results on both BLEU and TER are reported (B2 and B3 compared to B1, T2 and T3 compared to T1).
Experiments
Our proposed model ranks second.
Introduction
Section 3 describes the proposed model.
"proposed models" is mentioned in 5 sentences in this paper.
Zeng, Xiaodong and Wong, Derek F. and Chao, Lidia S. and Trancoso, Isabel
Abstract
Empirical results on Chinese Treebank (CTB-7) and Microsoft Research (MSR) corpora reveal that the proposed model can yield better results than the supervised baselines and other competitive semi-supervised CRFs in this task.
Introduction
Experiments on the data from the Chinese Treebank (CTB-7) and Microsoft Research (MSR) show that the proposed model results in significant improvement over other comparative candidates in terms of F-score and out-of-vocabulary (OOV) recall.
Method
5.2 Baseline and Proposed Models
Method
The proposed model will also be compared with the semi-supervised pipeline S&T model described in (Wang et al., 2011).
"proposed models" is mentioned in 4 sentences in this paper.
Dasgupta, Tirthankar
The Proposed Approaches 3.1 The psycholinguistic experiments
However, the proposed model still fails to predict processing of around 32% of words.
The Proposed Approaches 3.1 The psycholinguistic experiments
The evaluation of the proposed model returns an accuracy of 76%, which is 8% better than the preceding models.
The Proposed Approaches 3.1 The psycholinguistic experiments
We believe much more rigorous experiments need to be performed in order to validate our proposed models.
"proposed models" is mentioned in 3 sentences in this paper.
Goto, Isao and Utiyama, Masao and Sumita, Eiichiro and Tamura, Akihiro and Kurohashi, Sadao
Experiment
4.2 Training for the Proposed Models
Introduction
The proposed models are the pair model and the sequence model.
Proposed Method
Then, we describe two proposed models: the pair model and the sequence model, the latter being the further improved model.
"proposed models" is mentioned in 3 sentences in this paper.
Hagiwara, Masato and Sekine, Satoshi
Abstract
The experiments on Japanese and Chinese WS have shown that the proposed models achieve significant improvement over the state of the art, reducing errors by 16% in Japanese.
Experiments
Table 3 shows the results of the proposed models and major open-source Japanese WS systems, namely, MeCab 0.98 (Kudo et al., 2004) and JUMAN 7.0 (Kurohashi and Nagao, 1994),
Experiments
Here, MeCab+UniDic achieved slightly better Katakana WS than the proposed models.
"proposed models" is mentioned in 3 sentences in this paper.
Yang, Nan and Liu, Shujie and Li, Mu and Zhou, Ming and Yu, Nenghai
Experiments and Results
We train our proposed model from the results of the classic HMM and IBM Model 4 separately.
Experiments and Results
It can be seen from Table 1 that the proposed model consistently outperforms its corresponding baseline, whether it is trained from alignments of the classic HMM or IBM Model 4.
Experiments and Results
The second and fourth rows show results of the proposed model trained from HMM and IBM4, respectively.
"proposed models" is mentioned in 3 sentences in this paper.