Index of papers in Proc. ACL 2012 that mention
  • proposed models
Mukherjee, Arjun and Liu, Bing
Abstract
Our experimental results show that the two proposed models are indeed able to perform the task effectively.
Experiments
This section evaluates the proposed models.
Experiments
Even with this noisy automatically-labeled data, the proposed models can produce good results.
Experiments
However, it is important to note that the proposed models are flexible and do not need to have seeds for every aspect/topic.
Introduction
The proposed models are evaluated using a large number of hotel reviews.
Introduction
Experimental results show that the proposed models outperform the two baselines by large margins.
Related Work
We will show in Section 4 that the proposed models outperform it by a large margin.
"proposed models" is mentioned in 7 sentences in this paper.
Meng, Xinfan and Wei, Furu and Liu, Xiaohua and Zhou, Ming and Xu, Ge and Wang, Houfeng
Abstract
By fitting parameters to maximize the likelihood of the bilingual parallel data, the proposed model learns previously unseen sentiment words from the large bilingual parallel data and improves vocabulary coverage significantly.
Conclusion and Future Work
First, the proposed model can learn previously unseen sentiment words from large unlabeled data, which are not covered by the limited vocabulary in machine translation of the labeled data.
Experiment
Table 2 shows the accuracy of the baseline systems as well as the proposed model (CLMM).
Introduction
By “synchronizing” the generation of words in the source language and the target language in a parallel corpus, the proposed model can (1) improve vocabulary coverage by learning sentiment words from the unlabeled parallel corpus; (2) transfer polarity label information between the source language and target language using a parallel corpus.
Introduction
This paper makes two contributions: (1) we propose a model to effectively leverage large bilingual parallel data for improving vocabulary coverage; and (2) the proposed model is applicable in both settings of cross-lingual sentiment classification, irrespective of the availability of labeled data in the target language.
"proposed models" is mentioned in 5 sentences in this paper.
Hatori, Jun and Matsuzaki, Takuya and Miyao, Yusuke and Tsujii, Jun'ichi
Model
4.2 Baseline and Proposed Models
Model
We use the following baseline and proposed models for evaluation.
Model
Figure 2 shows the F1 scores of the proposed model (SegTagDep) on CTB-Sc-l with respect to the training epoch and different parsing feature weights, where “Seg”, “Tag”, and “Dep” respectively denote the F1 scores of word segmentation, POS tagging, and dependency parsing.
"proposed models" is mentioned in 4 sentences in this paper.