Index of papers in Proc. ACL 2012 that mention
  • cross-lingual
Meng, Xinfan and Wei, Furu and Liu, Xiaohua and Zhou, Ming and Xu, Ge and Wang, Houfeng
Abstract
Such a disproportion has aroused interest in cross-lingual sentiment classification, which aims to conduct sentiment classification in the target language (e.g., Chinese).
Abstract
In this paper, we propose a generative cross-lingual mixture model (CLMM) to leverage unlabeled bilingual parallel data.
Cross-Lingual Mixture Model for Sentiment Classification
In this section we present the cross-lingual mixture model (CLMM) for sentiment classification.
Cross-Lingual Mixture Model for Sentiment Classification
We first formalize the task of cross-lingual sentiment classification.
Introduction
In this paper we propose a cross-lingual mixture model (CLMM) for cross-lingual sentiment classification.
Introduction
Moreover, CLMM improves the accuracy of cross-lingual sentiment classification consistently, regardless of whether labeled data in the target language are present.
Introduction
This paper makes two contributions: (1) we propose a model to effectively leverage large bilingual parallel data for improving vocabulary coverage; and (2) the proposed model is applicable in both settings of cross-lingual sentiment classification, irrespective of the availability of labeled data in the target language.
Related Work
In this section, we present a brief review of the related work on monolingual sentiment classification and cross-lingual sentiment classification.
Related Work
2.2 Cross-Lingual Sentiment Classification
Related Work
Cross-lingual sentiment classification, which aims to conduct sentiment classification in the target language (e.g., Chinese) with labeled data in the source language
cross-lingual is mentioned in 16 sentences in this paper.
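The excerpts above describe CLMM only at a high level. As a rough illustration of the underlying idea, here is a minimal Python sketch, assuming a naive-Bayes-style EM, a smoothing constant alpha, and plain word-list inputs, none of which come from the paper itself: unlabeled parallel pairs share one latent sentiment class across both sides, so class-conditional word statistics learned from labeled source-language data spread to target-language vocabulary.

    # Hypothetical sketch of the cross-lingual mixture idea, NOT the authors'
    # CLMM implementation: each sentiment class generates words in BOTH
    # languages; unlabeled parallel pairs are soft-assigned to classes via EM.
    from collections import defaultdict
    import math

    CLASSES = ("pos", "neg")

    def train_clmm_sketch(labeled_src, parallel_pairs, iters=10, alpha=0.1):
        """labeled_src: [(words, label)]; parallel_pairs: [(src_words, tgt_words)]."""
        counts = {c: defaultdict(float) for c in CLASSES}
        for words, label in labeled_src:
            for w in words:
                counts[label][w] += 1.0

        def word_prob(c, w, vocab_size):
            total = sum(counts[c].values())
            return (counts[c].get(w, 0.0) + alpha) / (total + alpha * vocab_size)

        for _ in range(iters):
            vocab_size = len({w for c in CLASSES for w in counts[c]}) + 1
            new_counts = {c: defaultdict(float) for c in CLASSES}
            for words, label in labeled_src:  # labeled counts stay fixed
                for w in words:
                    new_counts[label][w] += 1.0
            for src_words, tgt_words in parallel_pairs:
                # E-step: posterior over the shared class, scored on both sides.
                log_p = {c: sum(math.log(word_prob(c, w, vocab_size))
                                for w in src_words + tgt_words) for c in CLASSES}
                m = max(log_p.values())
                z = sum(math.exp(v - m) for v in log_p.values())
                for c in CLASSES:
                    gamma = math.exp(log_p[c] - m) / z
                    # M-step contribution: fractional counts in both languages.
                    for w in src_words + tgt_words:
                        new_counts[c][w] += gamma
            counts = new_counts
        return counts

    # Toy usage: English labels plus one English-Spanish parallel pair.
    # train_clmm_sketch([(["good"], "pos"), (["bad"], "neg")],
    #                   [(["good", "movie"], ["buena", "pelicula"])])

Supervision enters only through the labeled source documents; the shared latent class is what lets the unlabeled parallel data widen vocabulary coverage, which matches the contribution the excerpts claim.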
Mehdad, Yashar and Negri, Matteo and Federico, Marcello
Abstract
Using a combination of lexical, syntactic, and semantic features to train a cross-lingual textual entailment system, we report promising results on different datasets.
Conclusion
Our results in different cross-lingual settings demonstrate the feasibility of the approach, with significant improvements over the state of the art also on RTE-derived data.
Experiments and results
Recently, a new dataset including “Unknown” pairs has been used in the “Cross-Lingual Textual Entailment for Content Synchronization” task at SemEval-2012 (Negri et al., 2012).
Experiments and results
(3-way) demonstrates the effectiveness of our approach in capturing meaning equivalence and information disparity in cross-lingual texts.
Experiments and results
Cross-lingual models also significantly outperform pivoting methods.
Introduction
In this paper we frame this problem as an application-oriented, cross-lingual variant of the Textual Entailment (TE) recognition task (Dagan and Glickman, 2004).
Introduction
(a) Experiments with multidirectional cross-lingual textual entailment.
Introduction
So far, cross-lingual
cross-lingual is mentioned in 10 sentences in this paper.
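The excerpts mention lexical, syntactic, and semantic features without showing how a feature is computed across two languages. A minimal sketch, assuming a toy bilingual dictionary and only two lexical features; the dictionary, feature set, and labels are illustrative, and real systems add syntactic and semantic features such as dependency overlap or paraphrase-table matches:

    # Feature-based cross-lingual textual entailment sketch: the text is in
    # the source language, the hypothesis in the target language; features
    # are computed through a bilingual dictionary.
    from sklearn.linear_model import LogisticRegression

    def features(text_words, hyp_words, bi_dict):
        # Candidate target-language translations of the source-side text.
        translations = set()
        for w in text_words:
            translations.update(bi_dict.get(w, ()))
        covered = sum(1 for h in hyp_words if h in translations)
        return [
            covered / max(len(hyp_words), 1),          # lexical coverage of hypothesis
            len(hyp_words) / max(len(text_words), 1),  # length ratio
        ]

    # Toy English-Spanish usage with two training pairs.
    bi_dict = {"cat": {"gato"}, "sleeps": {"duerme"}}
    X = [features(["the", "cat", "sleeps"], ["el", "gato", "duerme"], bi_dict),
         features(["the", "cat", "sleeps"], ["el", "perro", "ladra"], bi_dict)]
    y = [1, 0]  # 1 = entailment, 0 = no entailment
    clf = LogisticRegression().fit(X, y)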
Kim, Seokhwan and Lee, Gary Geunbae
Abstract
To build a relation extractor without significant annotation effort, we can exploit cross-lingual annotation projection, which leverages parallel corpora as external resources for supervision.
Conclusions
Experimental results show that our graph-based projection improves the performance of cross-lingual annotation projection of semantic relations, and that our system outperforms other systems that incorporate monolingual external resources.
Cross-lingual Annotation Projection for Relation Extraction
Cross-lingual annotation projection aims to learn an extractor f_t that performs well in a resource-poor target language L_t, without significant effort toward building resources for that language.
Cross-lingual Annotation Projection for Relation Extraction
Early studies in cross-lingual annotation projection addressed various natural language processing tasks (Yarowsky and Ngai, 2001; Yarowsky et al., 2001; Hwa et al., 2005; Zitouni and Florian, 2008; Padó and Lapata, 2009).
Introduction
To obtain training examples in the resource-poor target language, this approach exploited cross-lingual annotation projection, propagating annotations generated by a relation extraction system in a resource-rich source language.
cross-lingual is mentioned in 5 sentences in this paper.
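To make the projection step described in the excerpts concrete, here is a minimal sketch, assuming relation annotations are lists of entity token positions and word alignments are (source index, target index) pairs; the function name and the no-alignment filter are illustrative, not the paper's graph-based method:

    # Project a source-side relation annotation through word alignments to
    # create a target-language training instance for the extractor f_t.
    def project_relation(src_relation, alignment):
        """src_relation: (e1_positions, e2_positions, label);
        alignment: set of (src_index, tgt_index) word-alignment links."""
        e1, e2, label = src_relation
        tgt_e1 = sorted({t for s, t in alignment if s in e1})
        tgt_e2 = sorted({t for s, t in alignment if s in e2})
        # Drop projections where an entity has no aligned target tokens;
        # noisy alignment links are the main source of projection errors.
        if not tgt_e1 or not tgt_e2:
            return None
        return (tgt_e1, tgt_e2, label)

    # Usage: "Paris is the capital of France" / "Paris est la capitale de la France"
    alignment = {(0, 0), (3, 3), (5, 6)}
    projected = project_relation(([0], [5], "capital-of"), alignment)
    # -> ([0], [6], "capital-of"): a training instance for the target extractor

The paper's graph-based projection improves on direct projection of this kind; the hard filter above is only the simplest stand-in for handling alignment noise.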