Index of papers in Proc. ACL 2009 that mention
  • cross-lingual
Gao, Wei and Blitzer, John and Zhou, Ming and Wong, Kam-Fai
Features and Similarities
Our feature space consists of both these standard monolingual features and cross-lingual similarities among documents.
Features and Similarities
The cross-lingual similarities are evaluated using different translation mechanisms, e.g., dictionary-based translation or machine translation, or even without any translation at all.
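As a rough sketch of the dictionary-based option, the snippet below scores an English/Chinese document pair by mapping Chinese terms through a toy bilingual dictionary and taking a cosine over bags of words; the dictionary `ZH_EN_DICT`, the tokenization, and the weighting are assumptions for illustration, not the paper's method.

```python
# Illustrative sketch (not the paper's implementation): dictionary-based
# cross-lingual similarity between an English and a Chinese document.
from collections import Counter
from math import sqrt

# Toy bilingual dictionary: Chinese term -> English translations.
ZH_EN_DICT = {"排名": ["ranking"], "文档": ["document"], "查询": ["query"]}

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def dict_based_similarity(en_tokens, zh_tokens):
    """Translate Chinese tokens term by term, then compare bags of words.
    Terms missing from the dictionary are simply dropped."""
    translated = Counter()
    for t in zh_tokens:
        for e in ZH_EN_DICT.get(t, []):
            translated[e] += 1
    return cosine(Counter(en_tokens), translated)

print(dict_based_similarity(["ranking", "document"], ["排名", "文档", "查询"]))
```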
Features and Similarities
4.2 Cross-lingual Document Similarities
Introduction
Our problem setting differs from cross-lingual web search, where the goal is to return machine-translated results from one language in response to a query from another (Lavrenko et al., 2002).
Learning to Rank Using Bilingual Information
To allow for cross-lingual information, we extend the order of individual documents into that of bilingual document pairs: given two bilingual document pairs, we will write $(e^{(1)}, c^{(1)}) \succ (e^{(2)}, c^{(2)})$ to indicate that the first pair should be ranked above the second.
Learning to Rank Using Bilingual Information
The advantages are twofold: (1) multiple cross-lingual document similarities can be treated in the same uniform learning framework as the commonly used query-document features; (2) with the similarities, the relevance estimation on bilingual document pairs can be enhanced, which in turn can improve the ranking of documents.
Learning to Rank Using Bilingual Information
…, $n$. Based on that, we produce a set of bilingual ranking instances $S = \{\Phi_{ij}, z_{ij}\}$, where each $\Phi_{ij} = \{x_i; y_j; s_{ij}\}$ is the feature vector of $(e_i, c_j)$, consisting of three components: $x_i = f(q_e, e_i)$ is the vector of monolingual relevancy features of $e_i$, $y_j = f(q_c, c_j)$ is the vector of monolingual relevancy features of $c_j$, and $s_{ij} = \mathrm{sim}(e_i, c_j)$ is the vector of cross-lingual similarities between $e_i$ and $c_j$; $z_{ij} = (C(e_i), C(c_j))$ is the corresponding pair of click counts.
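Concretely, each instance is the concatenation of the two monolingual feature vectors with the cross-lingual similarity vector, labeled by the click counts. A minimal sketch of that assembly follows; the feature functions `mono_features` and `crosslingual_similarities` are placeholders invented here, not the paper's $f$ and $\mathrm{sim}$.

```python
# Minimal sketch: assemble one bilingual ranking instance
# Phi_ij = [x_i; y_j; s_ij] with label z_ij = (C(e_i), C(c_j)).
# The feature functions are illustrative stand-ins, not the paper's.
import numpy as np

def mono_features(query: str, doc: str) -> np.ndarray:
    """Stand-in for f(q, d): term overlap and document length."""
    overlap = len(set(query.split()) & set(doc.split()))
    return np.array([overlap, len(doc.split())], dtype=float)

def crosslingual_similarities(e_doc: str, c_doc: str) -> np.ndarray:
    """Stand-in for sim(e_i, c_j): one entry per translation mechanism,
    e.g., dictionary-based and MT-based similarity scores."""
    return np.array([0.3, 0.5])  # dummy scores

def make_instance(q_e, q_c, e_doc, c_doc, clicks_e, clicks_c):
    x_i = mono_features(q_e, e_doc)              # English-side features
    y_j = mono_features(q_c, c_doc)              # Chinese-side features
    s_ij = crosslingual_similarities(e_doc, c_doc)
    phi_ij = np.concatenate([x_i, y_j, s_ij])    # Phi_ij = {x_i; y_j; s_ij}
    z_ij = (clicks_e, clicks_c)                  # click counts as labels
    return phi_ij, z_ij
```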
cross-lingual is mentioned in 14 sentences in this paper.
Parton, Kristen and McKeown, Kathleen R. and Coyne, Bob and Diab, Mona T. and Grishman, Ralph and Hakkani-Tür, Dilek and Harper, Mary and Ji, Heng and Ma, Wei Yun and Meyers, Adam and Stolbach, Sara and Sun, Ang and Tur, Gokhan and Xu, Wei and Yaman, Sibel
Abstract
Cross-lingual tasks are especially difficult due to the compounding effect of errors in language processing and errors in machine translation (MT).
Abstract
In this paper, we present an error analysis of a new cross-lingual task: the 5W task, a sentence-level understanding task which seeks to return the English 5W's (Who, What, When, Where and Why) corresponding to a Chinese sentence.
Abstract
The best cross-lingual 5W system was still 19% worse than the best monolingual 5W system, which shows that MT significantly degrades sentence-level understanding.
Introduction
Cross-lingual applications address this need by presenting information in the speaker’s language even when it originally appeared in some other language, using machine translation.
Introduction
In this paper, we present an evaluation and error analysis of a cross-lingual application that we developed for a government-sponsored evaluation, the 5W task.
Introduction
In this paper, we address the cross-lingual 5W task: given a source-language sentence, return the 5W’s translated (comprehensibly) into the target language.
Prior Work
The cross-lingual 5W task is closely related to cross-lingual information retrieval and cross-lingual question answering (Wang and Oard, 2006; Mitamura et al.
Prior Work
In cross-lingual information extraction (Sudo et al.
cross-lingual is mentioned in 19 sentences in this paper.
Wan, Xiaojun
Abstract
This paper focuses on the problem of cross-lingual sentiment classification, which leverages an available English corpus for Chinese sentiment classification by using the English corpus as training data.
Conclusion and Future Work
In this paper, we propose to use the co-training approach to address the problem of cross-lingual sentiment classification.
Introduction
In this study, we focus on the problem of cross-lingual sentiment classification, which leverages only English training data for supervised sentiment classification of Chinese product reviews, without using any Chinese resources.
Related Work 2.1 Sentiment Classification
In this study, we focus on improving the corpus-based method for cross-lingual sentiment classification of Chinese product reviews by developing novel approaches.
Related Work 2.1 Sentiment Classification
Cross-domain text classification can be considered as a more general task than cross-lingual sentiment classification.
Related Work 2.1 Sentiment Classification
In particular, several previous studies focus on the problem of cross-lingual text classification, which can be considered as a special case of general cross-domain text classification.
The Co-Training Approach
In the context of cross-lingual sentiment classification, each labeled English review or unlabeled Chinese review has two views of features: English features and Chinese features.
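A compressed sketch of such a two-view co-training loop follows, using scikit-learn logistic regression as a stand-in classifier; the classifiers, selection size `k`, and stopping rule are assumptions, since the excerpts do not specify them. The English and Chinese feature matrices are assumed to come from the original and machine-translated versions of each review.

```python
# Two-view co-training sketch (stand-in classifiers and selection rule).
import numpy as np
from sklearn.linear_model import LogisticRegression

def co_train(X_en, X_zh, y, U_en, U_zh, rounds=5, k=2):
    """Each view's classifier labels the unlabeled pool (U_en, U_zh) and
    promotes its k most confident examples into the shared training set."""
    X_en, X_zh, U_en, U_zh = map(np.asarray, (X_en, X_zh, U_en, U_zh))
    y = np.asarray(y)
    for _ in range(rounds):
        clf_en = LogisticRegression(max_iter=1000).fit(X_en, y)
        clf_zh = LogisticRegression(max_iter=1000).fit(X_zh, y)
        for clf in (clf_en, clf_zh):
            if len(U_en) == 0:
                break
            X_u = U_en if clf is clf_en else U_zh
            proba = clf.predict_proba(X_u)
            pick = np.argsort(-proba.max(axis=1))[:k]   # most confident
            X_en = np.vstack([X_en, U_en[pick]])
            X_zh = np.vstack([X_zh, U_zh[pick]])
            y = np.concatenate([y, clf.classes_[proba[pick].argmax(axis=1)]])
            keep = np.setdiff1d(np.arange(len(U_en)), pick)
            U_en, U_zh = U_en[keep], U_zh[keep]
    return clf_en, clf_zh
```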
cross-lingual is mentioned in 8 sentences in this paper.
Pervouchine, Vladimir and Li, Haizhou and Lin, Bo
Conclusions
We believe that this property would be useful in transliteration extraction and cross-lingual information retrieval applications.
Related Work
Denoting the number of cross-lingual mappings common to both $A$ and $G$ as $C_{AG}$, the number of cross-lingual mappings in $A$ as $C_A$, and the number of cross-lingual mappings in $G$ as $C_G$, precision $Pr$ is given as $C_{AG}/C_A$, recall $Rc$ as $C_{AG}/C_G$, and F-score as $2\,Pr \cdot Rc / (Pr + Rc)$.
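Restated as a small function over sets of mappings, with $A$ the system alignment and $G$ the gold standard:

```python
# Precision, recall, and F-score over cross-lingual mappings,
# following the definitions quoted above (A = system, G = gold).
def prf(mappings_A: set, mappings_G: set):
    c_ag = len(mappings_A & mappings_G)  # mappings common to A and G
    pr = c_ag / len(mappings_A) if mappings_A else 0.0
    rc = c_ag / len(mappings_G) if mappings_G else 0.0
    f = 2 * pr * rc / (pr + rc) if pr + rc else 0.0
    return pr, rc, f

print(prf({("a", "阿"), ("b", "布")}, {("a", "阿"), ("c", "茨")}))
# -> (0.5, 0.5, 0.5)
```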
Transliteration alignment entropy
We expect a good alignment to have a sharp cross-lingual mapping with low alignment entropy.
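The excerpts do not give the paper's exact entropy formula; the sketch below assumes plain Shannon entropy over the empirical distribution of mapping counts, which captures the stated intuition that a sharp mapping yields low entropy.

```python
# Hypothetical sketch of alignment entropy (the paper's exact definition
# is not shown in the excerpts): Shannon entropy of the empirical
# distribution of cross-lingual mapping counts.
from collections import Counter
from math import log2

def alignment_entropy(mappings):
    """mappings: list of (source_unit, target_unit) pairs from an alignment."""
    counts = Counter(mappings)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

sharp = [("a", "阿")] * 9 + [("b", "布")]                      # near one-to-one
diffuse = [("a", "阿"), ("a", "布"), ("a", "茨"), ("b", "阿")]  # scattered
print(alignment_entropy(sharp) < alignment_entropy(diffuse))   # True
```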
cross-lingual is mentioned in 3 sentences in this paper.
Snyder, Benjamin and Naseem, Tahira and Barzilay, Regina
Introduction
One of the main challenges of unsupervised multilingual learning is to exploit cross-lingual patterns discovered in data, while still allowing a wide range of language-specific idiosyncrasies.
Introduction
For each pair of coupled bilingual constituents, a pair of part-of-speech sequences is drawn jointly from a cross-lingual distribution.
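As a toy illustration of that generative step, the snippet below draws a pair of part-of-speech sequences jointly from a single categorical distribution over sequence pairs; the distribution `JOINT` is invented for illustration, and the actual model's parameterization is richer than this.

```python
# Toy sketch: jointly draw a coupled pair of POS sequences from a
# cross-lingual distribution (here a flat categorical over pairs;
# invented probabilities, not the paper's model).
import random

JOINT = {
    (("DT", "NN"), ("NN",)): 0.5,
    (("DT", "JJ", "NN"), ("JJ", "NN")): 0.3,
    (("PRP", "VBZ"), ("VB",)): 0.2,
}

def draw_pair(rng=random.random):
    r, cum = rng(), 0.0
    for pair, p in JOINT.items():
        cum += p
        if r < cum:
            return pair
    return pair  # guard against floating-point rounding

print(draw_pair())
```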
Related Work
Research in this direction was pioneered by Wu (1997), who developed Inversion Transduction Grammars to capture cross-lingual grammar variations such as phrase re-orderings.
cross-lingual is mentioned in 3 sentences in this paper.