Index of papers in Proc. ACL 2010 that mention
  • word pairs
Liu, Shujie and Li, Chi-Ho and Zhou, Ming
Basics of ITG
From the viewpoint of word alignment, the terminal unary rules provide the links of word pairs, whereas the binary rules represent the reordering factor.
Basics of ITG
For instance, there are two parses for three consecutive word pairs, viz.
Basics of ITG Parsing
The base step applies all relevant terminal unary rules to establish the links of word pairs.
Basics of ITG Parsing
The word pairs are then combined into span pairs in all possible ways.
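The base and recursive steps quoted above can be sketched as a toy chart parser over span pairs. This is an illustrative reading, not the authors' implementation; `lex`, the lexical translation table, is a hypothetical stand-in for the terminal unary rules.

```python
# Toy ITG chart parsing sketch (assumption: scores are probabilities
# combined by multiplication; `lex` is a hypothetical lexical table).
from collections import defaultdict

def itg_parse(src, tgt, lex):
    """chart[(i, j, k, l)] is the best score of pairing src[i:j] with tgt[k:l]."""
    chart = defaultdict(float)

    # Base step: terminal unary rules establish the links of word pairs.
    for i, s in enumerate(src):
        for k, t in enumerate(tgt):
            if (s, t) in lex:
                chart[(i, i + 1, k, k + 1)] = lex[(s, t)]

    # Recursive step: combine span pairs in all possible ways, using the
    # straight and inverted binary rules.
    for slen in range(2, len(src) + 1):
        for tlen in range(2, len(tgt) + 1):
            for i in range(len(src) - slen + 1):
                for k in range(len(tgt) - tlen + 1):
                    j, l = i + slen, k + tlen
                    for m in range(i + 1, j):        # source split point
                        for n in range(k + 1, l):    # target split point
                            straight = chart[(i, m, k, n)] * chart[(m, j, n, l)]
                            inverted = chart[(i, m, n, l)] * chart[(m, j, k, n)]
                            chart[(i, j, k, l)] = max(chart[(i, j, k, l)],
                                                      straight, inverted)
    return chart[(0, len(src), 0, len(tgt))]
```

For two word pairs there is one straight and one inverted combination; for three consecutive word pairs, as the excerpt notes, two distinct parses arise from the choice of split point.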
Introduction
HMM); Zhang and Gildea (2005) propose Tic-tac-toe pruning, which is based on the Model 1 probabilities of word pairs inside and outside a pair of spans.
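The Tic-tac-toe pruning idea cited here (score a candidate span pair by Model 1 probabilities of the word pairs inside it and outside it) can be sketched roughly as follows; `t` is a hypothetical Model 1 table t[(e, f)] = p(f | e), and the scoring details are assumptions, not the paper's exact formulation.

```python
# Sketch of an inside/outside Model 1 score for a span pair,
# assuming a toy translation table `t` (hypothetical).
import math

def span_pair_score(e, f, i1, i2, j1, j2, t, floor=1e-10):
    """Log Model 1 score for pairing e[i1:i2] with f[j1:j2]."""
    def model1(e_words, f_words):
        # Model 1: each target word is generated by averaging
        # its translation probability over the source words.
        score = 0.0
        for fw in f_words:
            p = sum(t.get((ew, fw), 0.0) for ew in e_words)
            score += math.log(max(p / max(len(e_words), 1), floor))
        return score

    inside = model1(e[i1:i2], f[j1:j2])
    outside = model1(e[:i1] + e[i2:], f[:j1] + f[j2:])
    return inside + outside
```

Span pairs whose combined inside/outside score falls below a threshold would then be pruned from the chart.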
The DITG Models
The following features about alignment links are used in W-DITG: 1) Word pair translation probabilities trained from the HMM model (Vogel et al., 1996) and IBM Model 4 (Brown et al., 1993; Och and Ney, 2000).
The DPDI Framework
In the base step, only the word pairs listed in the sentence-level annotation are inserted in the hypergraph, and the recursive steps are just the same as usual.
The DPDI Framework
Zhang and Gildea (2005) show that Model 1 (Brown et al., 1993; Och and Ney, 2000) probabilities of the word pairs inside and outside a span pair ([e_{i1}, e_{i2}]/[f_{j1}, f_{j2}]) are useful.
The DPDI Framework
probability of word pairs within the span pair):
"word pairs" is mentioned in 14 sentences in this paper.
Topics mentioned in this paper:
Jiang, Wenbin and Liu, Qun
Abstract
And we also propose an effective strategy for dependency projection, where the dependency relationships of the word pairs in the source language are projected to the word pairs of the target language, leading to a set of classification instances rather than a complete tree.
Conclusion and Future Works
In this paper, we first describe an intuitionistic method for dependency parsing, which resorts to a classifier to determine whether a word pair forms a dependency edge, and then propose an effective strategy for dependency projection, which produces a set of projected classification instances rather than complete projected trees.
Introduction
Given a word-aligned bilingual corpus whose source-language sentences are parsed, the dependency relationships of the word pairs in the source language are projected to the word pairs of the target language.
Introduction
A dependency relationship is a boolean value that represents whether this word pair forms a dependency edge.
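The projection strategy described in these excerpts can be sketched as follows: source-side dependency edges are mapped through the word alignment to target-side word pairs, yielding boolean classification instances rather than a complete projected tree. This is an illustrative reading; all names are hypothetical, not the authors' code.

```python
# Sketch of dependency projection to classification instances
# (hypothetical names; not the paper's implementation).
def project_dependencies(src_edges, alignment):
    """src_edges: set of (head, modifier) source word indices.
    alignment: set of (src_index, tgt_index) links.
    Returns ((tgt_head, tgt_mod), label) instances, where label is
    True iff the aligned source word pair forms a dependency edge."""
    # Map each source word to its aligned target words.
    a = {}
    for s, t in alignment:
        a.setdefault(s, []).append(t)

    instances = []
    src_words = {s for s, _ in alignment}
    for h in src_words:
        for m in src_words:
            if h == m:
                continue
            label = (h, m) in src_edges       # boolean dependency relationship
            for th in a[h]:
                for tm in a[m]:
                    if th != tm:
                        instances.append(((th, tm), label))
    return instances
```

Each instance then serves as one training example for the word-pair classifier, with no requirement that the projected instances assemble into a well-formed tree.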
Word-Pair Classification Model
The task of the word-pair classification model is to determine whether any candidate word pair (x_i, x_j), s.t. 1 <= i, j <= |x| and i != j, forms a dependency edge.
Word-Pair Classification Model
Ideally, given the classification results for all candidate word pairs, the dependency parse tree can be composed of the candidate edges with the higher scores (1 for the boolean-valued classifier, and large p for the real-valued classifier).
Word-Pair Classification Model
Here we give the calculation of the dependency probability C(i, j). We use w to denote the parameter vector of the ME model, and f(i, j, r) to denote the feature vector for the assumption that the word pair i and j has a dependency relationship r.
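The maximum-entropy probability referred to here has the standard log-linear form p(r | i, j) = exp(w · f(i, j, r)) / Σ_r' exp(w · f(i, j, r')). A minimal sketch, assuming sparse feature dictionaries and a boolean relationship set (the feature function below is a stand-in, not the paper's feature set):

```python
# Log-linear (maximum-entropy) dependency probability sketch.
# `w` is the parameter vector, `f` the feature function; both hypothetical.
import math

def dependency_probability(w, f, i, j, r, relations=(True, False)):
    """p(r | i, j) under a maximum-entropy model."""
    def score(rel):
        feats = f(i, j, rel)                       # sparse feature dict
        return math.exp(sum(w.get(k, 0.0) * v for k, v in feats.items()))
    return score(r) / sum(score(rel) for rel in relations)
```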
"word pairs" is mentioned in 7 sentences in this paper.
Topics mentioned in this paper:
Zhang, Duo and Mei, Qiaozhu and Zhai, ChengXiang
Introduction
However, the goals of their work are different from ours in that their models mainly focus on mining cross-lingual topics of matching word pairs and discovering the correspondence at the vocabulary level.
Introduction
Therefore, the topics extracted using their model cannot indicate how a common topic is covered differently in the two languages, because the words in each word pair share the same probability in a common topic.
Introduction
In our model, since we only add a soft constraint on word pairs in the dictionary, their probabilities in common topics are generally different, naturally capturing the different variations of a common topic in different languages.
Probabilistic Cross-Lingual Latent Semantic Analysis
Thus when a cross-lingual topic picks up words that co-occur in monolingual text, it would prefer picking up word pairs whose translations in other languages also co-occur with each other, giving us a coherent multilingual word distribution that characterizes well the content of text in different languages.
"word pairs" is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Liu, Zhanyi and Wang, Haifeng and Wu, Hua and Li, Sheng
Collocation Model
Figure 1 shows an example of the potentially collocated word pairs aligned by the MWA method.
Collocation Model
Then the probability for each aligned word pair is estimated as follows:
Improving Phrase Table
word pair calculated according to Eq.
"word pairs" is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Prettenhofer, Peter and Stein, Benno
Cross-Language Structural Correspondence Learning
CL-SCL comprises three steps: In the first step, CL-SCL selects word pairs {w_S, w_T}, called pivots, where w_S ∈ V_S and w_T ∈ V_T.
Cross-Language Structural Correspondence Learning
Considering our sentiment classification example, the word pair {excellent_S, exzellent_T} satisfies both conditions: (1) the words are strong indicators of positive sentiment,
Cross-Language Structural Correspondence Learning
Second, for each word w_S ∈ V_P we find its translation in the target vocabulary V_T by querying the translation oracle; we refer to the resulting set of word pairs as the candidate pivots, P':
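The candidate-pivot step described here can be sketched with a toy translation oracle. This is only an illustration under assumed names (`candidate_pivots`, a dictionary-backed oracle); the paper's oracle and vocabularies are not reproduced.

```python
# Sketch of CL-SCL candidate-pivot selection: query a translation
# oracle (here a hypothetical toy dictionary) for each source word
# and keep pairs whose translation is in the target vocabulary.
def candidate_pivots(vp, oracle, target_vocab):
    """Return the set P' of candidate pivot pairs (w_S, w_T)."""
    pivots = set()
    for ws in vp:
        wt = oracle.get(ws)               # translation oracle query
        if wt is not None and wt in target_vocab:
            pivots.add((ws, wt))
    return pivots
```

Candidate pivots would then be filtered by the two conditions mentioned above (e.g. being strong indicators of sentiment) to obtain the final pivot set.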
"word pairs" is mentioned in 3 sentences in this paper.
Topics mentioned in this paper: