Index of papers in Proc. ACL 2010 that mention
  • semantic similarity
Kazama, Jun'ichi and De Saeger, Stijn and Kuroda, Kow and Murata, Masaki and Torisawa, Kentaro
Introduction
The semantic similarity of words is a longstanding topic in computational linguistics because it is theoretically intriguing and has many applications in the field.
Introduction
A number of semantic similarity measures have been proposed based on this hypothesis (Hindle, 1990; Grefenstette, 1994; Dagan et al., 1994; Dagan et al., 1995; Lin, 1998; Dagan et al., 1999).
Introduction
In general, most semantic similarity measures have the following form:
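The snippet is cut off before the formula itself appears; as an illustration of the general shape such distributional measures take (not necessarily the exact definition used in this paper), Lin's (1998) measure sums the association weights of the contextual features two words share and normalizes by the weights of all their features:

    sim(w_1, w_2) = \frac{\sum_{f \in F(w_1) \cap F(w_2)} [I(w_1, f) + I(w_2, f)]}
                         {\sum_{f \in F(w_1)} I(w_1, f) + \sum_{f \in F(w_2)} I(w_2, f)}

where F(w) is the set of contextual features observed with w and I(w, f) is an association weight such as pointwise mutual information.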
semantic similarity is mentioned in 7 sentences in this paper.
Regneri, Michaela and Koller, Alexander and Pinkal, Manfred
Conclusion
We showed that our system outperforms two baselines and sometimes approaches human-level performance, especially because it can exploit the sequential structure of the script descriptions to separate clusters of semantically similar events.
Evaluation
with a weighted edge; the weight reflects the semantic similarity of the nodes’ event descriptions as described in Section 5.2.
Evaluation
Levenshtein Baseline: This system follows the same steps as our system, but using Levenshtein distance as the measure of semantic similarity for MSA and for node merging (cf.
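For reference, the Levenshtein distance used by this baseline is the standard string edit distance; a minimal sketch (illustrative, not the authors' implementation) in Python:

    def levenshtein(a, b):
        # Standard edit distance: minimum number of insertions, deletions,
        # and substitutions (unit cost) turning string a into string b.
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            curr = [i]
            for j, cb in enumerate(b, 1):
                curr.append(min(prev[j] + 1,                 # deletion
                                curr[j - 1] + 1,             # insertion
                                prev[j - 1] + (ca != cb)))   # substitution
            prev = curr
        return prev[-1]

Treating this surface-level distance as the similarity measure gives a purely lexical baseline against which the semantic measure of Section 5.2 is compared.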
Evaluation
The clustering system, which can’t exploit the sequential information from the ESDs, has trouble distinguishing semantically similar phrases (high recall, low precision).
Introduction
Crucially, our algorithm exploits the sequential structure of the ESDs to distinguish event descriptions that occur at different points in the script storyline, even when they are semantically similar.
Temporal Script Graphs
5.2 Semantic similarity
Temporal Script Graphs
Intuitively, we want the MSA to prefer the alignment of two phrases if they are semantically similar, i.e.
semantic similarity is mentioned in 7 sentences in this paper.
Chen, Boxing and Foster, George and Kuhn, Roland
Analysis and Discussion
Our aim in this paper is to characterize the semantic similarity of bilingual hierarchical rules.
Experiments
The improved similarity function Alg2 makes it possible to incorporate monolingual semantic similarity on top of the bilingual semantic similarity, and thus may improve the accuracy of the similarity estimate.
Introduction
The source and target sides of the rules marked with (*) are not semantically equivalent; it seems likely that measuring the semantic similarity between the source and target sides of rules from their contexts might be helpful to machine translation.
Related Work
Our work is different from all the above approaches in that we attempt to discriminate among hierarchical rules based on: 1) the degree of bilingual semantic similarity between source and target translation units; and 2) the monolingual semantic similarity between occurrences of source or target units as part of the given rule, and in general.
Similarity Functions
A common way to calculate semantic similarity is by vector space cosine distance; we will also
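A minimal sketch of the vector-space cosine similarity referred to here, assuming sparse context vectors stored as feature-to-weight dicts (an illustrative representation, not the paper's implementation):

    import math

    def cosine(u, v):
        # Cosine similarity between sparse vectors u, v given as {feature: weight} dicts.
        dot = sum(w * v.get(f, 0.0) for f, w in u.items())
        norm_u = math.sqrt(sum(w * w for w in u.values()))
        norm_v = math.sqrt(sum(w * w for w in v.values()))
        return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0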
Similarity Functions
Therefore, on top of the degree of bilingual semantic similarity between a source and a target translation unit, we have also incorporated the monolingual semantic similarity between all occurrences of a source or target unit, and that unit’s occurrence as part of the given rule, into the sense similarity measure.
semantic similarity is mentioned in 6 sentences in this paper.
Mitchell, Jeff and Lapata, Mirella and Demberg, Vera and Keller, Frank
Integrating Semantic Constraint into Surprisal
This can be achieved by turning a vector model of semantic similarity into a probabilistic language model.
Models of Processing Difficulty
Semantic similarities are then modeled in terms of geometric similarities within the space.
Models of Processing Difficulty
Despite its simplicity, the above semantic space (and variants thereof) has been used to successfully simulate lexical priming (e.g., McDonald 2000), human judgments of semantic similarity (Bullinaria and Levy 2007), and synonymy tests (Pado and Lapata 2007) such as those included in the Test of English as a Foreign Language (TOEFL).
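As a hedged sketch of the kind of semantic space described here, a word-by-context co-occurrence model can be built by counting neighbors within a fixed window; the window size and raw counts below are illustrative assumptions, not the settings used in the paper:

    from collections import defaultdict

    def cooccurrence_space(tokens, window=5):
        # Map each word to a sparse context vector: {context word: co-occurrence count}.
        space = defaultdict(lambda: defaultdict(int))
        for i, w in enumerate(tokens):
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    space[w][tokens[j]] += 1
        return space

Semantic similarity between two words is then a geometric comparison of their context vectors, e.g. the cosine measure sketched earlier in this index.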
semantic similarity is mentioned in 3 sentences in this paper.
Sun, Jun and Zhang, Min and Tan, Chew Lim
Substructure Spaces for BTKs
In this section, we define seven lexical features to measure the semantic similarity of a given subtree pair.
Substructure Spaces for BTKs
baseline only assesses semantic similarity using the lexical features.
Substructure Spaces for BTKs
In other words, to capture semantic similarity, structure features require lexical features to cooperate.
semantic similarity is mentioned in 3 sentences in this paper.
Wang, Jia and Li, Qing and Chen, Yuanzhu Peter and Lin, Zhangxi
Experimental Evaluation
semantic similarity, reply, and quotation.
System Design
On the one hand, the semantic similarity between two nodes can be measured with any commonly adopted metric, such as cosine similarity and the Jaccard coefficient (Baeza-Yates and Ribeiro-Neto, 1999).
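The cosine measure is sketched earlier in this index; a minimal Jaccard coefficient over the two nodes' term sets (the set representation is an assumption for illustration, not the paper's exact setup):

    def jaccard(terms_a, terms_b):
        # Jaccard coefficient: |A intersect B| / |A union B| over the nodes' term sets.
        a, b = set(terms_a), set(terms_b)
        return len(a & b) / len(a | b) if a or b else 0.0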
System Design
Considering the semantic similarity between nodes, we use another variant of the PageRank algorithm to calculate the weight of comment
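The sentence is cut off before the variant is defined; as a point of reference only, standard weighted PageRank by power iteration over a similarity-weighted comment graph looks roughly as follows (the damping factor, iteration count, and handling of dangling nodes are illustrative assumptions, not the paper's variant):

    def weighted_pagerank(weights, damping=0.85, iters=50):
        # Power-iteration PageRank on a weighted directed graph.
        # weights: {node: {neighbor: edge weight}}; returns {node: score}.
        nodes = set(weights) | {v for nbrs in weights.values() for v in nbrs}
        n = len(nodes)
        rank = {u: 1.0 / n for u in nodes}
        out = {u: sum(weights.get(u, {}).values()) for u in nodes}
        for _ in range(iters):
            new = {u: (1 - damping) / n for u in nodes}
            for u, nbrs in weights.items():
                if out[u] == 0:
                    continue  # dangling node: its mass is simply dropped in this sketch
                for v, w in nbrs.items():
                    new[v] += damping * rank[u] * w / out[u]
            rank = new
        return rank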
semantic similarity is mentioned in 3 sentences in this paper.