Index of papers in Proc. ACL that mention
  • edge weights
Feng, Song and Kang, Jun Seok and Kuznetsova, Polina and Choi, Yejin
Connotation Induction Algorithms
One possible way of constructing such a graph is simply connecting all nodes and assigning edge weights proportionate to word association scores, such as PMI, or distributional similarity.
Connotation Induction Algorithms
In particular, we consider an undirected edge between a pair of arguments a1 and a2 only if they occurred together in the “a1 and a2” or “a2 and a1” coordination, and assign edge weights as: w(a1–a2) = CosineSim(a1, a2) = (a1 · a2) / (‖a1‖ ‖a2‖), where a1 and a2 are the context vectors of the two arguments.
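The cosine-similarity edge weighting in the excerpt above can be sketched as follows; this is a minimal illustration, not the authors' code, and the sparse dict representation of the context vectors is an assumption.

```python
from math import sqrt

def cosine_edge_weight(v1, v2):
    """Cosine similarity between two sparse context vectors
    (dicts mapping feature -> count), used as an edge weight."""
    dot = sum(v1[f] * v2[f] for f in v1 if f in v2)
    n1 = sqrt(sum(x * x for x in v1.values()))
    n2 = sqrt(sum(x * x for x in v2.values()))
    if n1 == 0 or n2 == 0:
        return 0.0
    return dot / (n1 * n2)

# Hypothetical context-count vectors for two coordinated arguments
w = cosine_edge_weight({"fun": 2, "easy": 1}, {"fun": 1, "hard": 1})
```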
Connotation Induction Algorithms
The edge weights in the two subgraphs are normalized so that they are in a comparable range.
Experimental Result I
The performance of graph propagation varies significantly depending on the graph topology and the corresponding edge weights.
Related Work
Although we employ the same graph propagation algorithm, our graph construction is fundamentally different in that we integrate stronger inductive biases into the graph topology and the corresponding edge weights .
edge weights is mentioned in 8 sentences in this paper.
Topics mentioned in this paper:
Elson, David and Dames, Nicholas and McKeown, Kathleen
Extracting Conversational Networks from Literature
When such an adjacency is found, the length of the quote is added to the edge weight , under the hypothesis that the significance of the relationship between two individuals is proportional to the length of the dialogue that they exchange.
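The accumulation rule described above (quote length added to the edge weight between two conversing characters) can be sketched as a minimal example; the function and character names are hypothetical.

```python
from collections import defaultdict

def add_quote(weights, speaker, listener, quote_len):
    """Add the length of an exchanged quote to the undirected edge
    weight between two characters, per the hypothesis that relationship
    significance is proportional to dialogue length."""
    edge = tuple(sorted((speaker, listener)))  # undirected: canonical order
    weights[edge] += quote_len
    return weights

w = add_quote(defaultdict(int), "Elizabeth", "Darcy", 42)
```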
Extracting Conversational Networks from Literature
Finally, we normalized each edge’s weight by the length of the novel.
Extracting Conversational Networks from Literature
These coefficients are used for the edge weights .
Introduction
We then construct a network where characters are vertices and edges signify an amount of bilateral conversation between those characters, with edge weights corresponding to the frequency and length of their exchanges.
edge weights is mentioned in 7 sentences in this paper.
Topics mentioned in this paper:
Celikyilmaz, Asli and Thint, Marcus and Huang, Zhiheng
Graph Based Semi-Supervised Learning for Entailment Ranking
So the similarity between two q/a pairs x_i and x_j is represented with the entries w_ij of W ∈ ℝ^{n×n}, i.e., edge weights, and is measured as:
Graph Based Semi-Supervised Learning for Entailment Ranking
The closer the total entailment scores of two pairs, the larger their edge weight.
Graph Based Semi-Supervised Learning for Entailment Ranking
Thus, we modify edge weights in (1) as follows:
Graph Summarization
Our idea of summarization is to create representative vertices of data points that are very close to each other in terms of edge weights .
Graph Summarization
We identify the edge weights w_ij between each node in the boundary B_i via (1), so the boundary is connected.
Graph Summarization
If any testing vector has an edge to a labeled vector, then, with the usage of the local density constraints, the edge weights will not only be affected by that labeled node, but also by how dense that node is within that part of the graph.
edge weights is mentioned in 6 sentences in this paper.
Topics mentioned in this paper:
Jia, Zhongye and Zhao, Hai
Pinyin Input Method Model
The edge weight is the negative logarithm of the conditional probability P(S_{j+1,k} | S_{j,i}) that a syllable S_{j,i} is followed by S_{j+1,k}, which is given by a bigram language model of pinyin syllables:
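The negative-log bigram edge weight described in this excerpt can be sketched as below; the toy counts are made up for illustration, and a real system would estimate them from a pinyin corpus (likely with smoothing, which is omitted here).

```python
import math
from collections import Counter

# Hypothetical syllable bigram/unigram counts for illustration only.
bigram = Counter({("zhong", "guo"): 8, ("zhong", "wen"): 2})
unigram = Counter({"zhong": 10})

def edge_weight(s1, s2):
    """Negative logarithm of the bigram probability P(s2 | s1)."""
    p = bigram[(s1, s2)] / unigram[s1]
    return -math.log(p)
```

Using the negative log turns the product of probabilities along a path into a sum, so the most probable syllable sequence becomes a shortest-path problem.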
Pinyin Input Method Model
Similar to G_s, the edges are from one syllable to all syllables next to it, and the edge weights are the conditional probabilities between them.
Pinyin Input Method Model
• Edges from the start vertex, with edge weight
edge weights is mentioned in 6 sentences in this paper.
Topics mentioned in this paper:
Das, Dipanjan and Petrov, Slav
Approach Overview
The edge weights between the foreign language trigrams are computed using a co-occurence based similarity function, designed to indicate how syntactically
Graph Construction
They considered a semi-supervised POS tagging scenario and showed that one can use a graph over trigram types, and edge weights based on distributional similarity, to improve a supervised conditional random field tagger.
Graph Construction
We use two different similarity functions to define the edge weights among the foreign vertices and between vertices from different languages.
Graph Construction
Table 1: Various features used for computing edge weights between foreign trigram types.
edge weights is mentioned in 6 sentences in this paper.
Topics mentioned in this paper:
Han, Xianpei and Zhao, Jun
The Structural Semantic Relatedness Measure
Concretely, the semantic-graph is defined as follows: A semantic-graph is a weighted graph G = (V, E), where each node represents a distinct concept; and each edge between a pair of nodes represents the semantic relation between the two concepts corresponding to these nodes, with the edge weight indicating the strength of the semantic relation.
The Structural Semantic Relatedness Measure
That is, for each pair of extracted concepts, we identify whether there are semantic relations between them: 1) If there is only one semantic relation between them, we connect these two concepts with an edge, where the edge weight is the strength of the semantic relation; 2) If there is more than one semantic relation between them, we choose the most reliable one, i.e., we choose the semantic relation in the knowledge sources according to the order of WordNet, Wikipedia and the NE co-occurrence corpus (Suchanek et al., 2007).
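The source-preference rule in this excerpt (pick the relation from the most reliable knowledge source) can be sketched as a few lines; the function name and data layout are assumptions for illustration.

```python
# Reliability order from the excerpt: WordNet > Wikipedia > NE co-occurrence.
SOURCE_ORDER = ["WordNet", "Wikipedia", "NE co-occurrence"]

def edge_weight_for(relations):
    """relations: list of (source, strength) pairs found between two
    concepts. Return the strength of the relation from the most
    reliable source as the edge weight."""
    best = min(relations, key=lambda r: SOURCE_ORDER.index(r[0]))
    return best[1]
```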
The Structural Semantic Relatedness Measure
To simplify the description, we assign each node in the semantic-graph an integer index from 1 to |V| and use this index to represent the node; then we can write the adjacency matrix of the semantic-graph G as A, where A[i,j] or A_ij is the edge weight between node i and node j.
edge weights is mentioned in 5 sentences in this paper.
Topics mentioned in this paper:
Alhelbawy, Ayman and Gaizauskas, Robert
Solution Graph
initial node and edge weights set to 1, edges being created wherever REF or JProb are not zero).
Solution Graph
In the first experiment, referred to as PR1, initial confidence is used as an initial node rank for PR and edge weights are uniform, edges, as in the PR baseline, being created wherever REF or JProb are not zero.
Solution Graph
In our second experiment, PRC, entity coherence features are tested by setting the edge weights to the coherence score and using uniform initial node weights.
edge weights is mentioned in 5 sentences in this paper.
Topics mentioned in this paper:
Martschat, Sebastian
A Multigraph Model
In contrast to previous work on similar graph models we do not learn any edge weights from training data.
A Multigraph Model
We aim to employ a simple and efficient clustering scheme on this graph and therefore choose 1-nearest-neighbor clustering: for every m, we choose as antecedent m’s child n such that the sum of edge weights is maximal and positive.
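The 1-nearest-neighbor antecedent choice in this excerpt (pick the child whose edge-weight sum is maximal and positive) can be sketched as follows; the function name and input layout are hypothetical.

```python
def best_antecedent(candidates):
    """candidates: dict mapping each candidate antecedent n to the list
    of edge weights between the mention m and n (a multigraph may have
    several edges per pair). Return the candidate whose weight sum is
    maximal and positive, or None if no sum is positive."""
    best, best_sum = None, 0.0
    for n, weights in candidates.items():
        s = sum(weights)
        if s > best_sum:
            best, best_sum = n, s
    return best
```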
Introduction
In contrast to previous models belonging to this class we do not learn any edge weights but perform inference on the graph structure only which renders our model unsupervised.
Related Work
Nicolae and Nicolae (2006) phrase coreference resolution as a graph clustering problem: they first perform pairwise classification and then construct a graph using the derived confidence values as edge weights .
edge weights is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Carenini, Giuseppe and Ng, Raymond T. and Zhou, Xiaodong
Empirical Evaluation
Table 1: Generalized CWS with Different Edge Weights
Empirical Evaluation
Table 2 compares Page-Rank and CWS under different edge weights .
Empirical Evaluation
Moreover, when we compare Tables 1 and 2 together, we can find that, for each kind of edge weight, Page-Rank has a lower accuracy than the corresponding Generalized CWS.
Summarization Based on the Sentence Quotation Graph
In the rest of this paper, let CWS denote the Generalized ClueWordSummarizer when the edge weight is based on clue words, and let CWS-Cosine and CWS-Semantic denote the summarizer when the edge weight is cosine similarity and semantic similarity respectively.
edge weights is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Zhao, Xin and Jiang, Jing and He, Jing and Song, Yang and Achanauparp, Palakorn and Lim, Ee-Peng and Li, Xiaoming
Method
However, the original TPR ignores the topic context when setting the edge weights; the edge weight is set by counting the number of co-occurrences of the two words within a certain window size.
Method
Taking the topic of “electronic products” as an example, the word “juice” may co-occur frequently with a good keyword “apple” for this topic because of Apple electronic products, so “juice” may be ranked high by this context-free co-occurrence edge weight although it is not related to electronic products.
Method
Here we compute the propagation from w_j to w_i in the context of topic t; namely, the edge weight from w_j to w_i is parameterized by t. In this paper, we compute the edge weight e_t(w_j, w_i) between two words by counting the number of co-occurrences of these two words in tweets assigned to topic t.
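The topic-conditioned counting described in this excerpt can be sketched as below; this is a minimal illustration under the assumption that tweets are already grouped by their assigned topic, and all names are hypothetical.

```python
from collections import Counter
from itertools import combinations

def topic_edge_weights(tweets_by_topic, topic):
    """Count co-occurrences of word pairs within tweets assigned to a
    single topic, yielding topic-specific edge weights."""
    weights = Counter()
    for tweet in tweets_by_topic.get(topic, []):
        # Each tweet is a list of tokens; count each unordered pair once.
        for w1, w2 in combinations(sorted(set(tweet)), 2):
            weights[(w1, w2)] += 1
    return weights
```

Because only tweets assigned to the topic contribute, a pair like ("apple", "juice") gets little weight under an "electronic products" topic, which is exactly the context sensitivity the excerpt motivates.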
edge weights is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Kim, Seokhwan and Lee, Gary Geunbae
Graph Construction
3.2 Edge Weights
Graph Construction
If u_{i,j} is matched to u_k, the edge weight w(u_{i,j}, u_k) is assigned to 1.
Graph Construction
The edge weight w(u_k, u_l) is computed by Jaccard's coefficient between u_k and u_l.
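Jaccard's coefficient, used as the edge weight in this excerpt, is the size of the intersection over the size of the union; a minimal sketch:

```python
def jaccard(u, v):
    """Jaccard's coefficient between two feature sets:
    |u ∩ v| / |u ∪ v|, with 0.0 for two empty sets."""
    u, v = set(u), set(v)
    if not u and not v:
        return 0.0
    return len(u & v) / len(u | v)
```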
edge weights is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Dasgupta, Anirban and Kumar, Ravi and Ravi, Sujith
Using the Framework
Furthermore, the edge weights s(u, v) represent pairwise similarity between sentences or comments (e.g., similarity between views expressed in different comments).
Using the Framework
The edge weights are then used to define the inter-sentence distance metric d(u, v) for the different dispersion functions.
Using the Framework
The edge weights are then normalized across all edges in the
edge weights is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Yang, Hui and Callan, Jamie
The Metric-based Framework
Formally, it is a function d : C × C → ℝ+, where C is the set of terms in T. An ontology metric d on a taxonomy T with edge weights w
The Metric-based Framework
for any term pair (c_x, c_y) ∈ C × C is the sum of all edge weights along the shortest path between the pair:
The Metric-based Framework
In the training data, an ontology metric d(c_x, c_y) for a term pair (c_x, c_y) is generated by assuming every edge weight to be 1 and summing up all the edge weights along the shortest path from c_x to c_y.
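With every edge weight assumed to be 1, the shortest-path sum in this excerpt reduces to BFS distance in the taxonomy graph; a minimal sketch, with a hypothetical adjacency-dict representation:

```python
from collections import deque

def ontology_metric(taxonomy, cx, cy):
    """Sum of edge weights along the shortest path between two terms,
    assuming every edge weight is 1, so BFS distance suffices.
    taxonomy: dict mapping each term to a list of its neighbours."""
    seen, queue = {cx: 0}, deque([cx])
    while queue:
        node = queue.popleft()
        if node == cy:
            return seen[node]
        for nb in taxonomy.get(node, []):
            if nb not in seen:
                seen[nb] = seen[node] + 1
                queue.append(nb)
    return float("inf")  # no path between the terms
```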
edge weights is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Lee, Taesung and Hwang, Seung-won
Methods
where B is a greedy approximate solution of maximum bipartite matching (West, 1999) on a bipartite graph G_B = (V_B, E_B), with edge weights that are defined by T3.
Methods
that maximize the sum of the selected edge weights and that do not share a node as their anchor point.
Related Work
(2010; 2012) leverage two graphs of entities in each language, that are generated from a pair of corpora, with edge weights quantified as the strength of the relatedness of entities.
edge weights is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Razmara, Majid and Siahbani, Maryam and Haffari, Reza and Sarkar, Anoop
Graph-based Lexicon Induction
Let G = (V, E, W) be a graph where V is the set of vertices, E is the set of edges, and W is the edge weight matrix.
Graph-based Lexicon Induction
Intuitively, the edge weight W(u, v) encodes the degree of our belief about the similarity of the soft labeling for nodes u and v. A soft label ŷ_v ∈ Δ^{m+1} is a probability vector in the (m+1)-dimensional simplex, where m is the number of possible labels and the additional dimension accounts for the undefined ⊥ label.
Graph-based Lexicon Induction
The second term (2) enforces the smoothness of the labeling according to the graph structure and edge weights .
edge weights is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Hasan, Kazi Saidul and Ng, Vincent
Keyphrase Extraction Approaches
The edge weight is proportional to the syntactic and/or semantic relevance between the connected candidates.
Keyphrase Extraction Approaches
An edge weight in a SW graph denotes the word’s importance in the sentence in which it appears.
Keyphrase Extraction Approaches
Finally, an edge weight in a WW graph denotes the co-occurrence or knowledge-based similarity between the two connected words.
edge weights is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Pighin, Daniele and Cornolti, Marco and Alfonseca, Enrique and Filippova, Katja
Pattern extraction by sentence compression
Edge weight is defined as a linear function over a feature set: w(e) = w · f(e).
Pattern extraction by sentence compression
Since we consider compressions with different lengths as candidates, from this set we select the one with the maximum averaged edge weight as the final compression.
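The selection rule in this excerpt (among candidate compressions of different lengths, keep the one with the maximum averaged edge weight) can be sketched in a few lines; the function name and input layout are hypothetical.

```python
def best_compression(candidates):
    """candidates: list of (compression, edge_weights) pairs, where
    edge_weights holds the weights of the edges kept in that candidate
    tree. Average rather than sum, so longer candidates are not
    favoured merely for having more edges."""
    return max(candidates, key=lambda c: sum(c[1]) / len(c[1]))[0]
```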
Pattern extraction by sentence compression
Unlike it, the compression-based method keeps the essential prepositional phrase for divorce in the pattern because the average edge weight is greater for the tree with the prepositional phrase.
edge weights is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Rothe, Sascha and Schütze, Hinrich
Extensions
It is straightforward and easy to implement by replacing the row-normalized adjacency matrix A with an arbitrary stochastic matrix P. We can use this edge-weighted PageRank for CoSimRank.
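The replacement described above can be sketched as plain power iteration over an arbitrary row-stochastic matrix P; this is a minimal illustration of edge-weighted PageRank, not the authors' implementation, and the damping factor value is a conventional assumption.

```python
def pagerank(P, d=0.85, iters=50):
    """Power iteration for PageRank, where P is an arbitrary
    row-stochastic matrix (row i = transition probabilities out of
    node i), as in edge-weighted PageRank."""
    n = len(P)
    r = [1.0 / n] * n  # uniform initial rank vector
    for _ in range(iters):
        r = [(1 - d) / n + d * sum(r[j] * P[j][i] for j in range(n))
             for i in range(n)]
    return r
```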
Extensions
We tried a number of different ways of modifying it for weighted graphs: (i) running the random walks with the weighted adjacency matrix as Markov matrix, (ii) storing the weight (the product of the edge weights) of a random walk and using it as a factor if two walks meet, and (iii) a combination of both.
Related Work
(2010) extend SimRank to edge weights , edge labels and multiple graphs.
edge weights is mentioned in 3 sentences in this paper.
Topics mentioned in this paper: