Index of papers in Proc. ACL that mention
  • PageRank
Zhao, Xin and Jiang, Jing and He, Jing and Song, Yang and Achanauparp, Palakorn and Lim, Ee-Peng and Li, Xiaoming
Abstract
We propose a context-sensitive topical PageRank method for keyword ranking and a probabilistic scoring function that considers both relevance and interestingness of keyphrases for keyphrase ranking.
Introduction
For keyword ranking, we modify the Topical PageRank method proposed by Liu et al.
Method
Next for each topic, we run a topical PageRank algorithm to rank keywords and then generate candidate keyphrases using the top ranked keywords (Section 3.3).
Method
3.3 Topical PageRank for Keyword Ranking
Method
Topical PageRank was introduced by Liu et al.
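As a rough illustration of this idea (a sketch, not the authors' exact formulation), the snippet below runs one biased PageRank per topic: the teleport distribution is set to that topic's word distribution, so topic-salient words accumulate higher scores. The matrix and distribution values are invented for illustration.

```python
import numpy as np

def topical_pagerank(M, topic_dist, damping=0.85, iters=100, tol=1e-8):
    """Biased PageRank for one topic: M is a column-stochastic
    transition matrix over the word graph; topic_dist is the
    teleport distribution P(word | topic)."""
    r = np.full(M.shape[0], 1.0 / M.shape[0])
    for _ in range(iters):
        r_next = damping * (M @ r) + (1 - damping) * topic_dist
        if np.abs(r_next - r).sum() < tol:
            return r_next
        r = r_next
    return r

# Illustrative 3-word graph; run once per topic and take the top
# words as seeds for candidate keyphrases.
M = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])
print(topical_pagerank(M, np.array([0.7, 0.2, 0.1])))
```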
Related Work
Mihalcea and Tarau (2004) proposed to use TextRank, a modified PageRank algorithm, to extract keyphrases.
PageRank is mentioned in 15 sentences in this paper.
Yan, Rui and Lapata, Mirella and Li, Xiaoming
Experimental Setup
We set the damping factor μ to 0.15 following the standard PageRank paradigm.
Results
PageRank  0.493  0.481  0.509  0.536  0.604
PersRank  0.501  0.542  0.558  0.560  0.611
DivRank   0.487  0.505  0.518  0.523  0.585
CoRank    0.519  0.546  0.550  0.585  0.617
Results
PageRank  0.557  0.549  0.623  0.559  0.588
PersRank  0.571  0.595  0.655  0.613  0.601
DivRank   0.538  0.591  0.594  0.547  0.589
CoRank    0.637  0.644  0.715  0.643  0.628
Results
Tables 3 and 4 show how the performance of our co-ranking algorithm varies when considering only tweet popularity using the standard PageRank algorithm, personalization (PersRank), and diversity (DivRank).
Tweet Recommendation Framework
Popularity: We rank the tweet network following the PageRank paradigm (Brin and Page, 1998).
Tweet Recommendation Framework
Personalization: The standard PageRank algorithm performs a random walk: starting from any node, it either follows an outgoing link according to the weighted matrix M or jumps to a random node with equal probability.
Tweet Recommendation Framework
In contrast to PageRank, DivRank assumes that the transition probabilities change over time.
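For concreteness, here is a sketch of the cumulative approximation of DivRank (Mei et al., 2010), presumably the variant meant: transition probabilities are reweighted at each step by the visit mass nodes have already accumulated, so dominant nodes absorb their neighbors' score and the resulting ranking favors diversity. The column-stochastic prior P0 is an illustrative input.

```python
import numpy as np

def divrank(P0, damping=0.85, iters=100):
    """Cumulative DivRank sketch. P0[v, u] is the prior probability
    of stepping u -> v (columns sum to 1). Each iteration reinforces
    transitions into nodes by the visit mass they hold so far."""
    n = P0.shape[0]
    visits = np.ones(n)                    # accumulated visit mass
    r = np.full(n, 1.0 / n)                # current score vector
    teleport = np.full(n, 1.0 / n)
    for _ in range(iters):
        W = P0 * visits[:, None]           # reinforce by visit mass
        W = W / W.sum(axis=0, keepdims=True)  # renormalize columns
        r = damping * (W @ r) + (1 - damping) * teleport
        visits += r                        # transitions change over time
    return r
```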
PageRank is mentioned in 11 sentences in this paper.
Ogura, Yukari and Kobayashi, Ichiro
Abstract
On the other hand, we apply the PageRank algorithm to rank important words in each document.
Introduction
In contrast, we apply the PageRank algorithm (Brin et al., 1998) to this problem, because the algorithm scores the centrality of a node in a graph, and important words can be regarded as having high centrality (Hassan et al., 2007).
Related studies
For text classification, there are many studies that use the PageRank algorithm.
Related studies
They apply topic-specific PageRank to a graph of both words and documents, and introduce Polarity PageRank, a new semi-supervised sentiment classifier that integrates lexicon induction with document classification.
Related studies
As a study of topic detection using important words obtained by the PageRank algorithm, Kubek et al.
Techniques for text classification
In particular, (Hassan et al., 2007) shows that the PageRank score is better suited to ranking important words than tf-idf.
Techniques for text classification
In this study, we follow their method and use the PageRank algorithm to decide important words.
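A minimal sketch of this kind of PageRank-based important-word selection, assuming a sliding co-occurrence window over tokens (the window size and graph construction are illustrative, not the paper's exact settings):

```python
import networkx as nx

def important_words(tokens, window=3):
    """Score words by PageRank centrality over a co-occurrence graph:
    words appearing together within the window are linked."""
    g = nx.Graph()
    for i, w in enumerate(tokens):
        for v in tokens[i + 1 : i + window]:
            if w != v:
                g.add_edge(w, v)
    return nx.pagerank(g, alpha=0.85)

tokens = "pagerank scores the centrality of a node in a word graph".split()
ranked = sorted(important_words(tokens).items(), key=lambda kv: -kv[1])
print(ranked[:3])
```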
PageRank is mentioned in 31 sentences in this paper.
Rothe, Sascha and Schütze, Hinrich
Abstract
We present equivalent formalizations that show CoSimRank’s close relationship to Personalized PageRank and SimRank and also show how we can take advantage of fast matrix multiplication algorithms to compute CoSimRank.
CoSimRank
We first give an intuitive introduction to CoSimRank as a Personalized PageRank (PPR) derivative.
CoSimRank
3.1 Personalized PageRank
CoSimRank
Haveliwala (2002) introduced Personalized PageRank — or topic-sensitive PageRank — based on the idea that the uniform damping vector p(0) can be replaced by a personalized vector, which depends on node i.
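In the usual notation (damping conventions vary across papers), the iteration being described is

    p^{(k)} = d\, A\, p^{(k-1)} + (1-d)\, p^{(0)}

where A is the normalized adjacency matrix and p^{(0)} is uniform for standard PageRank or concentrated on node i for Personalized PageRank.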
Extensions
The use of weighted edges was first proposed in the PageRank patent.
Introduction
These algorithms are often based on PageRank (Brin and Page, 1998) and other centrality measures (e.g., (Erkan and Radev, 2004)).
Introduction
This paper introduces CoSimRank, a new graph-theoretic algorithm for computing node similarity that combines features of SimRank and PageRank.
Related Work
Another important similarity measure is cosine similarity of Personalized PageRank (PPR) vectors.
Related Work
LexRank (Erkan and Radev, 2004) is similar to PPR+cos in that it combines PageRank and cosine; it initializes the sentence similarity matrix of a document using cosine and then applies PageRank to compute lexical centrality.
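A compact sketch of that pipeline; TfidfVectorizer and the 0.1 threshold are illustrative choices, not parameters reported in the paper:

```python
import numpy as np
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def lexrank(sentences, threshold=0.1):
    """LexRank-style lexical centrality: initialize the sentence graph
    with cosine similarities, then run PageRank over it."""
    tfidf = TfidfVectorizer().fit_transform(sentences)
    sim = cosine_similarity(tfidf)
    np.fill_diagonal(sim, 0.0)       # no self-loops
    sim[sim < threshold] = 0.0       # drop weak edges
    g = nx.from_numpy_array(sim)     # weighted, undirected graph
    return nx.pagerank(g, weight="weight")
```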
Related Work
These approaches use at least one of cosine similarity, PageRank and SimRank.
PageRank is mentioned in 13 sentences in this paper.
Zou, Bowei and Zhou, Guodong and Zhu, Qiaoming
Baselines
In addition, the PageRank algorithm (Page et al., 1998) is adopted to optimize the graph model.
Baselines
Finally, the weights of word nodes are calculated using the PageRank algorithm as follows:
Baselines
where d is the damping factor as in the PageRank algorithm.
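The equation itself is not carried over into this index. The standard weighted PageRank form for word nodes that the surrounding sentences describe (after Mihalcea and Tarau, 2004) is

    S(V_i) = (1-d) + d \sum_{V_j \in \mathrm{In}(V_i)} \frac{w_{ji}}{\sum_{V_k \in \mathrm{Out}(V_j)} w_{jk}}\, S(V_j)

where w_{ji} is the weight of the edge from V_j to V_i; whether this matches the paper's exact variant is an assumption.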
Introduction
In addition, the standard PageRank algorithm is employed to optimize the graph model.
PageRank is mentioned in 7 sentences in this paper.
Feng, Song and Kang, Jun Seok and Kuznetsova, Polina and Choi, Yejin
Connotation Induction Algorithms
We develop induction algorithms based on three distinct types of algorithmic framework that have been shown successful for the analogous task of sentiment lexicon induction: HITS & PageRank (§2.1), Label/Graph Propagation (§2.2), and Constraint Optimization via Integer Linear Programming (§2.3).
Connotation Induction Algorithms
2.1 HITS & PageRank
Connotation Induction Algorithms
Feng et al. (2011) explored the use of HITS (Kleinberg, 1999) and PageRank (Page et al., 1999) to induce the general connotation of words, hinging on the linguistic phenomena of selectional preference and semantic prosody, i.e., connotative predicates influencing the connotation of their arguments.
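For concreteness, a generic HITS power-iteration sketch; the predicate-as-hub reading in the comments is an illustrative interpretation of the excerpt, not Feng et al.'s exact graph construction:

```python
import numpy as np

def hits(A, iters=50):
    """HITS power iteration. A[i, j] = 1 if hub i (e.g., a connotative
    predicate) links to authority j (e.g., an argument word)."""
    hubs = np.ones(A.shape[0])
    auths = np.ones(A.shape[1])
    for _ in range(iters):
        auths = A.T @ hubs               # authorities gather hub support
        auths /= np.linalg.norm(auths)
        hubs = A @ auths                 # hubs gather authority support
        hubs /= np.linalg.norm(hubs)
    return hubs, auths
```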
Experimental Result I
We find that the use of label propagation alone [PRED-ARG (CP)] improves performance substantially over comparable graph constructions with different graph analysis algorithms, in particular the HITS and PageRank approaches of Feng et al.
PageRank is mentioned in 4 sentences in this paper.
Pilehvar, Mohammad Taher and Jurgens, David and Navigli, Roberto
A Unified Semantic Representation
To construct each semantic signature, we use the iterative method for calculating topic-sensitive PageRank (Haveliwala, 2002).
A Unified Semantic Representation
The PageRank may then be computed using:
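The equation is not reproduced in this index. The topic-sensitive iteration (Haveliwala, 2002) that the excerpt refers to is conventionally written as

    v^{(t+1)} = (1-\alpha)\, M\, v^{(t)} + \alpha\, v_{\mathrm{topic}}

where M is the transition matrix over the semantic network and v_topic concentrates the restart probability on the seed senses; the paper's own notation may differ.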
A Unified Semantic Representation
For our semantic signatures we used the UKB off-the-shelf implementation of topic-sensitive PageRank.
Experiment 1: Textual Similarity
As our WSD system, we used UKB, a state-of-the-art knowledge-based WSD system that is based on the same topic-sensitive PageRank algorithm used by our approach.
PageRank is mentioned in 4 sentences in this paper.
Baumel, Tal and Cohen, Raphael and Elhadad, Michael
Algorithms
After creating the graph, PageRank is run to rank sentences.
Algorithms
Finally, instead of PageRank, we used SimRank (Jeh and Widom, 2002) to identify the nodes most similar to the query node and not only the central sentences in the graph.
Previous Work
After the graph is generated, the PageRank algorithm (Page et al., 1999) is used to determine the most central linguistic units in the graph.
Previous Work
PageRank spreads the query similarity of a vertex to its close neighbors, so that sentences similar to other sentences that are themselves similar to the query are ranked higher.
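One standard way to realize this behavior is the query-sensitive formulation in the style of Otterbacher et al. (2005), an assumption here rather than this paper's stated equation:

    p(s) = d\, \frac{\mathrm{sim}(s,q)}{\sum_{s'} \mathrm{sim}(s',q)} + (1-d) \sum_{v \in \mathrm{adj}(s)} \frac{w_{vs}}{\sum_{z \in \mathrm{adj}(v)} w_{vz}}\, p(v)

where the first term restarts the walk at query-similar sentences and the second spreads score along weighted edges.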
PageRank is mentioned in 4 sentences in this paper.
Gao, Wei and Blitzer, John and Zhou, Ming and Wong, Kam-Fai
Features and Similarities
Standard features for learning to rank include various query-document features, e.g., BM25 (Robertson, 1997), as well as query-independent features, e.g., PageRank (Brin and Page, 1998).
Features and Similarities
These include sets of measures such as BM25, language-model-based IR score, and PageRank.
Features and Similarities
PageRank: PageRank score (Brin and Page, 1998)
PageRank is mentioned in 3 sentences in this paper.
Wang, Jia and Li, Qing and Chen, Yuanzhu Peter and Lin, Zhangxi
System Design
To do that, we employ a variant of the PageRank algorithm (Brin and Page, 1998).
System Design
In line with the PageRank algorithm, we define the authority of a user as
System Design
Considering the semantic similarity between nodes, we use another variant of the PageRank algorithm to calculate the weight of a comment
PageRank is mentioned in 3 sentences in this paper.
Li, Fangtao and Gao, Yang and Zhou, Shuchang and Si, Xiance and Dai, Decheng
Experiments
When using the user graph as feature, we compute the authority score for each user with PageRank as shown in Equation 1.
Proposed Features
PageRank Score: We employ the PageRank (Page et al., 1999) score of each URL as popularity score.
Proposed Features
We compute the user’s authority score (AS) based on the link analysis algorithm PageRank:
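Equation 1 is not reproduced in this index; the standard PageRank recurrence that the excerpt describes, with users as nodes and interactions as links, would read

    AS(u_i) = \frac{1-d}{N} + d \sum_{u_j \in \mathrm{In}(u_i)} \frac{AS(u_j)}{|\mathrm{Out}(u_j)|}

where N is the number of users and d the damping factor; treating this as the paper's Equation 1 is an assumption.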
PageRank is mentioned in 3 sentences in this paper.
Yan, Rui and Gao, Mingkun and Pavlick, Ellie and Callison-Burch, Chris
Evaluation
We set the damping factor μ to 0.85, following the standard PageRank paradigm.
Problem Formulation
The standard PageRank algorithm starts from an arbitrary node and randomly chooses either to follow a random outgoing edge (considering the weighted transition matrix) or to jump to a random node (treating all nodes with equal probability).
Problem Formulation
where 1 is a vector with all elements equal to 1 and whose size corresponds to the size of V0 or VT; μ is the damping factor, usually set to 0.85, as in the PageRank algorithm.
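Assembled from the pieces quoted above, the update has the generic matrix form

    p = \mu\, M\, p + \frac{1-\mu}{N}\, \mathbf{1}

with M the weighted transition matrix and 1 the all-ones vector of size N. Note that the two papers excerpted in this index attach the name "damping factor" to opposite sides of the split (μ = 0.15 in one, 0.85 here), so the placement of μ is a convention, not a claim about this paper's exact equation.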
PageRank is mentioned in 3 sentences in this paper.