Optimizing Informativeness and Readability for Sentiment Summarization
Nishikawa, Hitoshi and Hasegawa, Takaaki and Matsuo, Yoshihiro and Kikui, Genichiro

Article Structure

Abstract

We propose a novel algorithm for sentiment summarization that takes account of informativeness and readability simultaneously.

Introduction

The Web holds a massive number of reviews describing the sentiments of customers about products and services.

Optimizing Sentence Sequence

Formally, we define a summary s* = (s0, s1, ...).

Experiments

This section evaluates our method in terms of ROUGE score and readability.

Conclusion

This paper proposed a novel algorithm for sentiment summarization that takes account of informativeness and readability simultaneously.

Topics

beam search

Appears in 4 sentences as: beam search (4)
In Optimizing Informativeness and Readability for Sentiment Summarization
  1. Our algorithm efficiently searches for the best sequence of sentences by using dynamic programming and beam search.
    Page 1, “Introduction”
  2. To alleviate this, we find an approximate solution by adopting the dynamic programming technique of the Held and Karp Algorithm (Held and Karp, 1962) and beam search.
    Page 3, “Optimizing Sentence Sequence”
  3. Therefore, we adopt the Held and Karp Algorithm and beam search to find approximate solutions.
    Page 3, “Optimizing Sentence Sequence”
  4. The preferred sequence is determined by using dynamic programming and beam search.
    Page 5, “Conclusion”

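Taken together, the sentences above describe a search that grows a summary one sentence at a time and prunes the hypothesis set at each step. A minimal sketch of that beam loop follows; the beam width, the hypothesis layout, and connectivity_score are all assumptions for illustration, not the authors' code:

    import heapq

    BEAM_WIDTH = 10  # assumed width; these excerpts do not give the paper's value

    def connectivity_score(prev, cur):
        # Placeholder for the learned connectivity w · φ(prev, cur); see the
        # "feature vector" topic below.
        return 0.0

    def beam_step(beam, sentences):
        # Expand every hypothesis (score, order) by every unused sentence,
        # then keep only the BEAM_WIDTH highest-scoring expansions.
        expanded = []
        for score, order in beam:
            for i, sentence in enumerate(sentences):
                if i in order:
                    continue
                prev = sentences[order[-1]] if order else None
                gain = connectivity_score(prev, sentence) if prev is not None else 0.0
                expanded.append((score + gain, order + (i,)))
        return heapq.nlargest(BEAM_WIDTH, expanded, key=lambda h: h[0])

Iterating beam_step from the empty hypothesis [(0.0, ())] until the length budget is reached reproduces the shape of the search in sentence 1; the Held-and-Karp merge from sentences 2 and 3 is sketched under the next topic.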

dynamic programming

Appears in 4 sentences as: dynamic programming (4)
In Optimizing Informativeness and Readability for Sentiment Summarization
  1. Our algorithm efficiently searches for the best sequence of sentences by using dynamic programming and beam search.
    Page 1, “Introduction”
  2. To alleviate this, we find an approximate solution by adopting the dynamic programming technique of the Held and Karp Algorithm (Held and Karp, 1962) and beam search.
    Page 3, “Optimizing Sentence Sequence”
  3. In the search procedure, our dynamic programming based algorithm retains just the hypothesis with maximum score among the hypotheses that have the same sentences and the same last sentence.
    Page 3, “Optimizing Sentence Sequence”
  4. The preferred sequence is determined by using dynamic programming and beam search.
    Page 5, “Conclusion”

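Sentence 3 above is the Held-and-Karp part of the search: two hypotheses that cover the same set of sentences and end with the same sentence are interchangeable for the rest of the search, so only the higher-scoring one need be kept. A hypothetical sketch of that merge rule (the hypothesis layout is assumed):

    def merge_equivalent(hypotheses):
        # hypotheses: iterable of (score, covered, last, order) tuples, where
        # covered is a frozenset of sentence indices and last is the index of
        # the final sentence of the partial sequence.
        best = {}
        for h in hypotheses:
            score, covered, last, _ = h
            key = (covered, last)  # the Held-Karp state
            if key not in best or best[key][0] < score:
                best[key] = h  # retain only the maximum-score hypothesis
        return list(best.values())

Applying this merge before each beam-pruning step keeps the beam from being filled with redundant reorderings of the same sentence set.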

named entity

Appears in 4 sentences as: named entities (1) named entity (3)
In Optimizing Informativeness and Readability for Sentiment Summarization
  1. To this, we add named entity tags (e.g. ...).
    Page 3, “Optimizing Sentence Sequence”
  2. We observe that the first sentence of a review of a restaurant frequently contains named entities indicating location.
    Page 3, “Optimizing Sentence Sequence”
  3. Note that our method learns w from texts automatically annotated by a POS tagger and a named entity tagger.
    Page 3, “Optimizing Sentence Sequence”
  4. We used a CRF-based Japanese dependency parser (Imamura et al., 2007) and a named entity recognizer (Suzuki et al., 2006) for sentiment extraction and for constructing feature vectors for the readability score, respectively.
    Page 4, “Experiments”

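Sentences 1 and 3 above indicate that the connectivity features are read off automatic POS and named entity annotations of adjacent sentences. A sketch of such a feature extractor; the tag names and feature templates are entirely illustrative, not the paper's actual feature set:

    def pair_features(prev_tokens, cur_tokens):
        # Each token is a (surface, pos, ne) triple produced by a POS tagger
        # and a named entity tagger.
        feats = {}
        for _, pos, ne in cur_tokens:
            feats["cur_pos=" + pos] = feats.get("cur_pos=" + pos, 0) + 1
            if ne != "O":
                feats["cur_ne=" + ne] = feats.get("cur_ne=" + ne, 0) + 1
        # Pair the last entity type of the previous sentence with the first
        # of the current one, to capture transitions such as a LOCATION-bearing
        # opening sentence (sentence 2 above).
        prev_nes = [ne for _, _, ne in prev_tokens if ne != "O"]
        cur_nes = [ne for _, _, ne in cur_tokens if ne != "O"]
        if prev_nes and cur_nes:
            feats["ne_transition=" + prev_nes[-1] + ">" + cur_nes[0]] = 1
        return feats

Because the annotations come from automatic taggers (sentence 3), the same extractor serves both training and search time.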

feature vector

Appears in 3 sentences as: feature vector (2) feature vectors (1)
In Optimizing Informativeness and Readability for Sentiment Summarization
  1. where, given two adjacent sentences si and si+1, w · φ(si, si+1), which measures the connectivity of the two sentences, is the inner product of w and φ(si, si+1); w is a parameter vector and φ(si, si+1) is a feature vector of the two sentences.
    Page 3, “Optimizing Sentence Sequence”
  2. We also define a feature vector Φ(S) of the entire sequence S = (s0, s1, ...).
    Page 3, “Optimizing Sentence Sequence”
  3. We used a CRF-based Japanese dependency parser (Imamura et al., 2007) and a named entity recognizer (Suzuki et al., 2006) for sentiment extraction and for constructing feature vectors for the readability score, respectively.
    Page 4, “Experiments”

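Sentences 1 and 2 above define a linear model: the connectivity of an adjacent pair is the inner product w · φ(si, si+1), and a whole sequence S is scored through Φ(S), the sum of φ over its adjacent pairs. A small sketch with sparse dict-based vectors (the representation is an assumption):

    def dot(w, phi):
        # Inner product of the parameter vector and a sparse feature vector.
        return sum(w.get(f, 0.0) * v for f, v in phi.items())

    def sequence_features(sentences, phi):
        # Φ(S): the sum of φ(s_i, s_{i+1}) over all adjacent pairs of S.
        total = {}
        for prev, cur in zip(sentences, sentences[1:]):
            for f, v in phi(prev, cur).items():
                total[f] = total.get(f, 0.0) + v
        return total

    def sequence_score(w, sentences, phi):
        # By linearity, summing w · φ over the pairs equals w · Φ(S).
        return dot(w, sequence_features(sentences, phi))

Because the sequence score decomposes over adjacent pairs, it is exactly the kind of objective the Held-and-Karp dynamic program above can optimize.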

sentiment lexicon

Appears in 3 sentences as: sentiment lexicon (3)
In Optimizing Informativeness and Readability for Sentiment Summarization
  1. Sentiments are extracted using a sentiment lexicon and pattern matching over the dependency trees of sentences.
    Page 2, “Optimizing Sentence Sequence”
  2. Note that since our method relies only on a sentiment lexicon, the extractable aspects are unlimited.
    Page 2, “Optimizing Sentence Sequence”
  3. Since we aim to summarize Japanese reviews, we utilize a Japanese sentiment lexicon (Asano et al., 2008).
    Page 2, “Optimizing Sentence Sequence”

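Sentence 1 above describes extraction as lexicon lookup plus pattern matching on dependency trees. A toy sketch of that pipeline with a made-up English lexicon and a single, deliberately simple pattern; the paper itself uses the Japanese sentiment lexicon of Asano et al. (2008) and a CRF-based dependency parser:

    # Hypothetical polarity lexicon standing in for Asano et al. (2008).
    LEXICON = {"delicious": "+", "friendly": "+", "slow": "-"}

    def extract_sentiments(dependency_edges):
        # dependency_edges: (head, dependent) pairs from a parsed sentence.
        # The pattern here is minimal: a lexicon word modifying a head yields
        # an (aspect, polarity) pair. Real patterns would be richer.
        sentiments = []
        for head, dep in dependency_edges:
            if dep in LEXICON:
                sentiments.append((head, LEXICON[dep]))
        return sentiments

    # e.g. extract_sentiments([("pasta", "delicious"), ("service", "slow")])
    # -> [("pasta", "+"), ("service", "-")]

Because the aspect is simply whatever head the lexicon word attaches to, the extractable aspects are open-ended, which is the point of sentence 2 above.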