A Bayesian Model for Joint Unsupervised Induction of Sentiment, Aspect and Discourse Representations
Lazaridou, Angeliki and Titov, Ivan and Sporleder, Caroline

Article Structure

Abstract

We propose a joint model for unsupervised induction of sentiment, aspect and discourse information and show that by incorporating a notion of latent discourse relations into the model, we improve prediction accuracy for aspect and sentiment polarity at the sub-sentential level.

Introduction


Topics

EDUs

Appears in 15 sentences as: EDUs (15)
In A Bayesian Model for Joint Unsupervised Induction of Sentiment, Aspect and Discourse Representations
  1. In other words, for two adjacent EDUs not connected by any of the above three relations, the prior probability of staying at the same topic and sentiment level is higher than that of picking a new topic and sentiment level (i.e.
    Page 3, “Introduction”
  2. Drawing model parameters First, at the corpus level, we draw a distribution θ over four discourse relations: the three relations defined in Table 1 and an additional, fourth dummy relation indicating that there is no relation between two adjacent EDUs (NoRelation). (A toy sketch of these generative steps follows this list.)
    Page 4, “Introduction”
  3. These parameters encode the intuition that most pairs of EDUs do not exhibit a discourse relation relevant for the task (i.e.
    Page 4, “Introduction”
  4. This distribution encodes our beliefs about sentiment transitions between EDUs s and s + 1 related through c. For example, the corresponding transition distribution would assign higher probability mass to the positive sentiment polarity (+1) than to the other two sentiment levels (0, -1).
    Page 4, “Introduction”
  5. As the discourse relation between the EDUs has already been chosen, we have some expectations about the sentiment and aspect values of the following EDU, which are encoded by the sentiment- and aspect-transition distributions.
    Page 4, “Introduction”
  6. Dataset and Annotation The dataset we created consists of 13,559 hotel reviews from TripAdvisor.com. Since our modeling is performed on the EDU level, all sentences were segmented using the SLSEG software package. As a result, our dataset consists of 322,935 EDUs.
    Page 5, “Introduction”
  7. To create the gold standard, nine annotators annotated a random subset of our dataset (65 reviews, 1,541 EDUs).
    Page 5, “Introduction”
  8. The annotators were presented with the whole review partitioned into EDUs and were asked to annotate every EDU with its aspect and sentiment (i.e.
    Page 5, “Introduction”
  9. The label rest captures cases where EDUs do not refer to any aspect or refer to a very rare aspect.
    Page 5, “Introduction”
  10. Table 4: Separate evaluation (F1) of the “marked” and the “unmarked” EDUs.
    Page 6, “Introduction”
  11. Table 5: Examples of EDUs where local information is not sufficiently informative.
    Page 6, “Introduction”
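
To make the generative steps in items 1–5 concrete, here is a minimal, hypothetical sketch in Python/NumPy. Everything specific in it is assumed for illustration: the relation names, the Dirichlet hyperparameters, the 0.8 "stickiness" for NoRelation, and the names theta and psi; the paper's actual parameterization may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# All names and numbers below are illustrative assumptions, not the
# paper's actual parameterization.
RELATIONS = ["Rel1", "Rel2", "Rel3", "NoRelation"]  # three from Table 1 + dummy
SENTIMENTS = [-1, 0, 1]

# Corpus-level distribution theta over the four relations, drawn from a
# Dirichlet prior skewed towards NoRelation: most adjacent EDU pairs do not
# exhibit a task-relevant discourse relation.
theta = rng.dirichlet([1.0, 1.0, 1.0, 10.0])

# One sentiment-transition distribution per (relation, current sentiment):
# given relation c and the sentiment of EDU s, a distribution over the
# sentiment of EDU s + 1.
psi = {(c, s): rng.dirichlet(np.ones(len(SENTIMENTS)))
       for c in RELATIONS for s in SENTIMENTS}

def next_sentiment(current: int) -> int:
    """Sample the sentiment of EDU s + 1 given the sentiment of EDU s."""
    c = rng.choice(RELATIONS, p=theta)  # draw a discourse relation
    if c == "NoRelation" and rng.random() < 0.8:  # assumed "stickiness"
        # No relation: staying at the same sentiment (and topic) is a priori
        # more likely than switching.
        return current
    # Otherwise sample from the relation-specific transition distribution.
    return int(rng.choice(SENTIMENTS, p=psi[(c, current)]))
```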


soft constraints

Appears in 3 sentences as: soft constraints (3)
In A Bayesian Model for Joint Unsupervised Induction of Sentiment, Aspect and Discourse Representations
  1. Without this property we would not be able to encode soft constraints imposed by the discourse relations.
    Page 3, “Introduction”
  2. These are only soft constraints that have to be taken into consideration along with the information provided by the aspect-sentiment model.
    Page 4, “Introduction”
  3. They use an integer linear programming framework to enforce agreement between classifiers and soft constraints provided by discourse annotations (a toy sketch of this idea follows this list).
    Page 8, “Introduction”
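
Item 3 above describes related work that couples classifier predictions with discourse-derived soft constraints via integer linear programming. The toy sketch below illustrates the underlying idea only: it uses brute-force search over a two-EDU example instead of an ILP solver, and all scores, the implied relation, and the penalty weight are invented.

```python
from itertools import product

# Hypothetical illustration of a "soft constraint": per-EDU classifier scores
# are traded off against a penalty for sentiment assignments that disagree
# with a discourse relation. The numbers are made up.
SENTIMENTS = [-1, 0, 1]

# Local classifier scores for two adjacent EDUs (higher = more confident).
scores = [
    {-1: 0.1, 0: 0.2, 1: 0.7},    # EDU s: locally looks positive
    {-1: 0.4, 0: 0.35, 1: 0.25},  # EDU s + 1: locally ambiguous
]

def agreement_penalty(prev: int, cur: int) -> float:
    """Soft constraint: a (hypothetical) continuation relation between the
    two EDUs prefers, but does not force, the same sentiment."""
    return 0.0 if prev == cur else 0.5  # assumed penalty weight

# Maximize local evidence minus constraint violations over joint assignments.
best = max(
    product(SENTIMENTS, repeat=2),
    key=lambda y: scores[0][y[0]] + scores[1][y[1]]
                  - agreement_penalty(y[0], y[1]),
)
print(best)  # (1, 1): the constraint pulls the ambiguous EDU to positive
```

Because the penalty is finite, the constraint can be overridden when the local evidence against agreement is strong enough, which is exactly what makes it "soft".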
