Recognizing Authority in Dialogue with an Integer Linear Programming Constrained Model
Mayfield, Elijah and Penstein Rosé, Carolyn

Article Structure

Abstract

We present a novel computational formulation of speaker authority in discourse.

Introduction

In this work, we seek to formalize the ways speakers position themselves in a way that maintains a notion of discourse structure, and which can be aggregated to evaluate a speaker’s overall stance in a dialogue.

Background

The Negotiation framework, as formulated by the SFL community, places a special emphasis on how speakers function in a discourse as sources or recipients of information or action.

Topics

ILP

Appears in 8 sentences as: ILP (8)
In Recognizing Authority in Dialogue with an Integer Linear Programming Constrained Model
  1. We formulate our constraints using Integer Linear Programming (ILP).
    Page 6, “Background”
  2. No segmentation model is used and no ILP constraints are enforced.
    Page 7, “Background”
  3. ILP constraints are enforced between these models.
    Page 7, “Background”
  4. Contextual: This model uses our enhanced feature space from section 4.2, with no segmentation model and no ILP constraints enforced.
    Page 7, “Background”
  5. Contextual+ILP: This model uses the enhanced feature spaces for both Negotiation labels and segment boundaries from section 4.2 to enforce ILP constraints.
    Page 7, “Background”
  6. (Joachims, 1999), and Learning-Based Java for ILP inference (Rizzolo and Roth, 2010).
    Page 7, “Background”
  7. We observe a significant improvement when ILP constraints are applied to this model.
    Page 7, “Background”
  8. However, the gains found in the contextual model are somewhat orthogonal to the gains from using ILP constraints, as applying those constraints to the contextual model results in further performance gains (and a high r2 coefficient of 0.947).
    Page 7, “Background”
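The mechanism described in the entries above, classifier scores combined with boolean constraints on the label sequence, can be sketched as a small integer program. The paper used Learning-Based Java (Rizzolo and Roth, 2010) for inference; the sketch below substitutes SciPy's `milp` solver, and the labels, confidence scores, and the single sequence constraint are invented for illustration.

```python
# Sketch of ILP-constrained label inference. SciPy's milp stands in for the
# Learning-Based Java toolkit used in the paper; labels, scores, and the
# sequence constraint below are illustrative inventions.
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

labels = ["K1", "K2", "A1", "A2"]          # illustrative Negotiation labels
scores = np.array([                        # hypothetical classifier confidences
    [0.5, 0.6, 0.1, 0.0],                  # contribution 0
    [0.1, 0.9, 0.3, 0.0],                  # contribution 1
])
n, m = scores.shape                        # x[i*m + j] = 1 iff contribution i gets label j

# Each contribution receives exactly one label.
one_hot = np.zeros((n, n * m))
for i in range(n):
    one_hot[i, i * m:(i + 1) * m] = 1

# Illustrative boolean sequence constraint: contribution 1 may take K2 only
# if contribution 0 takes K1, encoded linearly as x[1,K2] - x[0,K1] <= 0.
link = np.zeros((1, n * m))
link[0, m + labels.index("K2")] = 1
link[0, labels.index("K1")] = -1

res = milp(
    c=-scores.ravel(),                     # milp minimizes, so negate to maximize
    constraints=[LinearConstraint(one_hot, lb=1, ub=1),
                 LinearConstraint(link, lb=-np.inf, ub=0)],
    integrality=np.ones(n * m),            # all decision variables binary
    bounds=Bounds(0, 1),
)
chosen = [labels[j] for j in res.x.reshape(n, m).argmax(axis=1)]
print(chosen)  # ['K1', 'K2'] -- per-contribution argmax alone would pick the invalid ['K2', 'K2']
```

Taking the argmax of each contribution independently would choose K2 twice and violate the constraint; the ILP trades a little local confidence for a globally valid sequence.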

See all papers in Proc. ACL 2011 that mention ILP.

discourse structure

Appears in 7 sentences as: discourse structure (6) discourse’s structure (1)
In Recognizing Authority in Dialogue with an Integer Linear Programming Constrained Model
  1. In this work, we seek to formalize the ways speakers position themselves in a way that maintains a notion of discourse structure, and which can be aggregated to evaluate a speaker’s overall stance in a dialogue.
    Page 1, “Introduction”
  2. Constructs such as Initiative and Control (Whittaker and Stenton, 1988), which attempt to operationalize the authority over a discourse’s structure, fall under the umbrella of positioning.
    Page 1, “Introduction”
  3. Much work has examined the emergence of discourse structure from the choices speakers make at the linguistic and intentional level (Grosz and Sidner, 1986).
    Page 2, “Background”
  4. In prior work, the way that people influence discourse structure is described through the two tightly-related concepts of initiative and control.
    Page 2, “Background”
  5. However, that body of work focuses on influencing discourse structure through positioning.
    Page 2, “Background”
  6. We then enhance this classifier by adding constraints, which allow expert knowledge of discourse structure to be enforced in classification.
    Page 6, “Background”
  7. Our model includes a simple understanding of discourse structure while also encoding information about the types of moves used, and the certainty of a speaker as a source of information.
    Page 8, “Background”

bag-of-words

Appears in 6 sentences as: bag-of-words (6)
In Recognizing Authority in Dialogue with an Integer Linear Programming Constrained Model
  1. Our baseline approach to both problems is to use a bag-of-words model of the contribution, and use machine learning for classification.
    Page 5, “Background”
  2. We build a contextual feature space, described in section 4.2, to enhance our baseline bag-of-words model.
    Page 5, “Background”
  3. This is a distinction that a bag-of-words model would have difficulty with.
    Page 5, “Background”
  4. To incorporate the insights above into our model, we append features to our bag-of-words model.
    Page 5, “Background”
  5. Baseline: This model uses a bag-of-words feature space as input to an SVM classifier.
    Page 7, “Background”
  6. We observe that the baseline bag-of-words model performs well above random chance (kappa of 0.465); however, its accuracy is still very low and its ability to predict Authoritativeness ratio of a speaker is not particularly high (r2 of 0.354 with ratios from manually labelled data).
    Page 7, “Background”
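The baseline described above, a bag-of-words representation of each contribution fed to an SVM, can be sketched in a few lines. The paper trained SVMlight (Joachims, 1999); scikit-learn's `LinearSVC` stands in here, and the toy contributions and Negotiation labels are invented.

```python
# Minimal sketch of the bag-of-words baseline. scikit-learn's LinearSVC
# stands in for SVMlight; contributions and labels are invented toys.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

contributions = ["go left around the lake", "okay", "you should see a mill", "right then"]
labels = ["A2", "A1", "K1", "K2"]          # one hypothetical Negotiation label each

vectorizer = CountVectorizer()             # contribution -> sparse word-count vector
X = vectorizer.fit_transform(contributions)
clf = LinearSVC().fit(X, labels)           # linear SVM over the bag-of-words space

pred = clf.predict(vectorizer.transform(["go around the mill"]))[0]
print(pred)
```

The contextual model of section 4.2 keeps this same pipeline and simply appends extra features to the word-count matrix before training.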

feature space

Appears in 6 sentences as: Feature Space (1) feature space (4) feature spaces (1)
In Recognizing Authority in Dialogue with an Integer Linear Programming Constrained Model
  1. We build a contextual feature space, described in section 4.2, to enhance our baseline bag-of-words model.
    Page 5, “Background”
  2. 4.2 Contextual Feature Space Additions
    Page 5, “Background”
  3. Baseline: This model uses a bag-of-words feature space as input to an SVM classifier.
    Page 7, “Background”
  4. Baseline+ILP: This model uses the baseline feature space as input to both classification and segmentation models.
    Page 7, “Background”
  5. Contextual: This model uses our enhanced feature space from section 4.2, with no segmentation model and no ILP constraints enforced.
    Page 7, “Background”
  6. Contextual+ILP: This model uses the enhanced feature spaces for both Negotiation labels and segment boundaries from section 4.2 to enforce ILP constraints.
    Page 7, “Background”
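The enhancement described in these entries, appending contextual features to the bag-of-words matrix, can be sketched as a column-wise stack. This assumes scikit-learn and SciPy; the two contextual features shown (a previous-label id and a question flag) are invented stand-ins for the paper's section 4.2 additions.

```python
# Sketch of the contextual feature-space enhancement: extra feature columns
# appended to a bag-of-words matrix. The two contextual features here are
# invented stand-ins for the paper's section 4.2 additions.
import numpy as np
from scipy.sparse import hstack
from sklearn.feature_extraction.text import CountVectorizer

contributions = ["go left around the lake", "okay", "which way now"]
contextual = np.array([[0, 0],             # hypothetical: [previous-label id, is-question]
                       [2, 0],
                       [1, 1]])

bow = CountVectorizer().fit_transform(contributions)
X = hstack([bow, contextual])              # enhanced feature space
print(X.shape)                             # (3, vocabulary size + 2)
```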

segmentation model

Appears in 6 sentences as: segmentation model (4) segmentation models (2)
In Recognizing Authority in Dialogue with an Integer Linear Programming Constrained Model
  1. We also build in parallel a segmentation model to select s_i from the set {new, same}.
    Page 5, “Background”
  2. We build two segmentation models, one trained on contributions of less than four tokens, and another trained on contributions of four or more tokens, to distinguish between characteristics of contentful and non-contentful contributions.
    Page 6, “Background”
  3. No segmentation model is used and no ILP constraints are enforced.
    Page 7, “Background”
  4. Baseline+ILP: This model uses the baseline feature space as input to both classification and segmentation models.
    Page 7, “Background”
  5. Contextual: This model uses our enhanced feature space from section 4.2, with no segmentation model and no ILP constraints enforced.
    Page 7, “Background”
  6. Our segmentation model was evaluated based on exact matches in boundaries.
    Page 7, “Background”
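The length-based split described in item 2 above can be sketched as routing each contribution to one of two classifiers. The four-token threshold is the paper's; the models, features, and training pairs below are invented stand-ins assuming scikit-learn.

```python
# Sketch of length-routed segmentation: one model for contributions under
# four tokens, another for the rest. Training pairs are invented toys.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

short_train = [("okay", "same"), ("right", "same"), ("no", "new")]
long_train = [("go left around the lake", "new"),
              ("then head toward the mill", "same"),
              ("start at the caravan park", "new")]

def fit(pairs):
    texts, boundaries = zip(*pairs)
    return make_pipeline(CountVectorizer(), LinearSVC()).fit(texts, boundaries)

short_model, long_model = fit(short_train), fit(long_train)

def predict_boundary(contribution: str) -> str:
    """Label a contribution 'new' (starts a segment) or 'same'."""
    model = short_model if len(contribution.split()) < 4 else long_model
    return model.predict([contribution])[0]

print(predict_boundary("okay"))            # routed to the short-contribution model
```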

Linear Programming

Appears in 5 sentences as: Linear Programming (5)
In Recognizing Authority in Dialogue with an Integer Linear Programming Constrained Model
  1. We also provide a computational model for automatically annotating text using this coding scheme, using supervised learning enhanced by constraints implemented with Integer Linear Programming.
    Page 1, “Abstract”
  2. These constraints are formulated as boolean statements describing what a correct label sequence looks like, and are imposed on our model using an Integer Linear Programming formulation (Roth and Yih, 2004).
    Page 2, “Introduction”
  3. In section 4.3 we formalize these constraints using Integer Linear Programming.
    Page 5, “Background”
  4. 4.3 Constraints using Integer Linear Programming
    Page 6, “Background”
  5. We formulate our constraints using Integer Linear Programming (ILP).
    Page 6, “Background”

human judgments

Appears in 3 sentences as: human judgements (1) human judgments (2)
In Recognizing Authority in Dialogue with an Integer Linear Programming Constrained Model
  1. We show that this constrained model’s analyses of speaker authority correlate very strongly with expert human judgments (r2 coefficient of 0.947).
    Page 1, “Abstract”
  2. In section 5, this model is evaluated on a subset of the MapTask corpus (Anderson et al., 1991) and shows a high correlation with human judgements of authoritativeness (r2 = 0.947).
    Page 2, “Introduction”
  3. In general, however, we now have an automated model that is reliable in reproducing human judgments of authoritativeness.
    Page 8, “Background”
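The evaluation these entries refer to can be reproduced in miniature: take per-speaker authoritativeness ratios from the model and from human coding, then compute the squared Pearson correlation. The ratios below are invented; only the method matches the paper, whose reported r2 of 0.947 comes from the MapTask data.

```python
# Sketch of correlating model-derived authoritativeness ratios with
# human-coded ones via squared Pearson correlation. Ratios are invented.
import numpy as np

human = np.array([0.62, 0.48, 0.71, 0.55, 0.80])       # hand-coded per-speaker ratios
predicted = np.array([0.60, 0.50, 0.68, 0.57, 0.78])   # model output, same speakers

r = np.corrcoef(human, predicted)[0, 1]                # Pearson r
r_squared = r ** 2
print(round(r_squared, 3))
```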
