Index of papers in Proc. ACL 2014 that mention
  • discourse parsing
Feng, Vanessa Wei and Hirst, Graeme
Abstract
Text-level discourse parsing remains a challenge.
Introduction
Discourse parsing is the task of identifying the presence and the type of the discourse relations between discourse units.
Introduction
While research in discourse parsing can be partitioned into several directions according to different theories and frameworks, Rhetorical Structure Theory (RST) (Mann and Thompson, 1988) is probably the most ambitious one, because it aims to identify not only the discourse relations in a small local context, but also the hierarchical tree structure for the full text: from the relations relating the smallest discourse units (called elementary discourse units, EDUs), to the ones connecting paragraphs.
Introduction
Conventionally, there are two major subtasks related to text-level discourse parsing: (1) EDU segmentation: to segment the raw text into EDUs, and (2) tree-building: to build a discourse tree from EDUs, representing the discourse relations in the text.
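The two subtasks can be illustrated with a toy example. This is a hand-built sketch only: the segmentation boundaries and the relation labels below are assumptions for illustration, not the output of any system.

```python
# Illustrative only: hand-segmented EDUs and a hand-built tree showing
# the input and output of the two subtasks; no segmenter or parser runs.
text = "Although it was raining, we went out, which surprised everyone."

# Subtask 1 (EDU segmentation): raw text -> elementary discourse units.
edus = ["Although it was raining,", "we went out,", "which surprised everyone."]

# Subtask 2 (tree-building): EDUs -> a labeled discourse tree, encoded
# here as a nested tuple (relation, left_subtree, right_subtree).
tree = ("Elaboration", ("Concession", edus[0], edus[1]), edus[2])

# The segmentation covers the text exactly, with no gaps or overlaps.
assert " ".join(edus) == text
```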
Related work
2.1 HILDA discourse parser
Related work
The HILDA discourse parser by Hernault et al.
Related work
(2010) is the first attempt at RST-style text-level discourse parsing.
discourse parsing is mentioned in 21 sentences in this paper.
Ji, Yangfeng and Eisenstein, Jacob
Abstract
Text-level discourse parsing is notoriously difficult, as distinctions between discourse relations require subtle semantic judgments that are not easily captured using standard features.
Abstract
In this paper, we present a representation learning approach, in which we transform surface features into a latent space that facilitates RST discourse parsing.
Abstract
The resulting shift-reduce discourse parser obtains substantial improvements over the previous state-of-the-art in predicting relations and nuclearity on the RST Treebank.
Introduction
Unfortunately, the performance of discourse parsing is still relatively weak: the state-of-the-art F-measure for text-level relation detection in the RST Treebank is only slightly above 55% (Joty
Introduction
In this paper, we present a representation learning approach to discourse parsing.
Introduction
Our method is implemented as a shift-reduce discourse parser (Marcu, 1999; Sagae, 2009).
Model
The core idea of this paper is to project lexical features into a latent space that facilitates discourse parsing.
Model
Thus, we name the approach DPLP: Discourse Parsing from Linear Projection.
Model
We apply transition-based (incremental) structured prediction to obtain a discourse parse, training a predictor to make the correct incremental moves to match the annotations of training data in the RST Treebank.
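The shift-reduce mechanics behind this kind of parser can be sketched as follows. This is a minimal illustration assuming pre-segmented EDUs and a hand-supplied action sequence; in DPLP the actions are predicted incrementally by a trained classifier, and the EDUs and relation labels below are made up.

```python
# A minimal sketch of shift-reduce discourse tree building. The action
# sequence is supplied by hand here; a real parser predicts each move.

class Node:
    def __init__(self, label, children=None, text=None):
        self.label = label            # relation label, or "EDU" for leaves
        self.children = children or []
        self.text = text

def shift_reduce(edus, actions):
    """Apply SHIFT / REDUCE-<relation> actions to build a discourse tree."""
    queue = [Node("EDU", text=e) for e in edus]
    stack = []
    for act in actions:
        if act == "SHIFT":            # move the next EDU onto the stack
            stack.append(queue.pop(0))
        else:                         # e.g. "REDUCE-Cause": merge top two
            relation = act.split("-", 1)[1]
            right, left = stack.pop(), stack.pop()
            stack.append(Node(relation, [left, right]))
    assert len(stack) == 1 and not queue, "actions must consume all EDUs"
    return stack[0]

edus = ["The weather was bad,", "so we stayed home,", "which was dull."]
tree = shift_reduce(edus, ["SHIFT", "SHIFT", "REDUCE-Cause",
                           "SHIFT", "REDUCE-Elaboration"])
print(tree.label)                     # → Elaboration
```

Training then reduces to predicting, at each step, which of these moves matches the gold tree.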
discourse parsing is mentioned in 19 sentences in this paper.
Li, Sujian and Wang, Liang and Cao, Ziqiang and Li, Wenjie
Abstract
Previous research on text-level discourse parsing mainly made use of constituency structure to parse the whole document into one discourse tree.
Abstract
In this paper, we present the limitations of constituency based discourse parsing and first propose to use dependency structure to directly represent the relations between elementary discourse units (EDUs).
Abstract
Experiments show that our discourse dependency parsers achieve a competitive performance on text-level discourse parsing.
The third feature type (Position) is also very helpful to discourse parsing.
Discourse Dependency Parsing
Figure 5 shows the details of the Chu-Liu/Edmonds algorithm for discourse parsing .
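A compact recursive version of the Chu-Liu/Edmonds algorithm can be sketched as below. This is a generic maximum-spanning-arborescence implementation, not the paper's Figure 5: the arc scores are made up, and the code assumes no arcs point into the root.

```python
# Chu-Liu/Edmonds maximum spanning arborescence, the MST step used in
# graph-based dependency parsing. Illustrative scores, not learned ones.

def _find_cycle(head):
    """Return one cycle (list of nodes) in the head map, or None."""
    for start in head:
        path, v = [], start
        while v in head:
            if v in path:
                return path[path.index(v):]
            path.append(v)
            v = head[v]
    return None

def chu_liu_edmonds(arcs, root=0):
    """arcs: {(head, dep): score}, no arcs into root. Returns {dep: head}."""
    deps = {d for (_, d) in arcs if d != root}
    # Step 1: greedily pick the best incoming arc for each dependent.
    head = {d: max((h for (h, dd) in arcs if dd == d),
                   key=lambda h: arcs[(h, d)])
            for d in deps}
    cycle = _find_cycle(head)
    if cycle is None:
        return head
    # Step 2: contract the cycle into a fresh node c, rescoring arcs that
    # enter it, and remember each contracted arc's original endpoints.
    C, c = set(cycle), object()
    new_arcs, corr = {}, {}
    for (h, d), w in arcs.items():
        if h in C and d in C:
            continue
        if d in C:
            key, w = (h, c), w - arcs[(head[d], d)]
        elif h in C:
            key = (c, d)
        else:
            key = (h, d)
        if key not in new_arcs or w > new_arcs[key]:
            new_arcs[key] = w
            corr[key] = (h, d)
    # Step 3: solve the smaller problem, then expand the cycle again.
    sub = chu_liu_edmonds(new_arcs, root)
    result = {}
    for d, h in sub.items():
        oh, od = corr[(h, d)]
        result[od] = oh
    entered = corr[(sub[c], c)][1]    # cycle node that got an external head
    for d in C:
        if d != entered:
            result[d] = head[d]
    return result

arcs = {(0, 1): 5, (0, 2): 1, (0, 3): 1,
        (1, 2): 11, (2, 1): 10, (2, 3): 9, (1, 3): 8}
print(chu_liu_edmonds(arcs, root=0))  # heads: 1 -> 0, 2 -> 1, 3 -> 2
```

The greedy choices here form the cycle 1 ⇄ 2, which the algorithm contracts and then resolves by attaching node 1 to the root.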
Discourse Dependency Structure and Tree Bank
Section 3 presents the discourse parsing approach based on the Eisner and MST algorithms.
Introduction
Research in discourse parsing aims to acquire such relations in text, which is fundamental to many natural language processing applications such as question answering, automatic summarization and so on.
Introduction
One important issue behind discourse parsing is the representation of discourse structure.
Introduction
EDU segmentation is a relatively trivial step in discourse parsing.
discourse parsing is mentioned in 23 sentences in this paper.
Guzmán, Francisco and Joty, Shafiq and Màrquez, Lluís and Nakov, Preslav
Abstract
We first design two discourse-aware similarity measures, which use all-subtree kernels to compare discourse parse trees in accordance with the Rhetorical Structure Theory.
Conclusions and Future Work
First, we defined two simple discourse-aware similarity metrics (lexicalized and un-lexicalized), which use the all-subtree kernel to compute similarity between discourse parse trees in accordance with the Rhetorical Structure Theory.
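The all-subtree kernel underlying these metrics can be sketched as follows. This is the generic Collins & Duffy (2001) recursion applied to toy discourse trees; the nested-tuple encoding and the decay factor `lam` are illustrative choices, not the authors' exact lexicalized/un-lexicalized setup.

```python
# Collins & Duffy all-subtree kernel on toy discourse trees, encoded as
# nested tuples (label, *children); leaves are plain strings.

def production(t):
    """A node's 'production': its label plus the labels of its children."""
    return (t[0], tuple(c[0] if isinstance(c, tuple) else c for c in t[1:]))

def delta(t1, t2, lam=0.5):
    """Decayed count of common subtrees rooted at this pair of nodes."""
    if production(t1) != production(t2):
        return 0.0
    result = lam
    for c1, c2 in zip(t1[1:], t2[1:]):
        if isinstance(c1, tuple) and isinstance(c2, tuple):
            result *= 1.0 + delta(c1, c2, lam)
    return result

def subtree_kernel(t1, t2, lam=0.5):
    """Sum delta over all pairs of internal nodes of the two trees."""
    def nodes(t):
        yield t
        for c in t[1:]:
            if isinstance(c, tuple):
                yield from nodes(c)
    return sum(delta(n1, n2, lam) for n1 in nodes(t1) for n2 in nodes(t2))

t = ("Elaboration", ("EDU", "a"), ("Cause", ("EDU", "b"), ("EDU", "c")))
print(subtree_kernel(t, t))           # → 4.21875
```

In practice the raw kernel is usually normalized, e.g. K(t1, t2) / sqrt(K(t1, t1) * K(t2, t2)), so that identical trees score 1.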
Introduction
One possible reason could be the unavailability of accurate discourse parsers.
Introduction
We first design two discourse-aware similarity measures, which use DTs generated by a publicly-available discourse parser (Joty et al., 2012); then, we show that they can help improve a number of MT evaluation metrics at the segment- and at the system-level in the context of the WMT11 and the WMT12 metrics shared tasks (Callison-Burch et al., 2011; Callison-Burch et al., 2012).
Our Discourse-Based Measures
In order to develop a discourse-aware evaluation metric, we first generate discourse trees for the reference and the system-translated sentences using a discourse parser, and then we measure the similarity between the two discourse trees.
Our Discourse-Based Measures
In Rhetorical Structure Theory, discourse analysis involves two subtasks: (i) discourse segmentation, or breaking the text into a sequence of EDUs, and (ii) discourse parsing, or the task of linking the units (EDUs and larger discourse units) into labeled discourse trees.
Our Discourse-Based Measures
(2012) proposed discriminative models for both discourse segmentation and discourse parsing at the sentence level.
Related Work
Compared to the previous work, (i) we use a different discourse representation (RST), (ii) we compare discourse parses using all-subtree kernels (Collins and Duffy, 2001), (iii) we evaluate on much larger datasets, for several language pairs and for multiple metrics, and (iv) we do demonstrate better correlation with human judgments.
discourse parsing is mentioned in 10 sentences in this paper.
Jansen, Peter and Surdeanu, Mihai and Clark, Peter
CR + LS + DMM + DPM 39.32* +24% 47.86* +20%
This is a motivating result for discourse analysis, especially considering that the discourse parser was trained on a domain different from the corpora used here.
Experiments
Due to the speed limitations of the discourse parser, we randomly drew 10,000 QA pairs from the corpus of how questions described by Surdeanu et al.
Models and Features
4.2 Discourse Parser Model
Models and Features
The discourse parser model (DPM) is based on the RST discourse framework (Mann and Thompson, 1988).
Models and Features
However, this also introduces noise because discourse analysis is a complex task and discourse parsers are not perfect.
Related Work
In terms of discourse parsing , Verberne et al.
Related Work
Discourse Parser (deep)
Related Work
They later concluded that while discourse parsing appears to be useful for QA, automated discourse parsing tools are required before this approach can be tested at scale (Verberne et al., 2010).
discourse parsing is mentioned in 10 sentences in this paper.