Experiments | We cannot compare with the results of Feng and Hirst (2012), because they do not evaluate on the overall discourse structure, but rather treat each relation as an individual classification problem. |
Introduction | Discourse structure describes the high-level organization of text or speech. |
Introduction | Figure 1: An example of RST discourse structure. |
Model | Based on this observation, our goal is to learn a function that transforms lexical features into a much lower-dimensional latent representation, while simultaneously learning to predict discourse structure based on this latent representation. |
Related Work | Prior learning-based work has largely focused on lexical, syntactic, and structural features, but the close relationship between discourse structure and semantics (Forbes-Riley et al., 2006) suggests that shallow feature sets may struggle to capture the long tail of alternative lexicalizations that can be used to realize discourse relations (Prasad et al., 2010; Marcu and Echihabi, 2002). |
Related Work | In this work, we show how discourse structure annotations can function as a supervision signal to discriminatively learn a transformation from lexical features to a latent space that is well-suited for discourse parsing. |
Abstract | We present experiments in using discourse structure for improving machine translation evaluation. |
Conclusions and Future Work | In this paper we have shown that discourse structure can be used to improve automatic MT evaluation. |
Experimental Results | Overall, from the experimental results in this section, we can conclude that discourse structure is an important information source to be taken into account in the automatic evaluation of machine translation output. |
Related Work | Our experiments show that many existing metrics can benefit from additional knowledge about discourse structure . |
Abstract | We experimentally demonstrate that the discourse structure of non-factoid answers provides information that is complementary to lexical semantic similarity between question and answer, improving performance up to 24% (relative) over a state-of-the-art model that exploits lexical semantic similarity alone. |
Introduction | Driven by this observation, our main hypothesis is that the discourse structure of NF answers provides complementary information to state-of-the-art QA models that measure the similarity (lexical and/or semantic) between question and answer. |
Models and Features | Argument labels indicate only if lemmas from the question were found in a discourse structure present in an answer candidate, and do not speak to the specific lemmas that were found. |
Models and Features | Second, these features model the intensity of the match between the text surrounding the discourse structure and the question text using both the assigned argument labels and the feature values. |
Generating summary from nested tree | The nucleus is more salient to the discourse structure, while the other span, the satellite, represents supporting information. |
Introduction | It is important for generated summaries to have a discourse structure that is similar to that of the source document. |
Introduction | Rhetorical Structure Theory (RST) (Mann and Thompson, 1988) is one way of introducing the discourse structure of a document to a summarization task (Marcu, 1998; Daume III and Marcu, 2002; Hirao et al., 2013). |
Add arc <eC,ej> to GC with | Soricut and Marcu (2003) use a standard bottom-up chart parsing algorithm to determine the discourse structure of sentences. |
Introduction | One important issue behind discourse parsing is the representation of discourse structure. |
Introduction | Here is the basic idea: the discourse structure consists of EDUs that are linked by binary, asymmetrical relations called dependency relations. |
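The dependency representation described above can be sketched in code. This is a minimal illustration, not the authors' implementation: the class names (`EDU`, `DependencyRelation`), the example EDUs, and the relation label are all hypothetical, chosen only to show how binary, asymmetrical head-dependent links connect elementary discourse units.

```python
from dataclasses import dataclass

@dataclass
class EDU:
    """An Elementary Discourse Unit: a minimal span of text (hypothetical sketch)."""
    idx: int
    text: str

@dataclass
class DependencyRelation:
    """A binary, asymmetrical relation linking a dependent EDU to its head EDU."""
    head: int       # index of the head (more salient) EDU
    dependent: int  # index of the dependent EDU
    label: str      # relation label, e.g. "Cause" (illustrative only)

# A discourse dependency structure is just the EDUs plus the relations among them.
edus = [
    EDU(0, "The weather was bad,"),
    EDU(1, "so the flight was delayed."),
]
relations = [DependencyRelation(head=1, dependent=0, label="Cause")]

# The asymmetry: each relation distinguishes the head from the dependent.
for r in relations:
    print(f"{edus[r.dependent].text!r} --{r.label}--> {edus[r.head].text!r}")
```

Because each EDU depends on exactly one head (except the root), such structures form a tree, which is what makes the representation attractive for parsing.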