Document-level Parsing Approaches | A key finding from several previous studies on sentence-level discourse analysis is that most sentences have a well-formed discourse subtree in the full document-level DT (Joty et al., 2012; Fisher and Roark, 2007). |
Introduction | While recent advances in automatic discourse segmentation and sentence-level discourse parsing have attained accuracies close to human performance (Fisher and Roark, 2007; Joty et al., 2012), discourse parsing at the document level still poses significant challenges (Feng and Hirst, 2012), and the performance of the existing document-level parsers (Hernault et al., 2010; Subba and Di Eugenio, 2009) is still considerably inferior to the human gold standard. |
Introduction | Since most sentences have a well-formed discourse subtree in the full document-level DT (for example, the second sentence in Figure 1), our first approach constructs a DT for every sentence using our intra-sentential parser, and then runs the multi-sentential parser on the resulting sentence-level DTs. |
Introduction | Our second approach, in an attempt to deal with these cases, builds sentence-level subtrees by applying the intra-sentential parser on a sliding window covering two adjacent sentences and by then consolidating the results produced by overlapping windows. |
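The two document-level strategies above can be sketched as follows. This is a toy illustration, not the authors' actual parsers: `intra_sentential_parse`, `multi_sentential_parse`, and the consolidation rule are hypothetical stand-ins that only mirror the control flow (parse each sentence, or parse two-sentence windows, then merge into one document tree).

```python
def intra_sentential_parse(sentence):
    """Hypothetical sentence-level parser: wrap one sentence as a subtree."""
    return ("SENT", sentence)

def multi_sentential_parse(subtrees):
    """Hypothetical multi-sentential parser: merge subtrees left to right."""
    tree = subtrees[0]
    for sub in subtrees[1:]:
        tree = ("ELABORATION", tree, sub)  # placeholder relation label
    return tree

def parse_1s_1s(document):
    """Approach 1: one subtree per sentence, then the multi-sentential stage."""
    return multi_sentential_parse([intra_sentential_parse(s) for s in document])

def parse_sliding_window(document):
    """Approach 2: parse each window of two adjacent sentences, then
    consolidate; here each sentence simply keeps the subtree from its
    first window (a toy consolidation rule)."""
    windows = [document[i:i + 2] for i in range(len(document) - 1)] or [document]
    subtrees = {}
    for win in windows:
        for s in win:
            subtrees.setdefault(s, intra_sentential_parse(s))
    return multi_sentential_parse([subtrees[s] for s in document])
```

With a trivial consolidation rule both approaches coincide; the second approach matters precisely when a sentence's discourse units attach across a sentence boundary, which the windowed parse can capture.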
Our Discourse Parsing Framework | Since we already have an accurate sentence-level discourse parser (Joty et al., 2012), a straightforward approach to document-level parsing could be to simply apply this parser to the whole document. |
Our Discourse Parsing Framework | For example, syntactic features like dominance sets (Soricut and Marcu, 2003) are extremely useful for sentence-level parsing, but are not even applicable in the multi-sentential case. |
Parsing Models and Parsing Algorithm | Recently, we proposed a novel parsing model for sentence-level discourse parsing (Joty et al., 2012), which outperforms previous approaches by effectively modeling sequential dependencies along with structure and labels jointly. |
Parsing Models and Parsing Algorithm | The connections between adjacent nodes in a hidden layer encode sequential dependencies between the respective hidden nodes, and can enforce constraints such as the fact that an S_j = 1 must not follow an S_{j-1} = 1. The connections between the two hidden layers model the structure and the relation of a DT (sentence-level) constituent jointly. |
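The kind of sequential constraint mentioned above can be illustrated with a small decoder. This is not the authors' DCRF model; it is a hypothetical Viterbi-style sketch over a binary structure sequence in which the transition 1 → 1 is simply forbidden, given per-position local scores.

```python
def decode_with_constraint(scores):
    """Viterbi over labels {0, 1}; the transition 1 -> 1 is forbidden.
    scores[j][l] is the local score of assigning label l at position j."""
    NEG = float("-inf")
    n = len(scores)
    best = [[NEG, NEG] for _ in range(n)]
    back = [[0, 0] for _ in range(n)]
    best[0] = list(scores[0])
    for j in range(1, n):
        for l in (0, 1):
            for p in (0, 1):
                if l == 1 and p == 1:  # forbidden: S_j = 1 after S_{j-1} = 1
                    continue
                s = best[j - 1][p] + scores[j][l]
                if s > best[j][l]:
                    best[j][l], back[j][l] = s, p
    # backtrack from the best final label
    l = 0 if best[n - 1][0] >= best[n - 1][1] else 1
    seq = [l]
    for j in range(n - 1, 0, -1):
        l = back[j][l]
        seq.append(l)
    return seq[::-1]
```

Even when every position locally prefers label 1, the decoder returns an alternating sequence, because the hard constraint rules out adjacent 1s.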
Parsing Models and Parsing Algorithm | Figure 2: Our parsing model applied to the sequences at different levels of a sentence-level DT. |
Related work | The idea of staging document-level discourse parsing on top of sentence-level discourse parsing was investigated in (Marcu, 2000a; LeThanh et al., 2004). |
Abstract | Our approach advances state-of-the-art sentence-level event extraction, and even outperforms previous argument labeling methods which use external knowledge from other sentences and documents. |
Experiments | In addition to our baseline, we compare against the sentence-level system reported in Hong et al. (2011).
Experiments | Remarkably, compared to the cross-entity approach reported in (Hong et al., 2011), which attained 68.3% F1 for triggers and 48.3% for arguments, our approach with global features achieves even better performance on argument labeling, even though it uses only sentence-level information.
Experiments | We also show that it outperforms the sentence-level baseline reported in (Ji and Grishman, 2008; Liao and Grishman, 2010), both of which attained 59.7% F1 for triggers and 36.6% for arguments.
Introduction | Different from the traditional pipeline approach, we present a novel framework for sentence-level event extraction, which predicts triggers and their arguments jointly (Section 3).
Related Work | Ji and Grishman (2008): trigger F1 59.7, argument F1 36.6 (sentence-level)
Abstract | We suggest a generation task that integrates discourse-level referring expression generation and sentence-level surface realization. |
Conclusion | We have presented a data-driven approach for investigating generation architectures that address discourse-level reference and sentence-level syntax and word order. |
Experiments | BLEU, sentence-level geometric mean of 1- to 4-gram precision, as in (Belz et al., 2011) |
Experiments | NIST, sentence-level n- gram overlap weighted in favour of less frequent n- grams, as in (Belz et al., 2011) |
Experiments | BLEUT, sentence-level BLEU computed on post-processed output in which the predicted referring expressions for victim and perp are replaced (in both gold and predicted sentences) by their original role label; this score does not penalize lexical mismatches between corpus and system REs
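The sentence-level BLEU metric described in the experiments above can be sketched as a geometric mean of clipped 1- to 4-gram precisions with a brevity penalty. This is a simplified single-reference illustration with add-one smoothing, not the exact scorer used by Belz et al. (2011).

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def sentence_bleu(reference, hypothesis, max_n=4):
    """Sentence-level BLEU: geometric mean of clipped 1..max_n-gram
    precisions times a brevity penalty (single reference, smoothed)."""
    ref, hyp = reference.split(), hypothesis.split()
    precisions = []
    for n in range(1, max_n + 1):
        hyp_counts = Counter(ngrams(hyp, n))
        ref_counts = Counter(ngrams(ref, n))
        overlap = sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
        total = max(sum(hyp_counts.values()), 1)
        # add-one smoothing so a single zero count does not zero the score
        precisions.append((overlap + 1) / (total + 1))
    bp = min(1.0, math.exp(1 - len(ref) / max(len(hyp), 1)))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

The post-processed BLEUT variant would apply the same scorer after substituting role labels for the referring expressions in both gold and predicted sentences, so only structural rather than lexical REG choices are penalized.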
Introduction | Generating well-formed linguistic utterances from an abstract nonlinguistic input involves making a multitude of conceptual, discourse-level as well as sentence-level, lexical, and syntactic decisions.
Introduction | We integrate a discourse-level approach to REG with sentence-level surface realization in a data-driven framework. |
Abstract | Since Chinese is a paratactic language, sentence-level argument extraction in Chinese suffers greatly from the frequent ellipsis of inter-sentence arguments.
Experimentation | However, our model can be an effective complement to sentence-level English argument extraction systems, since the performance of argument extraction in English is still low and using discourse-level information is one way to improve it, especially for event mentions whose arguments are spread across complex sentences.
Related Work | In addition, very few of them focus on Chinese argument extraction, and almost all rely on feature engineering over sentence-level information, recasting this task as an SRL-style task.
Related Work | Liao and Grishman (2010) mainly focus on employing cross-event consistency information to improve sentence-level trigger extraction, and they also propose an inference method to infer arguments following role consistency within a document.