Index of papers in Proc. ACL 2012 that mention
  • sentence-level
Feng, Vanessa Wei and Hirst, Graeme
Abstract
We also analyze the difficulty of extending traditional sentence-level discourse parsing to text-level parsing by comparing discourse-parsing performance under different discourse conditions.
Conclusions
We analyzed the difficulty of extending traditional sentence-level discourse parsing to text-level parsing by showing that using exactly the same set of features, the performance of Structure and Relation classification on cross-sentence instances is consistently inferior to that on within-sentence instances.
Introduction
difficulty with extending traditional sentence-level discourse parsing to text-level parsing, by comparing discourse parsing performance under different discourse conditions.
Text-level discourse parsing
Unlike syntactic parsing, where we are almost never interested in parsing above sentence level, sentence-level parsing is not sufficient for discourse parsing.
Text-level discourse parsing
While a sequence of local (sentence-level) grammaticality can be considered to be global grammaticality, a sequence of local discourse coherence does not necessarily form a globally coherent text.
Text-level discourse parsing
Text-level discourse parsing imposes more constraints on the global coherence than sentence-level discourse parsing.
sentence-level is mentioned in 8 sentences in this paper.
Topics mentioned in this paper:
Chan, Wen and Zhou, Xiangdong and Wang, Wei and Chua, Tat-Seng
Experimental Results
which just utilize the independent sentence-level features, do not perform very well here, and there is no statistically significant performance difference between them.
Experimental Results
To see how much the different textual and non-textual features contribute to community answer summarization, the accumulated weight of each group of sentence-level features is presented in Figure 2.
The Summarization Framework
In this section, we give a detailed description of the different sentence-level cQA features and the contextual modeling between sentences used in our model for answer summarization.
The Summarization Framework
Sentence-level Features
The Summarization Framework
These sentence-level features can be easily utilized in the CRF framework.
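The quote above notes that sentence-level features plug directly into a CRF framework. A minimal illustrative sketch of extracting per-sentence feature dicts for a linear-chain CRF toolkit; the feature names here are hypothetical, not the paper's actual feature set:

```python
def sentence_features(sentence, position, n_sentences):
    """Hypothetical sentence-level features for a linear-chain CRF.

    Each sentence in an answer thread becomes one feature dict,
    the input format used by CRF toolkits such as sklearn-crfsuite.
    The specific features below are illustrative only.
    """
    tokens = sentence.split()
    return {
        "length": len(tokens),                       # token count
        "rel_position": position / max(n_sentences - 1, 1),  # 0.0 .. 1.0
        "has_question_mark": sentence.endswith("?"),
        "starts_uppercase": sentence[:1].isupper(),
    }
```

Contextual modeling between sentences would then be captured by the CRF's transition weights rather than by these per-sentence features.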
sentence-level is mentioned in 6 sentences in this paper.
Topics mentioned in this paper:
Shindo, Hiroyuki and Miyao, Yusuke and Fujino, Akinori and Nagata, Masaaki
Conclusion
We proposed a novel backoff modeling of an SR-TSG based on the hierarchical Pitman-Yor Process and sentence-level and tree-level blocked MCMC sampling for training our model.
Inference
In each splitting step, we use two types of blocked MCMC algorithm: the sentence-level blocked Metropolis-Hastings (MH) sampler and the tree-level blocked Gibbs sampler, while Petrov et al. (2006) use a different MLE-based model and the EM algorithm.
Inference
Our sampler iterates sentence-level sampling and tree-level sampling alternately.
Inference
The sentence-level MH sampler is a recently proposed algorithm for grammar induction (Johnson et al., 2007b; Cohn et al., 2010).
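The alternating sampler described above relies on the standard Metropolis-Hastings accept/reject step. A generic sketch of that step with a symmetric proposal (not the SR-TSG-specific sampler from the paper):

```python
import math
import random

def mh_step(state, propose, log_target, rng=random):
    """One Metropolis-Hastings step with a symmetric proposal.

    `propose(state)` draws a candidate state; `log_target(s)` returns
    the unnormalized log probability of s. The candidate is accepted
    with probability min(1, p(candidate) / p(state)).
    """
    cand = propose(state)
    log_accept = log_target(cand) - log_target(state)
    if math.log(rng.random()) < log_accept:
        return cand
    return state
```

In a blocked sentence-level sampler, `propose` would resample an entire sentence's derivation at once, which mixes faster than resampling one node at a time.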
sentence-level is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
He, Xiaodong and Deng, Li
Abstract
In the updating formula, we need to compute the sentence-level BLEU(En, E5).
Abstract
a non-clipped BP, BP = e^(1−r/c), for sentence-level BLEU.
Abstract
sentence-level BLEU (Exp.
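The quotes above refer to computing sentence-level BLEU with a non-clipped brevity penalty. A minimal sketch of that computation; the add-one smoothing on n-gram precisions is an assumption for readability, not necessarily the paper's choice:

```python
import math
from collections import Counter

def sentence_bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU with a non-clipped brevity penalty.

    Uses BP = exp(1 - r/c) without clipping to 1 (so BP can exceed 1
    when the candidate is longer than the reference), and add-one
    smoothing so short sentences do not collapse to zero.
    `candidate` and `reference` are token lists.
    """
    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n])
                       for i in range(len(tokens) - n + 1))

    log_prec = 0.0
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        overlap = sum(min(c, ref[g]) for g, c in cand.items())
        total = max(sum(cand.values()), 1)
        # add-one smoothing keeps the log defined when overlap is 0
        log_prec += math.log((overlap + 1) / (total + 1)) / max_n

    c, r = len(candidate), len(reference)
    bp = math.exp(1 - r / c) if c > 0 else 0.0  # non-clipped BP
    return bp * math.exp(log_prec)
```

The non-clipped BP keeps the score differentiable in the sentence length ratio, which matters when BLEU appears inside a training objective.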
sentence-level is mentioned in 3 sentences in this paper.
Topics mentioned in this paper: