Index of papers in Proc. ACL 2011 that mention
  • sentence-level
Hoffmann, Raphael and Zhang, Congle and Ling, Xiao and Zettlemoyer, Luke and Weld, Daniel S.
Abstract
This paper presents a novel approach for multi-instance learning with overlapping relations that combines a sentence-level extraction model with a simple, corpus-level component for aggregating the individual facts.
Inference
It is thus sufficient to independently compute an assignment for each sentence-level extraction variable Z_i, ignoring the deterministic dependencies.
Introduction
• MULTIR also produces accurate sentence-level predictions, decoding individual sentences as well as making corpus-level extractions.
Learning
We now present a multi-instance learning algorithm for our weak-supervision model that treats the sentence-level extraction random variables Z_i as latent, and uses facts from a database (e.g., Freebase) as supervision for the aggregate-level variables Y^r.
Modeling Overlapping Relations
We define an undirected graphical model that allows joint reasoning about aggregate (corpus-level) and sentence-level extraction decisions.
Modeling Overlapping Relations
Z_i should be assigned a value r ∈ R only when x_i expresses the ground fact r(e), thereby modeling sentence-level extraction.
Modeling Overlapping Relations
(2009) sentence-level features in the experiments, as described in Section 7.
Weak Supervision from a Database
In contrast, sentence-level extraction must justify each extraction with every sentence which expresses the fact.
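The excerpts above describe linking sentence-level extraction variables Z_i to corpus-level fact variables Y^r through deterministic dependencies. A minimal sketch of that aggregation step, assuming (as the abstract suggests) that a corpus-level fact holds whenever at least one sentence-level variable asserts it; the function and variable names here are illustrative, not the authors' code:

```python
# Hedged sketch of the corpus-level aggregation described in the excerpts:
# a fact r is extracted at the corpus level iff some sentence-level
# prediction Z_i takes the value r (a deterministic OR).

def aggregate(sentence_predictions, relations):
    """Map per-sentence relation predictions to corpus-level fact decisions."""
    return {r: any(z == r for z in sentence_predictions) for r in relations}

# Example: four sentences about the same entity pair, two relations expressed.
preds = ["born_in", "none", "born_in", "employed_by"]
facts = aggregate(preds, {"born_in", "employed_by", "capital_of"})
# facts: born_in and employed_by are extracted; capital_of is not
```

This also illustrates the contrast drawn in the last excerpt: the corpus-level decision needs only one supporting sentence, whereas sentence-level extraction must justify each individual Z_i.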
sentence-level is mentioned in 17 sentences in this paper.
Chen, Harr and Benson, Edward and Naseem, Tahira and Barzilay, Regina
Experimental Setup
For these reasons, we evaluate on both sentence-level and token-level precision, recall, and F-score.
Experimental Setup
Note that sentence-level scores are always at least as high as token-level scores, since it is possible to select a sentence correctly but none of its true relation tokens, while the opposite is not possible.
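To make the relationship in the excerpt concrete, here is a small sketch (not the authors' evaluation code; the helper name and toy data are illustrative) showing a case where a sentence is selected correctly but the wrong token is chosen, so sentence-level scores exceed token-level scores:

```python
# Hedged sketch: sentence-level vs. token-level precision/recall/F-score.

def precision_recall_f1(pred, gold):
    """Precision, recall, and F1 over sets of predicted and gold items."""
    if not pred or not gold:
        return 0.0, 0.0, 0.0
    tp = len(pred & gold)
    p = tp / len(pred)
    r = tp / len(gold)
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

# Token-level items are (sentence_id, token_index) pairs;
# sentence-level items are just the sentence ids.
gold_tokens = {(0, 3), (0, 4), (2, 1)}
pred_tokens = {(0, 5), (2, 1)}  # sentence 0 found, but wrong token in it

tok = precision_recall_f1(pred_tokens, gold_tokens)
sent = precision_recall_f1({s for s, _ in pred_tokens},
                           {s for s, _ in gold_tokens})
# sent is (1.0, 1.0, 1.0) while tok is lower: selecting the right
# sentence with a wrong token still counts at the sentence level.
```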
Results
In light of our strong sentence-level performance, this suggests a possible human-assisted application: use our model to identify promising relation-bearing sentences in a new domain, then have a human annotate those sentences for use by a supervised approach to achieve optimal token-level extraction.
sentence-level is mentioned in 3 sentences in this paper.
Lo, Chi-kiu and Wu, Dekai
Abstract
Table 3: Sentence-level correlation with human adequacy judgments, across the evaluation metrics.
Abstract
Table 5: Sentence-level correlation with human adequacy judgments, for monolinguals vs. bilinguals.
Abstract
Table 8: Sentence-level correlation with human adequacy judgments.
sentence-level is mentioned in 3 sentences in this paper.
Lu, Bin and Tan, Chenhao and Cardie, Claire and K. Tsou, Benjamin
A Joint Model with Unlabeled Parallel Text
In this study, we focus on sentence-level sentiment classification, i.e.
Introduction
Not surprisingly, most methods for sentiment classification are supervised learning techniques, which require training data annotated with the appropriate sentiment labels (e.g., document-level or sentence-level positive vs. negative polarity).
Introduction
Although our approach should be applicable at the document-level and for additional sentiment tasks, we focus on sentence-level polarity classification in this work.
sentence-level is mentioned in 3 sentences in this paper.