Index of papers in Proc. ACL 2011 that mention
  • extraction system
Berg-Kirkpatrick, Taylor and Gillick, Dan and Klein, Dan
Data
To train the extractive system described in Section 2, we use as our labels y* the extractions with the largest bigram recall values relative to the sets of references.
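The oracle-label selection described above can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes bigram recall is the fraction of reference bigrams covered by a candidate extraction, with candidates and references given as token lists.

```python
def bigrams(tokens):
    # Set of consecutive token pairs.
    return set(zip(tokens, tokens[1:]))

def bigram_recall(candidate, references):
    # Fraction of reference bigrams that also appear in the candidate.
    ref_bigrams = set()
    for ref in references:
        ref_bigrams |= bigrams(ref)
    if not ref_bigrams:
        return 0.0
    return len(bigrams(candidate) & ref_bigrams) / len(ref_bigrams)

def oracle_label(candidates, references):
    # y*: the candidate extraction with the largest bigram recall.
    return max(candidates, key=lambda c: bigram_recall(c, references))
```

Under this reading, y* is simply an argmax over the candidate extractions, scored against the pooled bigrams of all references.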
Experiments
But, importantly, the gains achieved by the joint extractive and compressive system in content-based metrics do not come at the cost of linguistic quality when compared to purely extractive systems.
Experiments
The joint extractive and compressive system fits more word types into a summary than the extractive systems, but also produces longer sentences on average.
Experiments
Reading the output summaries more carefully suggests that by learning to extract and compress jointly, our joint system has the flexibility to use or create reasonable, medium-length sentences, whereas the extractive systems are stuck with a few valuable long sentences and several less productive shorter ones.
Introduction
For example, Zajic et al. (2006) use a pipeline approach, preprocessing to yield additional candidates for extraction by applying heuristic sentence compressions, but their system does not outperform state-of-the-art purely extractive systems.
Introduction
A second contribution of the current work is to show a system that jointly learns to compress and extract, and that exhibits gains in both ROUGE and content metrics over purely extractive systems.
Introduction
learns parameters for compression and extraction jointly using an approximate training procedure, but his results are not competitive with state-of-the-art extractive systems, and he does not report improvements on manual content or quality metrics.
Joint Model
Learning weights for Objective 1 where Y(x) is the set of extractive summaries gives our LEARNED EXTRACTIVE system.
extraction system is mentioned in 8 sentences in this paper.
Sun, Ang and Grishman, Ralph and Sekine, Satoshi
Abstract
We present a simple semi-supervised relation extraction system with large-scale word clustering.
Conclusion and Future Work
We have described a semi-supervised relation extraction system with large-scale word clustering.
Feature Based Relation Extraction
(2005), a state-of-the-art feature-based relation extraction system.
Introduction
For example, a relation extraction system needs to be able to extract an Employment relation between the entities "US soldier" and "US" in the phrase "US soldier".
Introduction
The performance of a supervised relation extraction system is usually degraded by the sparsity of lexical features.
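Word clustering, as used in this paper, is a standard remedy for lexical sparsity: rare words back off to coarser cluster identifiers shared with frequent words. The sketch below illustrates one common variant (Brown-style bit-string cluster paths with prefix features); the cluster map and prefix lengths are hypothetical, not the paper's actual clusters or feature set.

```python
# Hypothetical Brown-style clusters: word -> bit-string path in the cluster tree.
CLUSTERS = {"soldier": "1100", "officer": "1101", "employee": "1110"}

def lexical_features(word, prefix_lengths=(2, 4)):
    # Back off from the sparse word-identity feature to coarser
    # cluster-prefix features shared across similar words.
    feats = [f"word={word}"]
    path = CLUSTERS.get(word)
    if path:
        feats += [f"cluster{n}={path[:n]}" for n in prefix_lengths]
    return feats
```

Because "soldier" and "officer" share the prefix "110", a classifier that has seen one can generalize to the other through the shared cluster feature, even if the word-identity feature itself is rare.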
extraction system is mentioned in 5 sentences in this paper.
Krishnamurthy, Jayant and Mitchell, Tom
Background: Never-Ending Language Learner
NELL is an information extraction system that has been running 24x7 for over a year, using coupled semi-supervised learning to populate an ontology from unstructured text found on the web.
Background: Never-Ending Language Learner
As in other information extraction systems, the category and relation instances extracted by NELL contain polysemous and synonymous noun phrases.
Discussion
In order for information extraction systems to accurately represent knowledge, they must represent noun phrases, concepts, and the many-to-many mapping from noun phrases to concepts they denote.
Introduction
Many information extraction systems construct knowledge bases by extracting structured assertions from free text (e.g., NELL (Carlson et al., 2010), TextRunner (Banko et al., 2007)).
extraction system is mentioned in 4 sentences in this paper.