Index of papers in Proc. ACL 2011 that mention
  • in-domain
He, Yulan and Lin, Chenghua and Alani, Harith
Abstract
We study the polarity-bearing topics extracted by JST and show that by augmenting the original feature space with polarity-bearing topics, the in-domain supervised classifiers learned from augmented feature representation achieve the state-of-the-art performance of 95% on the movie review data and an average of 90% on the multi-domain sentiment dataset.
Introduction
We study the polarity-bearing topics extracted by the JST model and show that by augmenting the original feature space with polarity-bearing topics, the performance of in-domain supervised classifiers learned from augmented feature representation improves substantially, reaching the state-of-the-art results of 95% on the movie review data and an average of 90% on the multi-domain sentiment dataset.
Joint Sentiment-Topic (JST) Model
The adaptation loss is calculated with respect to the in-domain gold standard classification result.
Joint Sentiment-Topic (JST) Model
For example, the in-domain gold standard for the Book domain is 79.96%.
Joint Sentiment-Topic (JST) Model
Table 3: Adaptation loss with respect to the in-domain gold standard.
in-domain is mentioned in 8 sentences in this paper.
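The He et al. snippets describe a concrete mechanism: polarity-bearing topic assignments produced by the JST model are appended to a document's original bag-of-words features, and an in-domain supervised classifier is trained on the augmented representation; the adaptation-loss snippets then measure cross-domain classifiers against the in-domain gold standard (79.96% for the Book domain in the example above). Below is a minimal sketch of the augmentation step, assuming hypothetical per-document topic labels; the toy data and the scikit-learn pipeline are illustrative, not the authors' implementation.

```python
# Sketch of augmenting bag-of-words features with polarity-bearing
# topic labels. The topic ids are stand-ins: in the paper they come
# from the Joint Sentiment-Topic (JST) model, not from this code.
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

docs = ["a gripping and well acted film", "dull plot and wooden dialogue"]
labels = [1, 0]                              # 1 = positive, 0 = negative
topic_ids = np.array([3, 7])                 # hypothetical JST topic per document
n_topics = 10

bow = CountVectorizer().fit_transform(docs)  # original feature space
topics = csr_matrix((np.ones(len(docs)), (np.arange(len(docs)), topic_ids)),
                    shape=(len(docs), n_topics))
augmented = hstack([bow, topics])            # augmented feature representation

clf = LogisticRegression().fit(augmented, labels)  # in-domain supervised classifier
```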
Rüd, Stefan and Ciaramita, Massimiliano and Müller, Jens and Schütze, Hinrich
Abstract
We achieve strong gains in NER performance on news, in-domain and out-of-domain, and on web queries.
Conclusion
Even in-domain, we were able to get a smaller, but still noticeable improvement of 4.2% due to piggyback features.
Experimental data
In our experiments, we train an NER classifier on an in-domain data set and test it on two different out-of-domain data sets.
Experimental data
A reviewer points out that we use the terms in-domain and out-of-domain somewhat liberally.
Experimental setup
For our in-domain evaluation, we tune T on a 10% development sample of the CoNLL data and test on the remaining 10%.
Related work
Another source of world knowledge for NER is Wikipedia: Kazama and Torisawa (2007) show that pseudocategories extracted from Wikipedia help for in-domain NER.
Results and discussion
Even though the emphasis of this paper is on cross-domain robustness, we can see that our approach also has clear in-domain benefits.
Results and discussion
Although the improvement due to piggyback features increases as out-of-domain data become more different from the in-domain training set, performance declines in absolute terms from .930 (CoNLL) to .681 (IEER) and .438 (KDD-T).
in-domain is mentioned in 8 sentences in this paper.
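The recurring setup in these snippets is to train an NER classifier on an in-domain corpus (CoNLL news) and evaluate it both in-domain and on increasingly different out-of-domain sets (IEER, KDD web queries), which is where the .930 / .681 / .438 numbers come from. A minimal sketch of that protocol, with toy token features and three-token toy datasets standing in for the paper's piggyback features and real corpora:

```python
# Sketch of in-domain training plus in-/out-of-domain NER evaluation.
# Features and datasets are illustrative placeholders, not the
# paper's search-engine-based piggyback features.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import Perceptron

def featurize(tokens):
    return [{"word": w.lower(), "is_cap": w[0].isupper()} for w in tokens]

train_toks  = ["John", "visited", "Berlin"];  train_tags = ["PER", "O", "LOC"]
indom_toks  = ["Mary", "left", "Paris"];      indom_tags = ["PER", "O", "LOC"]
outdom_toks = ["paris", "hilton", "pics"];    outdom_tags = ["PER", "PER", "O"]

vec = DictVectorizer()
model = Perceptron().fit(vec.fit_transform(featurize(train_toks)), train_tags)

for name, toks, tags in [("in-domain", indom_toks, indom_tags),
                         ("out-of-domain", outdom_toks, outdom_tags)]:
    print(name, model.score(vec.transform(featurize(toks)), tags))
```

The lowercased out-of-domain toy set illustrates why performance drops: web-query tokens break the capitalization cues an in-domain news model relies on.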
Chen, Harr and Benson, Edward and Naseem, Tahira and Barzilay, Regina
Conclusions
This paper has presented a constraint-based approach to in-domain relation discovery.
Experimental Setup
Both methods ultimately aim to capture domain-specific relations expressed with varying verbalizations, and both operate over in-domain input corpora supplemented with syntactic information.
Introduction
In this paper, we introduce a novel approach for the unsupervised learning of relations and their instantiations from a set of in-domain documents.
Introduction
Clusters of similar in-domain documents are …
Model
Our work performs in-domain relation discovery by leveraging regularities in relation expression at the lexical, syntactic, and discourse levels.
in-domain is mentioned in 5 sentences in this paper.
Titov, Ivan
Empirical Evaluation
We compare them with two supervised methods: a supervised model (Base) which is trained on the source domain data only, and another supervised model (In-domain) which is learned on the labeled data from the target domain.
Empirical Evaluation
The Base model can be regarded as a natural baseline model, whereas the In-domain model is essentially an upper-bound for any domain-adaptation method.
Empirical Evaluation
First, observe that the total drop in the accuracy when moving to the target domain is 8.9%: from 84.6% demonstrated by the In-domain classifier to 75.6% shown by the non-adapted Base classifier.
Related Work
The drop in accuracy for the SCL method in Table 1 is computed with respect to the less accurate supervised in-domain classifier considered in Blitzer et al.
in-domain is mentioned in 4 sentences in this paper.
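The Titov snippets pin down the two reference points commonly used in domain-adaptation evaluation: the non-adapted Base classifier as the floor and the In-domain classifier as the ceiling. The toy computation below uses the accuracies quoted above; the adapted score is a made-up placeholder showing how an adaptation method is positioned between the two.

```python
# Toy illustration of Base vs. In-domain reference points.
in_domain = 0.846  # upper bound: supervised model trained on labeled target data
base      = 0.756  # baseline: supervised model trained on source data only
adapted   = 0.805  # hypothetical result of a domain-adaptation method

total_drop = in_domain - base              # ~0.090; the paper quotes 8.9%
adaptation_loss = in_domain - adapted      # loss w.r.t. the in-domain upper bound
gap_closed = (adapted - base) / total_drop # fraction of the drop recovered
print(f"drop={total_drop:.3f}  loss={adaptation_loss:.3f}  gap closed={gap_closed:.1%}")
```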