Index of papers in Proc. ACL that mention
  • domain adaptation
Nguyen, Thien Huu and Grishman, Ralph
Abstract
This has fostered the development of domain adaptation techniques for relation extraction.
Experiments
Moreover, in all the cases, regularization methods are still effective for domain adaptation of RE.
Experiments
6.3 Domain Adaptation with Word Embeddings
Introduction
This is where we need to resort to domain adaptation techniques (DA) to adapt a model trained on one domain (the
Introduction
Unfortunately, there is very little work on domain adaptation for RE.
Introduction
as word clusters in domain adaptation of RE (Plank and Moschitti, 2013) is motivated by its successes in semi-supervised methods (Chan and Roth, 2010; Sun et al., 2011) where word representations help to reduce data-sparseness of lexical information in the training data.
Regularization
Exploiting the shared interest in generalization performance with traditional machine learning, in domain adaptation for RE, we would prefer the relation extractor that fits the source domain data, but also circumvents the overfitting problem
Related Work
However, none of these works evaluate word embeddings for domain adaptation
Related Work
Regarding domain adaptation, in representation
Related Work
Above all, we move one step further by evaluating the effectiveness of word embeddings on domain adaptation for RE which is very different from the principal topic of sequence labeling in the previous research.
domain adaptation is mentioned in 17 sentences in this paper.
Plank, Barbara and Moschitti, Alessandro
Abstract
This is the problem of domain adaptation.
Abstract
The empirical evaluation on ACE 2005 domains shows that a suitable combination of syntax and lexical generalization is very promising for domain adaptation.
Introduction
This is the problem of domain adaptation (DA) or transfer learning (TL).
Introduction
Technically, domain adaptation addresses the problem of learning when the assumption of independent and identically distributed (i.i.d.)
Introduction
Domain adaptation has been studied extensively during the last couple of years for various NLP tasks, e.g.
domain adaptation is mentioned in 22 sentences in this paper.
Razmara, Majid and Foster, George and Sankaran, Baskaran and Sarkar, Anoop
Abstract
In this paper, we evaluate performance on a domain adaptation setting where we translate sentences from the medical domain.
Abstract
Our experimental results show that ensemble decoding outperforms various strong baselines including mixture models, the current state-of-the-art for domain adaptation in machine translation.
Conclusion & Future Work
In this paper, we presented a new approach for domain adaptation using ensemble decoding.
Ensemble Decoding
Each of these mixture operations has a specific property that makes it work in specific domain adaptation or system combination scenarios.
Ensemble Decoding
For instance, LOPs may not be optimal for domain adaptation in the setting where there are two or more models trained on heterogeneous corpora.
Introduction
Domain adaptation techniques aim at finding ways to adjust an out-of-domain (OUT) model to represent a target domain (in-domain or IN).
Introduction
We expect domain adaptation for machine translation can be improved further by combining orthogonal techniques for translation model adaptation combined with language model adaptation.
Introduction
The main applications of ensemble models are domain adaptation, domain mixing and system combination.
Related Work 5.1 Domain Adaptation
Early approaches to domain adaptation involved information retrieval techniques where sentence pairs related to the target domain were retrieved from the training corpus using IR methods (Eck et al., 2004; Hildebrand et al., 2005).
Related Work 5.1 Domain Adaptation
Other domain adaptation methods involve techniques that distinguish between general and domain-specific examples (Daumé and Marcu, 2006).
domain adaptation is mentioned in 11 sentences in this paper.
Nguyen, Minh Luan and Tsang, Ivor W. and Chai, Kian Ming A. and Chieu, Hai Leong
Experiments
We compare our framework with several other methods, including state-of-the-art machine learning, relation extraction and common domain adaptation methods.
Experiments
Adaptive domain bootstrapping (DAB) This is an instance-based domain adaptation method for relation extraction (Xu et al., 2010).
Experiments
Structural correspondence learning (SCL) We use the feature-based domain adaptation approach by Blitzer et al.
Introduction
To tackle these challenges, we propose a two-phase Robust Domain Adaptation (RDA) framework.
Introduction
We compare the proposed two-phase framework with state-of-the-art domain adaptation baselines for the relation extraction task, and we find that our method outperforms the baselines.
Problem Statement
This section defines the domain adaptation problem and describes our feature extraction scheme.
Problem Statement
3.1 Relation Extraction Domain Adaptation
Problem Statement
We define domain adaptation as the problem of learning a classifier p for relation extraction in the target domain using the data sets Dl, Du and Ds, s = 1, . . .
Related Work
We address this by augmenting a small labeled data set with other information in the domain adaptation setting.
Related Work
Domain adaptation methods can be classified broadly into weakly-supervised adaptation (Daume and Marcu, 2007; Blitzer et al., 2006; Jiang and Zhai, 2007a; Jiang, 2009), and unsupervised adaptation (Pan et al., 2010; Blitzer et al., 2006; Plank and Moschitti, 2013).
Robust Domain Adaptation
, fc using the one-versus-rest decoding for multi-class classification. Inspired by the Domain Adaptive Machine (Duan et al., 2009), we combine the reference predictions and the labeled data of the target domain to learn these functions:
domain adaptation is mentioned in 17 sentences in this paper.
He, Yulan and Lin, Chenghua and Alani, Harith
Introduction
The fact that the JST model does not require any labeled documents for training makes it desirable for domain adaptation in sentiment classification.
Introduction
We proceed with a review of related work on sentiment domain adaptation.
Introduction
We subsequently show that words from different domains can indeed be grouped under the same polarity-bearing topic through an illustration of example topic words extracted by JST before proposing a domain adaptation approach based on JST. We verify our proposed approach by conducting experiments on both the movie review data
Joint Sentiment-Topic (JST) Model
5 Domain Adaptation using JST
Joint Sentiment-Topic (JST) Model
Given input data x and a class label y, labeled patterns of one domain can be drawn from the joint distribution P(x, y). Domain adaptation usually assumes that the data distributions are different in the source and target domains, i.e., Ps(x) ≠ Pt(x).
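The assumption excerpted above is the standard covariate-shift formulation; written compactly in our own notation (not the paper's):

```latex
P_s(y \mid x) = P_t(y \mid x), \qquad P_s(x) \neq P_t(x)
```

That is, the labeling rule is shared across domains while the input distributions differ, which is what makes unlabeled target-domain data useful for adaptation.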
Joint Sentiment-Topic (JST) Model
The task of domain adaptation is to predict the label corresponding to in the target domain.
Related Work
There has been significant amount of work on algorithms for domain adaptation in NLP.
Related Work
for domain adaptation where a mixture model is defined to learn differences between domains.
Related Work
proposed structural correspondence learning (SCL) for domain adaptation in sentiment classification.
domain adaptation is mentioned in 15 sentences in this paper.
Hu, Yuening and Zhai, Ke and Eidelman, Vladimir and Boyd-Graber, Jordan
Discussion
We also discuss other approaches to improve unsupervised domain adaptation for SMT.
Discussion
To our knowledge, however, this is the first work to use multilingual topic models for domain adaptation in machine translation.
Discussion
Domain adaptation for language models (Bellegarda, 2004; Wood and Teh, 2009) is an important avenue for improving machine translation.
Experiments
Domain Adaptation using Topic Models. We examine the effectiveness of using topic models for domain adaptation on standard SMT evaluation metrics—BLEU (Papineni et al., 2002) and TER (Snover et al., 2006).
Experiments
We refer to the SMT model without domain adaptation as the baseline. LDA marginally improves machine translation (less than half a BLEU point).
Introduction
Systems that are robust to systematic variation in the training set are said to exhibit domain adaptation .
Introduction
We show that ptLDA offers better domain adaptation than other topic models for machine translation.
Topic Models for Machine Translation
Before considering past approaches using topic models to improve SMT, we briefly review lexical weighting and domain adaptation for SMT.
Topic Models for Machine Translation
Domain Adaptation for SMT. Training an SMT system using diverse data requires domain adaptation.
Topic Models for Machine Translation
This obviates the explicit smoothing used in other domain adaptation systems (Chiang et al., 2011).
domain adaptation is mentioned in 10 sentences in this paper.
Sennrich, Rico and Schwenk, Holger and Aransa, Walid
Abstract
While domain adaptation techniques for SMT have proven to be effective at improving translation quality, their practicality for a multi-domain environment is often limited because of the computational and human costs of developing and maintaining multiple systems adapted to different domains.
Introduction
The effectiveness of domain adaptation approaches such as mixture-modeling (Foster and Kuhn, 2007) has been established, and has led to research on a wide array of adaptation techniques in SMT, for instance (Matsoukas et al., 2009; Shah et al., 2012).
Introduction
Therefore, when working with multiple and/or unlabelled domains, domain adaptation is often impractical for a number of reasons.
Introduction
Secondly, domain adaptation bears a risk of performance loss.
Translation Model Architecture
Our immediate purpose for this paper is domain adaptation in a multi-domain environment, but the delay of the feature computation has other potential applications, e.g.
Translation Model Architecture
The goal is to perform domain adaptation without requiring domain labels or user input, neither for development nor decoding.
Translation Model Architecture
Our theoretical expectation is that domain adaptation will fail to perform well if the test data is from
domain adaptation is mentioned in 9 sentences in this paper.
Kauchak, David
Conclusions and Future Work
Many other domain adaptation techniques exist and may produce language models with better performance.
Language Model Evaluation: Perplexity
This approach has the benefit of simplicity; however, better performance for combining related corpora has been seen by domain adaptation techniques which combine the data in more structured ways (Bacchiani and Roark, 2003).
Language Model Evaluation: Perplexity
Our goal for this paper is not to explore domain adaptation techniques, but to determine if normal data is useful for the simple language modeling task.
Language Model Evaluation: Perplexity
if domain adaptation techniques may be useful, we also investigated a linearly interpolated language model.
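The linearly interpolated language model mentioned in this excerpt can be sketched in a few lines (a unigram toy; the function name, dictionaries and λ value below are ours, not Kauchak's):

```python
def interp_prob(word, lm_in, lm_out, lam=0.7):
    # Linear interpolation of two language models:
    #   P(w) = lam * P_in(w) + (1 - lam) * P_out(w)
    # lm_in / lm_out map words to probabilities; lam would be tuned
    # on held-out in-domain (here, "simple") data.
    return lam * lm_in.get(word, 0.0) + (1.0 - lam) * lm_out.get(word, 0.0)

simple_lm = {"big": 0.4, "large": 0.1}   # in-domain (simple-text) model
normal_lm = {"big": 0.1, "large": 0.4}   # out-of-domain (normal-text) model
print(interp_prob("big", simple_lm, normal_lm))  # ≈ 0.31
```

The mixture weight controls how much the out-of-domain data is allowed to influence the in-domain estimates; at lam=1.0 the normal data is ignored entirely.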
Related Work
If we view the normal data as out-of-domain data, then the problem of combining simple and normal data is similar to the language model domain adaptation problem (Suzuki and Gao, 2005), in particular cross-domain adaptation (Bellegarda, 2004) where a domain-specific model is improved by incorporating additional general data.
Related Work
language model domain adaptation problem for text simplification.
Related Work
Pan and Yang (2010) provide a survey on the related problem of domain adaptation for machine learning (also referred to as “transfer learning”), which utilizes similar techniques.
domain adaptation is mentioned in 9 sentences in this paper.
Monroe, Will and Green, Spence and Manning, Christopher D.
Abstract
We extend an existing MSA segmenter with a simple domain adaptation technique and new features in order to segment informal and dialectal Arabic text.
Arabic Word Segmentation Model
2.2 Domain adaptation
Arabic Word Segmentation Model
The approach to domain adaptation we use is that of feature space augmentation (Daumé, 2007).
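Feature space augmentation (Daumé, 2007) is simple enough to sketch: in the two-domain version, each feature vector is copied into a shared block plus one domain-specific block, so a single linear model can learn both shared and domain-specific weights (the function and variable names below are ours):

```python
def augment(x, domain):
    # Daumé (2007) "frustratingly easy" feature-space augmentation,
    # two-domain sketch; the original generalizes to K domains.
    # Layout of the augmented vector: [shared | source-only | target-only]
    zeros = [0.0] * len(x)
    if domain == "source":
        return x + x + zeros
    elif domain == "target":
        return x + zeros + x
    raise ValueError(domain)

print(augment([1.0, 2.0], "source"))  # [1.0, 2.0, 1.0, 2.0, 0.0, 0.0]
print(augment([1.0, 2.0], "target"))  # [1.0, 2.0, 0.0, 0.0, 1.0, 2.0]
```

Any off-the-shelf classifier can then be trained on the concatenation of augmented source and target examples with no further changes.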
Error Analysis
We sampled 100 errors randomly from all errors made by our final model (trained on all three datasets with domain adaptation and additional features) on the ARZ development set; see Table 4.
Error Analysis
In this paper we demonstrate substantial gains on Arabic clitic segmentation for both formal and dialectal text using a single model with dialect-independent features and a simple domain adaptation strategy.
Error Analysis
However, as data for other Arabic dialects and genres becomes available, we expect that the model’s simplicity and the domain adaptation method we use will allow the system to be applied to these dialects with minimal effort and without a loss of performance in the original domains.
Experiments
Using domain adaptation alone helps performance on two of the three datasets (with a statistically insignificant decrease on broadcast news), and that our additional features further improve
Introduction
Third, we show that dialectal data can be handled in the framework of domain adaptation.
domain adaptation is mentioned in 8 sentences in this paper.
Titov, Ivan
Abstract
We consider a semi-supervised setting for domain adaptation where only unlabeled data is available for the target domain.
Constraints on Inter-Domain Variability
As we discussed in the introduction, our goal is to provide a method for domain adaptation based on semi-supervised learning of models with distributed representations.
Constraints on Inter-Domain Variability
In this section, we first discuss the shortcomings of domain adaptation with the above-described semi-supervised approach and motivate constraints on inter-domain variability of
Constraints on Inter-Domain Variability
Another motivation for the form of regularization we propose originates from theoretical analysis of the domain adaptation problems (Ben-David et al., 2010; Mansour et al., 2009; Blitzer et al., 2007).
Introduction
One of the most promising research directions on domain adaptation for this setting is based on the idea of inducing a shared feature representation (Blitzer et al., 2006), that is mapping from the initial feature representation to a new representation such that (1) examples from both domains ‘look similar’ and (2) an accurate classifier can be trained in this new representation.
Related Work
There is a growing body of work on domain adaptation.
Related Work
Such methods tackle domain adaptation by instance re-weighting (Bickel et al., 2007; Jiang and Zhai, 2007), or, similarly, by feature re-weighting (Satpal and Sarawagi, 2007).
Related Work
Semi-supervised learning with distributed representations and its application to domain adaptation has previously been considered in (Huang and Yates, 2009), but no attempt has been made to address problems specific to the domain-adaptation setting.
domain adaptation is mentioned in 8 sentences in this paper.
Li, Fangtao and Pan, Sinno Jialin and Jin, Ou and Yang, Qiang and Zhu, Xiaoyan
Abstract
In this paper, we propose a domain adaptation framework for sentiment- and topic-lexicon co-extraction in a domain of interest where we do not require any labeled data, but have lots of labeled data in another related domain.
Abstract
Experimental results show that our domain adaptation framework can extract precise lexicons in the target domain without any annotation.
Introduction
To address this problem, we propose a two-stage domain adaptation method.
Introduction
While most of the previous work focused on the document level; 2) A new two-step domain adaptation framework, with a novel RAP algorithm for seed expansion, is proposed.
Introduction
2.2 Domain Adaptation
domain adaptation is mentioned in 8 sentences in this paper.
Zhu, Conghui and Watanabe, Taro and Sumita, Eiichiro and Zhao, Tiejun
Experiment
Even the performance of the pialign-linear is better than the Baseline GIZA-linear’s, which means that phrase pair extraction with hierarchical phrasal ITGs and sampling is more suitable for domain adaptation tasks than the combination of GIZA++ and a heuristic method.
Hierarchical Phrase Table Combination
In traditional domain adaptation approaches, phrase pairs are extracted together with their probabilities and/or frequencies so that the extracted phrase pairs are merged uniformly or after scaling.
Introduction
Traditional domain adaptation methods for SMT are also not adequate in this scenario.
Related Work
A number of approaches have been proposed to make use of the full potential of the available parallel sentences from various domains, such as domain adaptation and incremental learning for SMT.
Related Work
In the case of the previous work on translation modeling, mixed methods have been investigated for domain adaptation in SMT by adding domain information as additional labels to the original phrase table (Foster and Kuhn, 2007).
Related Work
As a way to choose the right domain for domain adaptation, a classifier-based method and a feature-based method have been proposed.
domain adaptation is mentioned in 8 sentences in this paper.
Miyao, Yusuke and Saetre, Rune and Sagae, Kenji and Matsuzaki, Takuya and Tsujii, Jun'ichi
Conclusion and Future Work
We also found that accuracy improvements vary when parsers are retrained with domain-specific data, indicating the importance of domain adaptation and the differences in the portability of parser training methods.
Conclusion and Future Work
2006), the C&C parser (Clark and Curran, 2004), the XLE parser (Kaplan et al., 2004), MINIPAR (Lin, 1998), and Link Parser (Sleator and Temperley, 1993; Pyysalo et al., 2006), but the domain adaptation of these parsers is not straightforward.
Evaluation Methodology
We also measure the accuracy improvements obtained by parser retraining with GENIA, to examine the domain portability, and to evaluate the effectiveness of domain adaptation.
Evaluation Methodology
Accuracy improvements in this setting indicate the possibility of domain adaptation , and the portability of the training methods of the parsers.
Experiments
When the parsers are retrained with GENIA (Table 2), the accuracy increases significantly, demonstrating that the WSJ-trained parsers are not sufficiently domain-independent, and that domain adaptation is effective.
Experiments
It is an important observation that the improvements by domain adaptation are larger than the differences among the parsers in the previous experiment.
Experiments
A large improvement from ENJU to ENJU-GENIA shows the effectiveness of the specifically designed domain adaptation method, suggesting that the other parsers might also benefit from more sophisticated approaches for domain adaptation .
Syntactic Parsers and Their Representations
In general, our evaluation methodology can be applied to English parsers based on any framework; however, in this paper, we chose parsers that were originally developed and trained with the Penn Treebank or its variants, since such parsers can be retrained with GENIA, thus allowing for us to investigate the effect of domain adaptation .
domain adaptation is mentioned in 8 sentences in this paper.
Zhang, Yi and Wang, Rui
Dependency Parsing with HPSG
Since we focus on the domain adaptation issue, we incorporate a less domain dependent language resource (i.e.
Experiment Results & Error Analyses
The same dataset has been used for the domain adaptation track of the CoNLL 2007 Shared Task.
Experiment Results & Error Analyses
This is the other datasets used in the domain adaptation track of the CoNLL 2007 Shared Task.
Experiment Results & Error Analyses
It should be noted that domain adaptation also presents a challenge to the disambiguation model of the HPSG parser.
Parser Domain Adaptation
In addition, most of the previous work have been focusing on constituent-based parsing, while the domain adaptation of the dependency parsing has not been fully explored.
Parser Domain Adaptation
This is not to say that domain adaptation is
domain adaptation is mentioned in 7 sentences in this paper.
Prettenhofer, Peter and Stein, Benno
Abstract
We present a new approach to cross-language text classification that builds on structural correspondence learning, a recently proposed theory for domain adaptation .
Introduction
Our approach builds upon structural correspondence learning, SCL, a recently proposed theory for domain adaptation in the field of natural language processing (Blitzer et al., 2006).
Related Work
Domain Adaptation. Domain adaptation refers to the problem of adapting a statistical classifier trained on data from one (or more) source domains (e.g., newswire texts) to a different target domain (e.g., legal texts).
Related Work
In the basic domain adaptation setting we are given labeled data from the source domain and unlabeled data from the target domain, and the goal is to train a classifier for the target domain.
Related Work
The latter setting is referred to as unsupervised domain adaptation.
domain adaptation is mentioned in 7 sentences in this paper.
Zhang, Jiajun and Zong, Chengqing
Abstract
We apply our method for the domain adaptation task and the extensive experiments show that our proposed method can substantially improve the translation quality.
Conclusion and Future Work
Extensive experiments on domain adaptation have shown that our method can significantly outperform previous methods which also focus on exploring the in-domain lexicon and monolingual data.
Experiments
Our purpose is to induce phrase pairs to improve translation quality for domain adaptation .
Introduction
Finally, they used the learned translation model directly to translate unseen data (Ravi and Knight, 2011; Nuhn et al., 2012) or incorporated the learned bilingual lexicon as a new in-domain translation resource into the phrase-based model which is trained with out-of-domain data to improve the domain adaptation performance in machine translation (Dou and Knight, 2012).
Introduction
The induced phrase-based model will be used to help domain adaptation for machine translation.
Introduction
Section 6 will show the detailed experiments for the task of domain adaptation.
Probabilistic Bilingual Lexicon Acquisition
In order to induce the phrase pairs from the in-domain monolingual data for domain adaptation, the probabilistic bilingual lexicon is essential.
domain adaptation is mentioned in 7 sentences in this paper.
Jiang, Jing
Introduction
Inspired by recent work on transfer learning and domain adaptation , in this paper, we study how we can leverage labeled data of some old relation types to help the extraction of a new relation type in a weakly-supervised setting, where only a few seed instances of the new relation type are available.
Related work
Domain adaptation is a special case of transfer learning where the learning task remains the same but the distribution
Related work
There has been an increasing amount of work on transfer learning and domain adaptation in natural language processing recently.
Related work
(2006) proposed a structural correspondence learning method for domain adaptation and applied it to part-of-speech tagging.
domain adaptation is mentioned in 6 sentences in this paper.
Huang, Fei and Yates, Alexander
Conclusion and Future Work
One particularly promising area for further study is the combination of smoothing and instance weighting techniques for domain adaptation .
Experiments
3.3 Domain Adaptation
Experiments
For our experiment on domain adaptation , we focus on NP chunking and POS tagging, and we use the labeled training data from the CoNLL 2000 shared task as before.
Experiments
(2006): the semi-supervised Alternating Structural Optimization (ASO) technique and the Structural Correspondence Learning (SCL) technique for domain adaptation .
Related Work
One of the benefits of our smoothing technique is that it allows for domain adaptation , a topic that has received a great deal of attention from the NLP community recently.
Related Work
HMM-smoothing improves on the most closely related work, the Structural Correspondence Learning technique for domain adaptation (Blitzer et al., 2006), in experiments.
domain adaptation is mentioned in 6 sentences in this paper.
Arnold, Andrew and Nallapati, Ramesh and Cohen, William W.
Abstract
In the subproblem of domain adaptation, a model trained over a source domain is generalized to perform well on a related target domain, where the two domains’ data are distributed similarly, but not identically.
Abstract
We introduce the concept of groups of closely-related domains, called genres, and show how inter-genre adaptation is related to domain adaptation .
Introduction
When only the type of data being examined is allowed to vary (from news articles to e-mails, for example), the problem is called domain adaptation (Daumé III and Marcu, 2006).
Introduction
• domain adaptation, where we assume Y (the set of possible labels) is the same for both Dsource and Dtarget, while Dsource and Dtarget themselves are allowed to vary between domains.
Introduction
Domain adaptation can be further distinguished by the degree of relatedness between the source and target domains.
domain adaptation is mentioned in 5 sentences in this paper.
Chen, Boxing and Kuhn, Roland and Foster, George
Abstract
This paper proposes a new approach to domain adaptation in statistical machine translation (SMT) based on a vector space model (VSM).
Introduction
Domain adaptation is an active topic in the natural language processing (NLP) research community.
Introduction
The 2012 JHU workshop on Domain Adaptation for MT proposed phrase sense disambiguation (PSD) for translation model adaptation.
Introduction
In this paper, we propose a new instance weighting approach to domain adaptation based on a vector space model (VSM).
domain adaptation is mentioned in 5 sentences in this paper.
Jiang, Wenbin and Huang, Liang and Liu, Qun
Automatic Annotation Adaptation
ald, 2008), and is also similar to the Pred baseline for domain adaptation in (Daumé III and Marcu, 2006; Daumé III, 2007).
Conclusion and Future Works
We are especially grateful to Fernando Pereira and the anonymous reviewers for pointing us to relevant domain adaptation references.
Introduction
The second problem, domain adaptation, is very well-studied, e.g.
Introduction
This method is very similar to some ideas in domain adaptation (Daume III and Marcu, 2006; Daume III, 2007), but we argue that the underlying problems are quite different.
Introduction
Domain adaptation assumes the labeling guidelines are preserved between the two domains, e.g., an adjective is always labeled as JJ regardless of whether it comes from Wall Street Journal (WSJ) or Biomedical texts, and only the distributions are different, e.g., the word “control” is most likely a verb in WSJ but often a noun in Biomedical texts (as in “control experiment”).
domain adaptation is mentioned in 5 sentences in this paper.
Jiang, Wenbin and Sun, Meng and Lü, Yajuan and Yang, Yating and Liu, Qun
Conclusion and Future Work
Our strategy, therefore, enables us to build a classifier that is more domain-adaptive and up to date.
Experiments
The classifier performs much worse on the domains of chemistry, physics and machinery, which indicates the importance of domain adaptation for word segmentation (Gao et al., 2004; Ma and Way, 2009; Gao et al., 2010).
Experiments
What is more, since the text on the Internet is wide-coverage and updated in real time, our strategy also helps a word segmenter be more domain-adaptive and up to date.
Learning with Natural Annotations
It probably provides a simple and effective domain adaptation strategy for already trained models.
domain adaptation is mentioned in 4 sentences in this paper.
Zhang, Congle and Baldwin, Tyler and Ho, Howard and Kimelfeld, Benny and Li, Yunyao
Abstract
Additionally, we design a customizable framework to address the often overlooked concept of domain adaptability, and illustrate that the system allows for transfer to new domains with a minimal amount of data and effort.
Conclusions
This work presents a framework for normalization with an eye towards domain adaptation .
Evaluation
The goal is to evaluate the framework in two aspects: (1) usefulness for downstream applications (specifically dependency parsing), and (2) domain adaptability.
Related Work
Similarly, our work is the first to prioritize domain adaptation during the new wave of text message normalization.
domain adaptation is mentioned in 4 sentences in this paper.
Celikyilmaz, Asli and Hakkani-Tur, Dilek and Tur, Gokhan and Sarikaya, Ruhi
Experiments
This causes data-mismatch issues and hence provides a perfect testbed for a domain adaptation task.
Experiments
To evaluate the domain adaptation (DA) approach and to compare with results reported by (Subramanya et al., 2010), we use the first and second half of QuestionBank (Judge et al., 2006) as our development and test sets (target).
Experiments
5.3.2 Experiment 2: Domain Adaptation Task.
domain adaptation is mentioned in 4 sentences in this paper.
Bollegala, Danushka and Weir, David and Carroll, John
Conclusions
In future, we intend to apply the proposed method to other domain adaptation tasks.
Experiments
This can be considered to be a lower bound that does not perform domain adaptation .
Related Work
Compared to single-domain sentiment classification, which has been studied extensively in previous work (Pang and Lee, 2008; Turney, 2002), cross-domain sentiment classification has only recently received attention in response to advances in the area of domain adaptation.
Related Work
Aue and Gamon (2005) report a number of empirical tests into domain adaptation of sentiment classifiers using an ensemble of classifiers.
domain adaptation is mentioned in 4 sentences in this paper.
Niu, Zheng-Yu and Wang, Haifeng and Wu, Hua
Introduction
Recently there have been some works on using multiple treebanks for domain adaptation of parsers, where these treebanks have the same grammar formalism (McClosky et al., 2006b; Roark and Bacchiani, 2003).
Related Work
Recently there have been some studies addressing how to use treebanks with same grammar formalism for domain adaptation of parsers.
Related Work
Roark and Bachiani (2003) presented count merging and model interpolation techniques for domain adaptation of parsers.
Related Work
Their results indicated that both unlabeled in-domain data and labeled out-of-domain data can help domain adaptation .
domain adaptation is mentioned in 4 sentences in this paper.
Andreevskaia, Alina and Bergler, Sabine
Conclusion
This study contributes to the research on sentiment tagging, domain adaptation, and the development of ensembles of classifiers (1) by proposing a novel approach for sentiment determination at sentence level and delineating the conditions under which greatest synergies among combined classifiers can be achieved, (2) by describing a precision-based technique for assigning differential weights to classifier results on different categories identified by the classifier (i.e., categories of positive vs. negative sentences), and (3) by proposing a new method for sentiment annotation in situations where the annotated in-domain data is scarce and insufficient to ensure adequate performance of the corpus-based classifier, which still remains the preferred choice when large volumes of annotated data are available for system training.
Domain Adaptation in Sentiment Research
(2007) applied structural correspondence learning (Dredze et al., 2007) to the task of domain adaptation for sentiment classification of product reviews.
Integrating the Corpus-based and Dictionary-based Approaches
In sentiment tagging and related areas, Aue and Gamon (2005) demonstrated that combining classifiers can be a valuable tool in domain adaptation for sentiment analysis.
Introduction
The first part of this paper reviews the extant literature on domain adaptation in sentiment analysis and highlights promising directions for research.
domain adaptation is mentioned in 4 sentences in this paper.
Green, Spence and Wang, Sida and Cer, Daniel and Manning, Christopher D.
Analysis
5.2 Domain Adaptation Analysis
Analysis
To understand the domain adaptation issue we compared the nonzero weights in the discriminative phrase table (PT) for Ar-En models tuned on bitext5k and MT05/6/8.
Introduction
Second, large bitexts often comprise many text genres (Haddow and Koehn, 2012), a virtue for classical dense MT models but a curse for high dimensional models: bitext tuning can lead to a significant domain adaptation problem when evaluating on standard test sets.
domain adaptation is mentioned in 3 sentences in this paper.
Wang, Lu and Cardie, Claire
Introduction
), we also make initial tries for domain adaptation so that our summarization method does not need human-written abstracts for each new meeting domain (e.g.
Results
Domain Adaptation Evaluation.
Results
We further examine our system in domain adaptation scenarios for decision and problem summarization, where we train the system on AMI for use on ICSI, and vice versa.
domain adaptation is mentioned in 3 sentences in this paper.
Bollegala, Danushka and Weir, David and Carroll, John
Introduction
Domain adaptation (DA) of sentiment classification becomes extremely challenging when the distributions of words in the source and the target domains are very different, because the features learnt from the source domain labeled reviews might not appear in the target domain reviews that must be classified.
O \
We evaluated the proposed method in two domain adaptation tasks: cross-domain POS tagging and cross-domain sentiment classification.
O \
Our experiments show that without requiring any task-specific customisations to our distribution prediction method, it outperforms competitive baselines and achieves comparable results to the current state-of-the-art domain adaptation methods.
domain adaptation is mentioned in 3 sentences in this paper.
Severyn, Aliaksei and Moschitti, Alessandro and Uryupina, Olga and Plank, Barbara and Filippova, Katja
Experiments
Additionally, we considered a setting including a small amount of training data from the target data (i.e., supervised domain adaptation).
Introduction
5.2), which give the possibility to study the domain adaptability of the supervised models by training on one category and testing on the other (and vice versa).
Related work
Therefore, rather than trying to build a specialized system for every new target domain, as it has been done in most prior work on domain adaptation (Blitzer et al., 2007; Daume, 2007), the domain adaptation problem boils down to finding a more robust system (Søgaard and Johannsen, 2012; Plank and Moschitti, 2013).
domain adaptation is mentioned in 3 sentences in this paper.