Experiments | We compare our framework with several other methods, including state-of-the-art machine learning, relation extraction and common domain adaptation methods. |
Experiments | Domain adaptive bootstrapping (DAB) This is an instance-based domain adaptation method for relation extraction (Xu et al., 2010).
Experiments | Structural correspondence learning (SCL) We use the feature-based domain adaptation approach of Blitzer et al. (2006).
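Experiments | A rough, hedged sketch of the SCL recipe (Blitzer et al., 2006) as it is typically applied: pivot features that are frequent in both domains are predicted from the remaining features on unlabeled data, and the stacked predictor weights are reduced with an SVD to obtain a shared low-dimensional projection. The function names and the use of numpy/scikit-learn below are illustrative assumptions, not the paper's implementation.

import numpy as np
from sklearn.linear_model import SGDClassifier

def scl_projection(X_unlabeled, pivot_idx, nonpivot_idx, k=50):
    # X_unlabeled : (n, d) binary feature matrix pooled from source + target.
    # pivot_idx   : indices of frequent features shared across both domains.
    # nonpivot_idx: indices of all remaining (non-pivot) features.
    # Assumes every pivot both occurs and is absent somewhere in X_unlabeled.
    W = []
    for p in pivot_idx:
        y = (X_unlabeled[:, p] > 0).astype(int)      # does the pivot occur?
        clf = SGDClassifier(loss="modified_huber", alpha=1e-4)
        clf.fit(X_unlabeled[:, nonpivot_idx], y)
        W.append(clf.coef_.ravel())
    W = np.array(W).T                                 # (n_nonpivot, n_pivots)
    U, _, _ = np.linalg.svd(W, full_matrices=False)
    return U[:, :k]                                   # projection theta

def augment_with_scl(X, nonpivot_idx, theta):
    # Concatenate the original features with their low-dimensional SCL projection.
    return np.hstack([X, X[:, nonpivot_idx] @ theta])

Experiments | A classifier trained on the source domain then uses this augmented representation, so weights learned on the projected dimensions can transfer to target-domain features that never appeared in the source training data.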
Introduction | To tackle these challenges, we propose a two-phase Robust Domain Adaptation (RDA) framework. |
Introduction | We compare the proposed two-phase framework with state-of-the-art domain adaptation baselines for the relation extraction task, and we find that our method outperforms the baselines. |
Problem Statement | This section defines the domain adaptation problem and describes our feature extraction scheme. |
Problem Statement | 3.1 Relation Extraction Domain Adaptation |
Problem Statement | We define domain adaptation as the problem of learning a classifier p for relation extraction in the target domain using the data sets D_l, D_u and D_s, s = 1, . . .
Related Work | We address this by augmenting a small labeled data set with other information in the domain adaptation setting. |
Related Work | Domain adaptation methods can be classified broadly into weakly-supervised adaptation (Daume and Marcu, 2007; Blitzer et al., 2006; Jiang and Zhai, 2007a; Jiang, 2009), and unsupervised adaptation (Pan et al., 2010; Blitzer et al., 2006; Plank and Moschitti, 2013). |
Robust Domain Adaptation | …, f_c using the one-versus-rest decoding for multi-class classification. Inspired by the Domain Adaptive Machine (Duan et al., 2009), we combine the reference predictions and the labeled data of the target domain to learn these functions:
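Robust Domain Adaptation | The equation that specifies these functions in the original paper is not reproduced here; as a hedged illustration of the DAM-style idea of mixing target-domain labels with reference predictions, the sketch below fits one binary least-squares function per relation class, treating averaged reference predictions on unlabeled target data as soft targets. The least-squares form, the uniform averaging, and all names are assumptions for illustration only.

import numpy as np

def fit_binary_function(X_l, y_l, X_u, ref_preds, gamma=1.0, lam=1.0):
    # X_l, y_l : labeled target-domain data, y in {-1, +1} for this class.
    # X_u      : unlabeled target-domain data.
    # ref_preds: (n_refs, n_unlabeled) reference classifiers' scores on X_u.
    # Minimizes ||X_l w - y_l||^2 + gamma * ||X_u w - p||^2 + lam * ||w||^2,
    # where p is the average of the reference predictions on X_u.
    p = ref_preds.mean(axis=0)
    A = X_l.T @ X_l + gamma * (X_u.T @ X_u) + lam * np.eye(X_l.shape[1])
    b = X_l.T @ y_l + gamma * (X_u.T @ p)
    return np.linalg.solve(A, b)

def predict_one_vs_rest(weight_vectors, X):
    # One-versus-rest decoding: the class whose function scores highest wins.
    scores = X @ np.column_stack(weight_vectors)
    return scores.argmax(axis=1)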
Abstract | This has fostered the development of domain adaptation techniques for relation extraction. |
Experiments | Moreover, in all cases, regularization methods remain effective for domain adaptation of RE.
Experiments | 6.3 Domain Adaptation with Word Embeddings |
Introduction | This is where we need to resort to domain adaptation (DA) techniques to adapt a model trained on one domain (the source domain) to another (the target domain).
Introduction | Unfortunately, there is very little work on domain adaptation for RE. |
Introduction | as word clusters in domain adaptation of RE (Plank and Moschitti, 2013) is motivated by its successes in semi-supervised methods (Chan and Roth, 2010; Sun et al., 2011) where word representations help to reduce data-sparseness of lexical information in the training data. |
Regularization | Since domain adaptation for RE shares traditional machine learning's interest in generalization performance, we would prefer a relation extractor that fits the source-domain data but also circumvents the overfitting problem.
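Regularization | As a concrete, purely illustrative instance of this preference, the snippet below trains an L2-regularized maximum-entropy relation classifier on source-domain features; the choice of scikit-learn and this specific penalty are assumptions, not necessarily what the paper uses.

from sklearn.linear_model import LogisticRegression

def train_regularized_extractor(X_source, y_source, C=0.1):
    # Smaller C means a stronger L2 penalty: the model fits the source
    # domain less aggressively, which may cost a little in-domain accuracy
    # but tends to generalize better to a new target domain.
    clf = LogisticRegression(penalty="l2", C=C, max_iter=1000)
    clf.fit(X_source, y_source)
    return clf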
Related Work | However, none of these works evaluate word embeddings for domain adaptation.
Related Work | Regarding domain adaptation, in representation
Related Work | Above all, we move one step further by evaluating the effectiveness of word embeddings for domain adaptation of RE, which is very different from the sequence-labeling focus of previous research.
Discussion | We also discuss other approaches to improve unsupervised domain adaptation for SMT. |
Discussion | To our knowledge, however, this is the first work to use multilingual topic models for domain adaptation in machine translation. |
Discussion | Domain adaptation for language models (Bellegarda, 2004; Wood and Teh, 2009) is an important avenue for improving machine translation. |
Experiments | Domain Adaptation using Topic Models We examine the effectiveness of using topic models for domain adaptation on standard SMT evaluation metrics—BLEU (Papineni et al., 2002) and TER (Snover et al., 2006). |
Experiments | We refer to the SMT model without domain adaptation as baseline. LDA marginally improves machine translation (less than half a BLEU point).
Introduction | Systems that are robust to systematic variation in the training set are said to exhibit domain adaptation.
Introduction | We show that ptLDA offers better domain adaptation than other topic models for machine translation. |
Topic Models for Machine Translation | Before considering past approaches using topic models to improve SMT, we briefly review lexical weighting and domain adaptation for SMT. |
Topic Models for Machine Translation | Domain Adaptation for SMT Training an SMT system using diverse data requires domain adaptation.
Topic Models for Machine Translation | This obviates the explicit smoothing used in other domain adaptation systems (Chiang et al., 2011). |
Abstract | We extend an existing MSA segmenter with a simple domain adaptation technique and new features in order to segment informal and dialectal Arabic text. |
Arabic Word Segmentation Model | 2.2 Domain adaptation |
Arabic Word Segmentation Model | The approach to domain adaptation we use is that of feature space augmentation (Daumé, 2007). |
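Arabic Word Segmentation Model | A minimal sketch of the feature-space augmentation idea (Daumé, 2007): every feature is duplicated into a shared copy and a domain-specific copy, so the learner can decide per feature whether its weight should be shared across domains or kept domain-specific. The numpy-based function below is an illustrative assumption, not the segmenter's actual implementation.

import numpy as np

def augment_features(X, domain_index, n_domains):
    # X: (n_samples, n_features) features of samples from a single domain.
    # Returns (1 + n_domains) * n_features columns: one shared copy of the
    # features plus one copy per domain, zero everywhere except in the
    # block belonging to this sample's own domain.
    n, f = X.shape
    out = np.zeros((n, f * (1 + n_domains)))
    out[:, :f] = X                               # shared ("general") copy
    start = f * (1 + domain_index)
    out[:, start:start + f] = X                  # domain-specific copy
    return out

Arabic Word Segmentation Model | At training time, examples from each domain (e.g., MSA and dialectal text) are augmented with their respective domain indices and concatenated, and any standard linear model can then be trained on the joint representation.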
Error Analysis | We sampled 100 errors randomly from all errors made by our final model (trained on all three datasets with domain adaptation and additional features) on the ARZ development set; see Table 4. |
Error Analysis | In this paper we demonstrate substantial gains on Arabic clitic segmentation for both formal and dialectal text using a single model with dialect-independent features and a simple domain adaptation strategy. |
Error Analysis | However, as data for other Arabic dialects and genres becomes available, we expect that the model’s simplicity and the domain adaptation method we use will allow the system to be applied to these dialects with minimal effort and without a loss of performance in the original domains. |
Experiments | Using domain adaptation alone helps performance on two of the three datasets (with a statistically insignificant decrease on broadcast news), and our additional features further improve performance.
Introduction | Third, we show that dialectal data can be handled in the framework of domain adaptation . |
Introduction | Domain adaptation (DA) of sentiment classification becomes extremely challenging when the distributions of words in the source and the target domains are very different, because the features learnt from the source domain labeled reviews might not appear in the target domain reviews that must be classified. |
O \ | We evaluated the proposed method in two domain adaptation tasks: cross-domain POS tagging and cross-domain sentiment classification. |
O \ | Our experiments show that, without requiring any task-specific customisations to our distribution prediction method, it outperforms competitive baselines and achieves results comparable to the current state-of-the-art domain adaptation methods.
Experiments | Additionally, we considered a setting that includes a small amount of training data from the target domain (i.e., supervised domain adaptation).
Introduction | 5.2), which make it possible to study the domain adaptability of the supervised models by training on one category and testing on the other (and vice versa).
Related work | Therefore, rather than trying to build a specialized system for every new target domain, as has been done in most prior work on domain adaptation (Blitzer et al., 2007; Daumé, 2007), the domain adaptation problem boils down to finding a more robust system (Søgaard and Johannsen, 2012; Plank and Moschitti, 2013).