Abstract | Moreover, we have introduced a regression model that boosts the observations of word co-occurrences used in the context-based projection method. |
Bilingual Lexicon Extraction | We then present an extension of this approach based on regression models.
Bilingual Lexicon Extraction | First, while they experimented with the linear regression model, we propose to contrast different regression models.
Bilingual Lexicon Extraction | As most regression models have already been described in great detail (Christensen, 1997; Agresti, 2007), the derivation of most models is only briefly introduced in this work. |
Experiments and Results | Table 6: Results (MAP %) of the standard approach using different regression models on the balanced breast cancer and diabetes corpora |
Experiments and Results | 4.2.1 Regression Models Comparison |
Experiments and Results | We contrast the simple linear regression model (Lin) with the second- and third-order polynomial regressions (Poly2 and Poly3) and the logistic regression model (Logit).
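Experiments and Results | As a concrete illustration of contrasting these model families, the sketch below fits linear, second- and third-order polynomial, and logistic curves to hypothetical co-occurrence count pairs; the data and parameter choices are illustrative only, not the paper's actual setup.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical data: x = co-occurrence counts of word pairs in a small corpus,
# y = counts of the same pairs in a larger corpus.
x = np.array([1, 2, 3, 5, 8, 13, 21, 34], dtype=float)
y = np.array([3, 6, 11, 20, 34, 48, 56, 60], dtype=float)

# Lin, Poly2, Poly3: least-squares polynomial fits of degree 1, 2 and 3.
lin = np.polynomial.Polynomial.fit(x, y, deg=1)
poly2 = np.polynomial.Polynomial.fit(x, y, deg=2)
poly3 = np.polynomial.Polynomial.fit(x, y, deg=3)

# Logit: a logistic (sigmoid-shaped) curve fitted by nonlinear least squares.
def logistic(x, upper, rate, midpoint):
    return upper / (1.0 + np.exp(-rate * (x - midpoint)))

params, _ = curve_fit(logistic, x, y, p0=[y.max(), 0.2, np.median(x)], maxfev=10000)

# Boosted co-occurrence estimates for an unseen small-corpus count of 6:
print(lin(6.0), poly2(6.0), poly3(6.0), logistic(6.0, *params))
```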
Introduction | To make them more reliable, our second contribution is to contrast different regression models in order to boost the observations of word co-occurrences. |
Copula Models for Text Regression | Our proposed semiparametric copula regression model takes a different perspective. |
Copula Models for Text Regression | Then we describe the proposed semiparametric Gaussian copula text regression model.
Copula Models for Text Regression | We formulate the copula regression model as follows. |
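Copula Models for Text Regression | The core idea behind a semiparametric Gaussian copula regression (nonparametric marginals estimated with the empirical CDF, a parametric Gaussian copula for the dependence structure) can be sketched as follows; this is a simplified illustration on synthetic data, not the paper's implementation.

```python
import numpy as np
from scipy.stats import norm, rankdata

# Hypothetical data: X = feature matrix (n x d), y = real-valued targets (n,).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.3, size=200)

def normal_scores(col):
    """Empirical CDF (rescaled ranks) followed by the Gaussian quantile function."""
    u = rankdata(col) / (len(col) + 1.0)
    return norm.ppf(u)

# Semiparametric step: transform each marginal to normal scores, then fit the
# Gaussian copula via the correlation matrix of the transformed data.
D = np.column_stack([X, y])
Z = np.column_stack([normal_scores(D[:, j]) for j in range(D.shape[1])])
Sigma = np.corrcoef(Z, rowvar=False)

# Conditional Gaussian mean of the target score given the feature scores.
Sxx, Sxy = Sigma[:-1, :-1], Sigma[:-1, -1]
w = np.linalg.solve(Sxx, Sxy)
z_hat = Z[:, :-1] @ w

# Map predicted normal scores back through the empirical quantiles of y.
u_hat = np.clip(norm.cdf(z_hat), 1e-6, 1 - 1e-6)
y_hat = np.quantile(np.sort(y), u_hat)
print(np.corrcoef(y, y_hat)[0, 1])
```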
Experiments | In the first experiment, we compare the proposed semiparametric Gaussian copula regression model to three baselines on three datasets with all features. |
Experiments | On the post-2009 dataset, none of the results from the linear and nonlinear SVM models can match up with the linear regression model, but our proposed copula model still improves over all baselines by a large margin.
Experiments | To understand the learning curve of our proposed copula regression model, we use the 25%, 50%, 75% subsets from the training data, and evaluate all four models.
Adaptive MT Quality Estimation | The above QE regression model is trained on a portion of the sentences from the input document, and evaluated on the remaining sentences from the same document. |
Adaptive MT Quality Estimation | Therefore it is necessary to build a QE regression model that is robust to different document-specific translation models.
Adaptive MT Quality Estimation | We compute the TER of Tq using Rq as the reference, and train a QE regression model with the 26 features proposed in section 4.1.
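Adaptive MT Quality Estimation | A minimal sketch of this training step is shown below, assuming the 26 features and the per-sentence TER values have already been computed; the ridge regressor and the synthetic data are stand-ins, not necessarily the learner used in the paper.

```python
import numpy as np
from sklearn.linear_model import Ridge            # stand-in learner; the paper's regressor may differ
from sklearn.model_selection import train_test_split

# Hypothetical inputs: feats = the 26 QE features for each translated sentence T_q,
# ter = TER of T_q computed against its reference R_q (assumed precomputed).
rng = np.random.default_rng(1)
feats = rng.random((500, 26))
ter = np.clip(feats @ rng.random(26) / 13.0 + rng.normal(scale=0.05, size=500), 0.0, None)

# Train the QE regressor on part of the document and evaluate on the held-out rest.
X_tr, X_te, y_tr, y_te = train_test_split(feats, ter, test_size=0.3, random_state=0)
qe = Ridge(alpha=1.0).fit(X_tr, y_tr)
pred = np.clip(qe.predict(X_te), 0.0, None)        # TER cannot be negative
print("MAE:", np.mean(np.abs(pred - y_te)))
```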
Related Work | Soricut and Echihabi (2010b) proposed various regression models to predict the expected BLEU score of a given sentence translation hypothesis. |
Static MT Quality Estimation | We experiment with several models: a linear regression model, a decision tree-based regression model, and an SVM model.
Static MT Quality Estimation | Our experiments show that the decision tree-based regression model obtains the highest correlation coefficients (0.53) and lowest RMSE (0.23) in both the training and test sets. |
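Static MT Quality Estimation | The comparison can be reproduced in outline as follows; the feature values, hyperparameters, and synthetic data below are placeholders rather than the paper's actual configuration.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Hypothetical QE data: X = sentence-level features, y = quality scores.
rng = np.random.default_rng(2)
X = rng.random((400, 10))
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] ** 2 + rng.normal(scale=0.05, size=400)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
models = {
    "linear regression": LinearRegression(),
    "decision tree regression": DecisionTreeRegressor(max_depth=5, random_state=0),
    "SVM (RBF)": SVR(),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    r, _ = pearsonr(y_te, pred)
    rmse = np.sqrt(mean_squared_error(y_te, pred))
    print(f"{name}: r={r:.2f}, RMSE={rmse:.2f}")
```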
Intervention Prediction Models | Our logistic regression model uses the following two types of features: thread-only features and aggregated post features.
Intervention Prediction Models | p_i and h_i represent the posts of the thread and their latent categories respectively; r represents the instructor's intervention and φ(t) represents the nonstructural features used by the logistic regression model.
Intervention Prediction Models | The logistic regression model is good at exploiting the thread level features but not the content of individual posts. |
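Intervention Prediction Models | A minimal sketch of such a thread-level intervention predictor, assuming the thread-only and aggregated post features have already been extracted (the feature blocks below are hypothetical):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical feature blocks, one row per thread: thread-only features
# (e.g. thread length, forum type) and aggregated post features
# (e.g. post-level indicators averaged over the thread).
rng = np.random.default_rng(3)
thread_feats = rng.random((300, 5))
agg_post_feats = rng.random((300, 8))
intervened = (thread_feats[:, 0] + agg_post_feats[:, 0]
              + rng.normal(scale=0.2, size=300)) > 1.0

# Concatenate the two feature groups and train the intervention classifier.
X = np.hstack([thread_feats, agg_post_feats])
clf = LogisticRegression(max_iter=1000).fit(X, intervened)
print("P(intervention) for the first thread:", clf.predict_proba(X[:1])[0, 1])
```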
Introduction | The first uses a logistic regression model that primarily incorporates high level information about threads and posts. |
Experiments | The SVM with linear kernels and the linear regression model used the same features as the manifold models. |
Experiments | By integrating unlabeled data, the manifold model under setting (1) made a 15% improvement over the linear regression model on F1 score, where the improvement was significant across all relations.
Introduction | Our model goes beyond regular regression models in that it applies constraints to those coefficients, such that the topology of the given data manifold will be respected. |
Introduction | Computing the optimal weights in a regression model and preserving manifold topology are conflicting objectives, we |
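Introduction | One common way to encode such a constraint is a graph-Laplacian penalty that keeps the regression predictions smooth over a k-nearest-neighbor graph of the data; the sketch below illustrates this general idea on synthetic data and is not the authors' exact formulation.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

# Hypothetical data: X (n x d) features, y (n,) regression targets.
rng = np.random.default_rng(4)
X = rng.normal(size=(150, 6))
y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.1, size=150)

# A graph Laplacian over a k-NN graph encodes the manifold topology of the data.
W = kneighbors_graph(X, n_neighbors=5, mode="connectivity", include_self=False)
W = 0.5 * (W + W.T)                                  # symmetrize the adjacency
L = np.diag(np.asarray(W.sum(axis=1)).ravel()) - W.toarray()

# Trade off the two objectives: fit the labels vs. keep predictions smooth on the manifold,
#   min_w ||Xw - y||^2 + lam_ridge ||w||^2 + lam_manifold (Xw)^T L (Xw)
lam_ridge, lam_manifold = 0.1, 1.0
A = X.T @ X + lam_ridge * np.eye(X.shape[1]) + lam_manifold * X.T @ L @ X
w = np.linalg.solve(A, X.T @ y)
print("learned weights:", np.round(w, 2))
```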
Experiments | In contrast, the Persona Regression model of Bamman et al. |
Experiments | The Persona Regression model of Bamman et al. |
Experiments | As expected, the Persona Regression model performs best at hypothesis class B (correctly judging two characters from the same author to be more similar to each other than to a character from a different author); this behavior is encouraged in this model by allowing an author (as an external metadata variable) to directly influence |
Experimental Results | The results reported are averaged over a 5-fold cross validation of the multiple regression model, where 80% of the SM data
Experimental Setup | Subsequently, the feature extraction stage (a VSM or a MaxEnt model as the case may be) generates the syntactic complexity feature, which is then incorporated in a multiple linear regression model to generate a score.
Experimental Setup | As in prior studies, here too the level of agreement is evaluated by means of the weighted kappa measure as well as unrounded and rounded Pearson’s correlations between machine and human scores (since the output of the regression model can either be rounded or regarded as is). |
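Experimental Setup | The scoring and evaluation pipeline can be sketched as follows, with hypothetical features and human scores on a 1-4 scale; the weighted kappa and the rounded/unrounded Pearson correlations are computed as described above.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LinearRegression
from sklearn.metrics import cohen_kappa_score

# Hypothetical data: features include a syntactic complexity measure; human scores are 1-4.
rng = np.random.default_rng(5)
X = rng.random((200, 4))                     # e.g. last column = syntactic complexity feature
human = np.clip(np.round(1 + 3 * X.mean(axis=1) + rng.normal(scale=0.3, size=200)), 1, 4)

reg = LinearRegression().fit(X, human)
machine = reg.predict(X)
machine_rounded = np.clip(np.round(machine), 1, 4).astype(int)

print("unrounded Pearson r:", round(pearsonr(human, machine)[0], 3))
print("rounded Pearson r:  ", round(pearsonr(human, machine_rounded)[0], 3))
print("weighted kappa:     ",
      round(cohen_kappa_score(human.astype(int), machine_rounded, weights="quadratic"), 3))
```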