Index of papers in Proc. ACL 2014 that mention
  • linear regression
Wang, William Yang and Hua, Zhenhao
Conclusion
Focusing on the three financial-crisis-related datasets, the proposed model significantly outperforms the standard linear regression method and strong discriminative support vector regression baselines.
Experiments
The baselines are standard squared-loss linear regression, linear-kernel SVM, and nonlinear (Gaussian-kernel) SVM.
Experiments
We use the Statistics Toolbox's linear regression implementation in MATLAB, and LibSVM (Chang and Lin, 2011) for training and testing the SVM models.
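The squared-loss linear regression baseline these excerpts refer to can be sketched in a few lines. This is not the papers' code; it is a minimal numpy illustration assuming ordinary least squares with a bias term, solved in closed form as a toolbox implementation would do internally.

```python
import numpy as np

def fit_linear_regression(X, y):
    """Return weights w (bias first) minimizing the squared loss ||Xb w - y||^2."""
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])  # prepend a bias column
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def predict(w, X):
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])
    return Xb @ w

# Sanity check on synthetic noiseless data: OLS recovers the true weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([0.5, 2.0, -1.0, 3.0])           # bias + 3 feature weights
y = np.hstack([np.ones((200, 1)), X]) @ true_w
w = fit_linear_regression(X, y)
print(np.round(w, 3))  # recovers the true weights on noiseless data
```

The SVM baselines would be trained on the same feature matrix `X`; only the loss and (for the Gaussian kernel) the feature mapping differ.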
Experiments
On the pre-2009 dataset, we see that the linear regression and linear SVM perform reasonably well, but the Gaussian kernel SVM performs less well, probably due to overfitting.
Introduction
To evaluate the performance of our approach, we compare with a standard squared-loss linear regression baseline, as well as strong baselines such as linear and nonlinear support vector regression.
Introduction
• Our results significantly outperform standard linear regression and strong SVM baselines.
Related Work
Traditional discriminative models, such as linear regression and linear SVM, have been very popular in various text regression tasks, such as predicting movie revenues from reviews (Joshi et al., 2010), understanding the geographic lexical variation (Eisenstein et al., 2010), and predicting food prices from menus (Chahuneau et al., 2012).
linear regression is mentioned in 11 sentences in this paper.
Topics mentioned in this paper:
Morin, Emmanuel and Hazem, Amir
Bilingual Lexicon Extraction
First, while they experienced the linear regression model, we propose to contrast different regression models.
Bilingual Lexicon Extraction
As we cannot claim that the prediction of word co-occurrence counts is a linear problem, we consider, in addition to the simple linear regression model (Lin), a generalized linear model, namely the logistic regression model (Logit), and nonlinear regression models such as the polynomial regression model (Polyn) of order n. Given an input vector x ∈ R^m, where x_1, …, x_m represent features, we find a prediction ŷ ∈ R for the co-occurrence count of a pair of words y ∈ R using one of the regression models presented below:
Experiments and Results
We contrast the simple linear regression model (Lin) with the second- and third-order polynomial regressions (Poly2 and Poly3) and the logistic regression model (Logit).
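The contrast between the simple linear model and the higher-order polynomial models can be sketched as follows. This is a hypothetical numpy illustration, not the authors' code: each Polyn model is fit as a least-squares problem over a polynomial design matrix, and a linear fit underfits when the underlying count curve is genuinely nonlinear.

```python
import numpy as np

def poly_fit(x, y, order):
    """Fit y ≈ sum_k w_k * x^k for k = 0..order; return the coefficients."""
    X = np.vander(x, N=order + 1, increasing=True)  # columns x^0, x^1, ..., x^order
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def poly_predict(w, x):
    X = np.vander(x, N=len(w), increasing=True)
    return X @ w

# A quadratic target stands in for a nonlinear co-occurrence-count curve.
x = np.linspace(0.0, 4.0, 50)
y = 1.0 + 0.5 * x + 2.0 * x ** 2

results = {}
for name, order in [("Lin", 1), ("Poly2", 2), ("Poly3", 3)]:
    w = poly_fit(x, y, order)
    results[name] = np.mean((poly_predict(w, x) - y) ** 2)
    print(f"{name}: MSE = {results[name]:.4f}")
# Lin underfits the quadratic target; Poly2 and Poly3 fit it (near) exactly.
```

The logistic model (Logit) would replace the polynomial design matrix with a generalized linear model and a logistic link; it is omitted here to keep the sketch short.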
Experiments and Results
In this experiment, we chose to use the linear regression model (Lin) for the prediction part.
linear regression is mentioned in 5 sentences in this paper.
Topics mentioned in this paper:
Wang, Chang and Fan, James
Experiments
We compare our approaches to three state-of-the-art approaches: SVM with convolution tree kernels (Collins and Duffy, 2001), linear regression, and SVM with linear kernels (Scholkopf and Smola, 2002).
Experiments
The SVM with linear kernels and the linear regression model used the same features as the manifold models.
Experiments
The tree kernel-based approach and linear regression achieved similar F1 scores, while linear SVM made a 5% improvement over them.
Relation Extraction with Manifold Models
• The algorithm is computationally efficient at apply time (as fast as linear regression).
linear regression is mentioned in 5 sentences in this paper.
Topics mentioned in this paper:
Soni, Sandeep and Mitra, Tanushree and Gilbert, Eric and Eisenstein, Jacob
Modeling factuality judgments
Table 3: Linear regression error rates for each feature group.
Modeling factuality judgments
We performed another set of linear regressions, again using the mean certainty rating as the dependent variable.
Modeling factuality judgments
Figure 7: Linear regression coefficients for frequently-occurring cue words.
linear regression is mentioned in 4 sentences in this paper.
Topics mentioned in this paper: