Index of papers in Proc. ACL that mention
  • linear regression
Wang, William Yang and Hua, Zhenhao
Conclusion
Focusing on the three financial-crisis-related datasets, the proposed model significantly outperforms the standard linear regression method in statistics and strong discriminative support vector regression baselines.
Experiments
The baselines are standard squared-loss linear regression, linear kernel SVM, and nonlinear (Gaussian) kernel SVM.
Experiments
We use the Statistical Toolbox’s linear regression implementation in Matlab, and LibSVM (Chang and Lin, 2011) for training and testing the SVM models.
Experiments
On the pre-2009 dataset, we see that the linear regression and linear SVM perform reasonably well, but the Gaussian kernel SVM performs less well, probably due to overfitting.
Introduction
To evaluate the performance of our approach, we compare with a standard squared-loss linear regression baseline, as well as strong baselines such as linear and nonlinear support vector regression.
Introduction
• Our results significantly outperform standard linear regression and strong SVM baselines.
Related Work
Traditional discriminative models, such as linear regression and linear SVM, have been very popular in various text regression tasks, such as predicting movie revenues from reviews (Joshi et al., 2010), understanding the geographic lexical variation (Eisenstein et al., 2010), and predicting food prices from menus (Chahuneau et al., 2012).
linear regression is mentioned in 11 sentences in this paper.
Topics mentioned in this paper:
Foster, Mary Ellen and Giuliani, Manuel and Knoll, Alois
Building Performance Functions
The PARADISE model uses stepwise multiple linear regression to predict subjective user satisfaction based on measures representing the performance dimensions of task success, dialogue quality, and dialogue efficiency, resulting in a predictor function of the following form:
Building Performance Functions
Stepwise linear regression produces coefficients (wi) describing the relative contribution of each predictor to the user satisfaction.
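The stepwise procedure described above can be sketched as a greedy forward-selection loop: at each step, add the predictor that most reduces residual error, and stop when the gain becomes negligible. A minimal illustration with numpy; the stopping rule and variable names are assumptions, not the PARADISE implementation:

```python
import numpy as np

def forward_stepwise(X, y, tol=0.05):
    """Greedy forward stepwise linear regression: repeatedly add the
    predictor that most reduces the residual sum of squares (RSS),
    stopping when the relative improvement falls below `tol`.
    A simplified sketch, not the exact PARADISE procedure."""
    n, p = X.shape
    selected, remaining = [], list(range(p))
    Xc = np.ones((n, 1))                      # intercept-only model
    w, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    best_rss = float(np.sum((y - Xc @ w) ** 2))
    while remaining:
        candidates = []
        for j in remaining:
            Xj = np.column_stack([Xc, X[:, j]])
            wj, *_ = np.linalg.lstsq(Xj, y, rcond=None)
            candidates.append((float(np.sum((y - Xj @ wj) ** 2)), j))
        rss, j = min(candidates)
        if best_rss - rss < tol * best_rss:   # no meaningful gain: stop
            break
        best_rss = rss
        selected.append(j)
        remaining.remove(j)
        Xc = np.column_stack([Xc, X[:, j]])
    w, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    return selected, w                        # w[0] is the intercept
```

The returned coefficients play the role of the wi above: they describe the relative contribution of each retained predictor.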
Building Performance Functions
Using stepwise linear regression, we computed a predictor function for each of the subjective measures that we gathered during our study: the mean score for each of the individual user-satisfaction categories (Table 4), the mean score across the whole questionnaire (the last line of Table 4), as well as the difference between the users’ emotional states before and after the study (the last line of Table 5).
Discussion
(2008) for linear regression models similar to those presented here were between 0.22 and 0.57.
Introduction
PARADISE uses stepwise multiple linear regression to model user satisfaction based on measures representing the performance dimensions of task success, dialogue quality, and dialogue efficiency, and has been applied to a wide range of systems (e.g., Walker et al., 2000; Litman and Pan, 2002; Möller et al., 2008).
linear regression is mentioned in 5 sentences in this paper.
Topics mentioned in this paper:
Morin, Emmanuel and Hazem, Amir
Bilingual Lexicon Extraction
First, whereas they experimented with the linear regression model, we propose to contrast different regression models.
Bilingual Lexicon Extraction
As we cannot claim that the prediction of word co-occurrence counts is a linear problem, we consider, in addition to the simple linear regression
Bilingual Lexicon Extraction
model (Lin), a generalized linear model, namely the logistic regression model (Logit), and nonlinear regression models such as the polynomial regression model (Polyn) of order n. Given an input vector x ∈ R^m, where x_1, ..., x_m represent features, we find a prediction ŷ ∈ R for the co-occurrence count of a word pair y ∈ R using one of the regression models presented below:
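The excerpt names three model families (Lin, Polyn, Logit) without showing their fits. A minimal, illustrative sketch of all three for a one-dimensional feature x predicting a co-occurrence count y; the saturation level L in the logistic fit and the linearizing transform are simplifying assumptions, not the authors' exact estimation procedure:

```python
import numpy as np

def fit_poly(x, y, order):
    """Least-squares polynomial fit; order=1 is the simple linear
    model (Lin), order=2 or 3 gives Poly2/Poly3."""
    return np.polyfit(x, y, order)

def fit_logistic(x, y, L):
    """Fit y ≈ L / (1 + exp(-(a*x + b))) by linearizing: the logit
    transform log(y / (L - y)) = a*x + b reduces the fit to ordinary
    least squares. L (the saturation level) is assumed known here,
    and 0 < y < L is required."""
    z = np.log(y / (L - y))
    a, b = np.polyfit(x, z, 1)
    return a, b
```

Each model is then evaluated by how well its predicted counts match the observed co-occurrence counts on held-out word pairs.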
Experiments and Results
We contrast the simple linear regression model (Lin) with the second and the third order polynomial regressions (Poly2 and Poly3) and the logistic regression model (Logit).
Experiments and Results
In this experiment, we chose to use the linear regression model (Lin) for the prediction part.
linear regression is mentioned in 5 sentences in this paper.
Topics mentioned in this paper:
Wang, Chang and Fan, James
Experiments
We compare our approaches to three state-of-the-art approaches including SVM with convolution tree kernels (Collins and Duffy, 2001), linear regression and SVM with linear kernels (Scholkopf and Smola, 2002).
Experiments
The SVM with linear kernels and the linear regression model used the same features as the manifold models.
Experiments
The tree kernel-based approach and linear regression achieved similar F1 scores, while linear SVM made a 5% improvement over them.
Relation Extraction with Manifold Models
• The algorithm is computationally efficient at apply time (as fast as linear regression).
linear regression is mentioned in 5 sentences in this paper.
Topics mentioned in this paper:
Cao, Guihong and Robertson, Stephen and Nie, Jian-Yun
Regression Model for Alteration Selection
5.1 Linear Regression Model
Regression Model for Alteration Selection
There are several regression models, ranging from the simplest linear regression model to nonlinear alternatives such as neural networks (Duda et al., 2001) and regression SVMs (Bishop, 2006).
Regression Model for Alteration Selection
For simplicity, we use the linear regression model here.
linear regression is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Lampos, Vasileios and Preoţiuc-Pietro, Daniel and Cohn, Trevor
Abstract
Most existing methods treat these problems as linear regression, learning to relate word frequencies and other simple features to a known response variable (e.g., voting intention polls or financial indicators).
Experiments
The first makes a constant prediction of the mean value of the response variable y in the training set (By); the second predicts the last value of y (Blast); and the third baseline (LEN) is a linear regression over the terms using elastic net regularisation.
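The three baselines above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the mean and last-value predictors are exact, but the elastic net is stood in for by a closed-form ridge (L2) fit to keep the sketch self-contained:

```python
import numpy as np

def baseline_mean(y_train):
    """Constant prediction: the mean of the training response."""
    return float(np.mean(y_train))

def baseline_last(y_train):
    """Predict the last observed value of the response."""
    return float(y_train[-1])

def ridge_fit(X, y, alpha=1.0):
    """Regularised linear regression over term features (closed-form
    ridge, used here as a stand-in for the elastic net)."""
    n, p = X.shape
    Xb = np.column_stack([np.ones(n), X])   # prepend intercept column
    reg = alpha * np.eye(p + 1)
    reg[0, 0] = 0.0                         # do not penalise the intercept
    return np.linalg.solve(Xb.T @ Xb + reg, Xb.T @ y)

def ridge_predict(w, X):
    return np.column_stack([np.ones(len(X)), X]) @ w
```

A true elastic net adds an L1 term that zeroes out weights of irrelevant terms, which matters when the feature space is a large vocabulary.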
Introduction
The main theme of the aforementioned works is linear regression between word frequencies and a real-world quantity.
Methods
5 and 7), where we individually learn {W, β} and then {U, β}; each step of the process is a standard linear regression problem with an ℓ1/ℓ2 regulariser.
linear regression is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Sheykh Esmaili, Kyumars and Salavati, Shahin
Empirical Study
Table 2: Heaps’ Linear Regression
Empirical Study
As the curves in Figure 4 and the linear regression coefficients in Table 2 show, the growth rate of distinct words in both Sorani and Kurmanji Kurdish is higher than in Persian and English.
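Heaps' law, V = k·N^β (vocabulary size V as a function of token count N), is linear in log–log space, so its coefficients can be estimated with ordinary linear regression, as in Table 2. A minimal sketch under that assumption; the function and variable names are illustrative:

```python
import numpy as np

def fit_heaps(n_tokens, n_types):
    """Fit Heaps' law V = k * N**beta by linear regression in
    log-log space: log V = log k + beta * log N."""
    beta, log_k = np.polyfit(np.log(n_tokens), np.log(n_types), 1)
    return np.exp(log_k), beta
```

A higher fitted β corresponds to a faster-growing vocabulary, which is the comparison the excerpt draws between Kurdish and Persian or English.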
Empirical Study
Table 3: Zipf’s Linear Regression
linear regression is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Soni, Sandeep and Mitra, Tanushree and Gilbert, Eric and Eisenstein, Jacob
Modeling factuality judgments
Table 3: Linear regression error rates for each feature group.
Modeling factuality judgments
We performed another set of linear regressions, again using the mean certainty rating as the dependent variable.
Modeling factuality judgments
Figure 7: Linear regression coefficients for frequently-occurring cue words.
linear regression is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Pado, Sebastian and Galley, Michel and Jurafsky, Dan and Manning, Christopher D.
Expt. 2: Predicting Pairwise Preferences
We reuse the linear regression framework from Section 2 and predict pairwise preferences by predicting two absolute scores (as before) and comparing them.
Introduction
We first explore the combination of traditional scores into a more robust ensemble metric with linear regression.
Regression-based MT Quality Prediction
We follow a similar idea, but use a regularized linear regression to directly predict human ratings.
linear regression is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Cai, Qingqing and Yates, Alexander
Experiments
Figure 3 shows a Precision-Recall (PR) curve for MATCHER and three baselines: a “Frequency” model that ranks candidate matches for TD by their frequency during the candidate identification step; a “Pattern” model that uses MATCHER’s linear regression model for ranking, but is restricted to only the pattern-based features; and an “Extractions” model that similarly restricts the ranking model to ReVerb features.
Extending a Semantic Parser Using a Schema Alignment
For W, we use a linear regression model whose features are the score from MATCHER, the probabilities from the Syn and Sem NBC models, and the average weight of all lexical entries in UBL with matching syntax and semantics.
Textual Schema Matching
The regression model is a linear regression with least-squares parameter estimation; we experimented with support vector regression models with nonlinear kernels, with no significant improvements in accuracy.
linear regression is mentioned in 3 sentences in this paper.
Topics mentioned in this paper: