Index of papers in Proc. ACL 2013 that mention
  • logistic regression
Choi, Jinho D. and McCallum, Andrew
Selectional branching
This can be expressed as a logistic regression:
Selectional branching
Algorithm 2 shows our adaptation of ADAGRAD with logistic regression for multi-class classification.
Selectional branching
Note that when used with logistic regression, ADAGRAD takes a regular gradient instead of a subgradient method for updating weights.
logistic regression is mentioned in 5 sentences in this paper.
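As a sketch of the update quoted above, here is a minimal NumPy implementation of ADAGRAD for multi-class logistic regression. It is not the authors' Algorithm 2; the step size eta, epoch count, and toy data are invented for illustration.

    import numpy as np

    def softmax(z):
        z = z - z.max(axis=-1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    def adagrad_logreg(X, y, n_classes, eta=0.1, epochs=10, eps=1e-8):
        # Because the logistic loss is differentiable, each ADAGRAD step
        # uses the regular gradient; no subgradient is needed.
        n, d = X.shape
        W = np.zeros((d, n_classes))
        G = np.zeros_like(W)           # running sum of squared gradients
        Y = np.eye(n_classes)[y]       # one-hot labels
        for _ in range(epochs):
            P = softmax(X @ W)         # predicted class probabilities
            grad = X.T @ (P - Y) / n   # regular gradient of the log-loss
            G += grad ** 2
            W -= eta * grad / (np.sqrt(G) + eps)  # per-weight adaptive step
        return W

    # toy usage
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))
    y = rng.integers(0, 3, size=100)
    W = adagrad_logreg(X, y, n_classes=3)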
Lucas, Michael and Downey, Doug
Problem Definition
4.2.2 Logistic Regression
Problem Definition
We implemented Logistic Regression using L2-Normalization, finding this to outperform L1-Normalized and non-normalized versions.
Problem Definition
The strength of the normalization in the logistic regression required cross-validation, which we limited to 20 values logarithmically spaced between 10^-4 and 10^4.
logistic regression is mentioned in 5 sentences in this paper.
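A minimal scikit-learn sketch of that cross-validation grid (an assumption; the paper's own implementation is not given here). Note that scikit-learn's C is the inverse of the regularization strength, so the same 20 log-spaced values are searched.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegressionCV

    # 20 candidate values, logarithmically spaced between 10^-4 and 10^4
    Cs = np.logspace(-4, 4, 20)

    X, y = make_classification(n_samples=200, n_features=20, random_state=0)  # toy data
    clf = LogisticRegressionCV(Cs=Cs, cv=5, penalty="l2").fit(X, y)
    print("selected C:", clf.C_[0])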
Recasens, Marta and Danescu-Niculescu-Mizil, Cristian and Jurafsky, Dan
Automatically Identifying Biased Language
We trained a logistic regression model on a feature vector for every word that appears in the NPOV sentences from the training set, with the bias-inducing words as the positive class, and all the other words as the negative class.
Automatically Identifying Biased Language
The types of features used in the logistic regression model are listed in Table 3, together with their value space.
Automatically Identifying Biased Language
Logistic regression model that only uses the features based on Liu et al.’s (2005) lexicons of positive and negative words (i.e., features 26–29).
Conclusions
However, our logistic regression model reveals that epistemological and other features can usefully augment the traditional sentiment and subjectivity features for addressing the difficult task of identifying the bias-inducing word in a biased sentence.
logistic regression is mentioned in 5 sentences in this paper.
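A minimal sketch of that per-word setup, using invented placeholder features (the paper's actual feature types are listed in its Table 3) and scikit-learn as a stand-in:

    from sklearn.feature_extraction import DictVectorizer
    from sklearn.linear_model import LogisticRegression

    # One feature dict per word occurrence in an NPOV sentence;
    # label 1 marks the bias-inducing word, 0 all other words.
    # The feature names here are hypothetical placeholders.
    words = [
        {"lemma": "claim", "pos": "VB", "in_subjectivity_lexicon": True},
        {"lemma": "say",   "pos": "VB", "in_subjectivity_lexicon": False},
        {"lemma": "the",   "pos": "DT", "in_subjectivity_lexicon": False},
    ]
    labels = [1, 0, 0]

    vec = DictVectorizer()
    X = vec.fit_transform(words)
    clf = LogisticRegression().fit(X, labels)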
O'Connor, Brendan and Stewart, Brandon M. and Smith, Noah A.
Data
Deduplication removes 8.5% of articles. For topic filtering, we apply a series of keyword filters to remove sports and finance news, and also apply a text classifier for diplomatic and military news, trained on several hundred manually labeled news articles (using L1-regularized logistic regression with unigram and bigram features).
Experiments
We also create a baseline L1-regularized logistic regression that uses normalized dependency path counts as the features (10,457 features).
Experiments
The verb-path logistic regression performs strongly at AUC 0.62; it outperforms all of the vanilla frame models.
Experiments
Green line is the verb-path logistic regression baseline.
logistic regression is mentioned in 4 sentences in this paper.
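A minimal sketch of the topic-filtering classifier described above, assuming scikit-learn and toy articles; the liblinear solver is one way to fit L1-regularized logistic regression over unigram and bigram counts.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy stand-ins for the manually labeled articles:
    # 1 = diplomatic/military news, 0 = other.
    docs = ["troops crossed the border", "stocks fell sharply today",
            "envoy met the foreign minister", "the team won the final"]
    labels = [1, 0, 1, 0]

    clf = make_pipeline(
        CountVectorizer(ngram_range=(1, 2)),                   # unigram + bigram features
        LogisticRegression(penalty="l1", solver="liblinear"),  # sparse L1 penalty
    ).fit(docs, labels)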
Yancheva, Maria and Rudzicz, Frank
Discussion and future work
While past research has used logistic regression as a binary classifier (Newman et al., 2003), our experiments show that the best-performing classifiers allow for highly nonlinear class boundaries; SVM and RF models achieve between 62.5% and 91.7% accuracy across age groups — a significant improvement over the baselines of LR and NB, as well as over previous results.
Related Work
These features were obtained with the Linguistic Inquiry and Word Count (LIWC) tool and used in a logistic regression classifier which achieved, on average, 61% accuracy on test data.
Results
We evaluate five classifiers: logistic regression (LR), a multilayer perceptron (MLP), naïve Bayes (NB), a random forest (RF), and a support vector machine (SVM).
Results
Here, naïve Bayes, which assumes conditional independence of the features, and logistic regression, which has a linear decision boundary, are baselines.
logistic regression is mentioned in 4 sentences in this paper.
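A minimal sketch of that five-classifier comparison, assuming scikit-learn and synthetic data in place of the paper's features:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neural_network import MLPClassifier
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=300, n_features=10, random_state=0)
    models = {
        "LR":  LogisticRegression(max_iter=1000),      # linear baseline
        "MLP": MLPClassifier(max_iter=2000, random_state=0),
        "NB":  GaussianNB(),                           # conditional-independence baseline
        "RF":  RandomForestClassifier(random_state=0),
        "SVM": SVC(),                                  # nonlinear (RBF) boundary
    }
    for name, model in models.items():
        print(name, cross_val_score(model, X, y, cv=5).mean())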
Zhu, Jun and Zheng, Xun and Zhang, Bo
Experiments
For multi-class classification, one possible extension is to use a multinomial logistic regression model for categorical variables Y by using topic representations Z as input features.
Experiments
In fact, this is harder than the multinomial Bayesian logistic regression, which can be done via a coordinate strategy (Polson et al., 2012).
Introduction
More specifically, we extend Polson’s method for Bayesian logistic regression (Polson et al., 2012) to the generalized logistic supervised topic models, which are much more challenging.
Logistic Supervised Topic Models
Moreover, the latent variables Z make the inference problem harder than that of Bayesian logistic regression models (Chen et al., 1999; Meyer and Laud, 2002; Polson et al., 2012).
logistic regression is mentioned in 4 sentences in this paper.
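A minimal sketch of the extension described above, assuming scikit-learn and random stand-ins for the topic representations Z; with its default lbfgs solver, LogisticRegression fits a multinomial (softmax) model when the labels have more than two classes.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    # Stand-in for per-document topic representations Z (rows sum to 1),
    # e.g. empirical topic proportions; real Z would come from the topic model.
    Z = rng.dirichlet(np.ones(10), size=200)
    y = rng.integers(0, 4, size=200)   # categorical labels Y

    clf = LogisticRegression(max_iter=1000).fit(Z, y)  # multinomial over 4 classes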
Mayfield, Elijah and Adamson, David and Penstein Rosé, Carolyn
Cue Discovery for Content Selection
Before describing extensions to the baseline logistic regression model, we define notation.
Cue Discovery for Content Selection
We define classifiers as functions f(x → y ∈ Y); in practice, we use logistic regression via LibLINEAR (Fan et al., 2008).
Experimental Results
We compare our methods against baselines including a majority baseline, a baseline logistic regression classifier with L2-regularized features, and two common ensemble methods, AdaBoost (Freund and Schapire, 1996) and bagging (Breiman, 1996) with logistic regression base classifiers.
logistic regression is mentioned in 3 sentences in this paper.
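A minimal sketch of those baselines, assuming scikit-learn (whose liblinear solver wraps LibLINEAR); the ensemble keyword is estimator in recent scikit-learn releases (base_estimator in older ones), and the toy data is invented.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=300, n_features=15, random_state=0)  # toy data

    # L2-regularized logistic regression via the LIBLINEAR solver
    base = LogisticRegression(penalty="l2", solver="liblinear")

    bagged  = BaggingClassifier(estimator=base, n_estimators=10).fit(X, y)
    boosted = AdaBoostClassifier(estimator=base, n_estimators=10).fit(X, y)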