Index of papers in Proc. ACL 2014 that mention
  • overfitting
Pei, Wenzhe and Ge, Tao and Chang, Baobao
Abstract
Furthermore, a new tensor factorization approach is proposed to speed up the model and avoid overfitting.
Conclusion
Moreover, we propose a tensor factorization approach that effectively improves the model efficiency and avoids the risk of overfitting.
Introduction
by the design of features, and the number of features could be so large that the resulting models are too large for practical use and prone to overfitting on the training corpus.
Introduction
Moreover, we propose a tensor factorization approach that effectively improves the model efficiency and prevents overfitting.
Introduction
Not only does this approach improve the efficiency of our model, but it also avoids the risk of overfitting.
Max-Margin Tensor Neural Network
Moreover, the additional tensor could bring millions of parameters into the model, which puts it at risk of overfitting.
Max-Margin Tensor Neural Network
As long as r is small enough, the factorized tensor operation is much faster than the unfactorized one and the number of free parameters is much smaller, which prevents the model from overfitting.
Related Work
However, given the small size of their tensor matrix, they do not face the high time cost and overfitting problems that we do in modeling a sequence labeling task like Chinese word segmentation.
Related Work
That’s why we propose tensor factorization to decrease computational cost and avoid overfitting.
Related Work
By introducing tensor factorization into the neural network model for sequence labeling tasks, model training and inference are sped up and overfitting is prevented.
overfitting is mentioned in 10 sentences in this paper.
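The factorization the excerpts describe replaces each d×d tensor slice with the product of two rank-r factors, so a bilinear form costs O(dr) per slice instead of O(d²) and carries 2dr parameters instead of d². A minimal NumPy sketch, with illustrative sizes and names (d, r, k, P, Q) that are not taken from the paper's code:

```python
import numpy as np

d, r, k = 100, 5, 50  # hidden size, factorization rank, number of slices (illustrative)
rng = np.random.default_rng(0)

# Each d x d slice V[i] is replaced by the low-rank product P[i] @ Q[i].
P = rng.normal(scale=0.01, size=(k, d, r))
Q = rng.normal(scale=0.01, size=(k, r, d))

def factorized_bilinear(h):
    """Compute the k bilinear forms h^T (P[i] @ Q[i]) h without ever
    materializing the d x d slices: O(d*r) per slice instead of O(d^2),
    with 2*d*r free parameters per slice instead of d^2."""
    left = np.einsum('kdr,d->kr', P, h)   # h^T P[i], shape (k, r)
    right = np.einsum('krd,d->kr', Q, h)  # Q[i] h,   shape (k, r)
    return (left * right).sum(axis=1)     # shape (k,)

scores = factorized_bilinear(rng.normal(size=d))
```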
Wang, Chang and Fan, James
Experiments
the approaches that completely depend on the labeled data are likely to run into overfitting.
Experiments
Linear SVM performed better than the other two, since the large-margin constraint together with the linear model constraint can alleviate overfitting.
Introduction
When we build a naive model to detect relations, the model tends to overfit the labeled data.
Relation Extraction with Manifold Models
Integrating unlabeled data can help alleviate overfitting when the labeled data is insufficient.
Relation Extraction with Manifold Models
The second term bounds the mapping function f and prevents overfitting.
Relation Extraction with Manifold Models
• The algorithm exploits unlabeled data, which helps prevent overfitting.
overfitting is mentioned in 6 sentences in this paper.
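The excerpts describe an objective with a labeled-data loss plus a second term that bounds f, fitted over labeled and unlabeled points. A common concrete instance of this idea is Laplacian-regularized least squares, sketched below for a linear model; the names gamma_a, gamma_i, and sigma are assumptions, not the paper's notation:

```python
import numpy as np

def manifold_regularized_fit(X_lab, y_lab, X_unlab, gamma_a=0.01, gamma_i=0.1, sigma=1.0):
    """Linear Laplacian-regularized least squares. The gamma_a term bounds
    ||w|| (the role of the 'second term' in the excerpt); the gamma_i term
    asks f to vary smoothly over a graph built from labeled AND unlabeled
    points, which is how the unlabeled data helps prevent overfitting."""
    X = np.vstack([X_lab, X_unlab])
    # RBF similarity graph over all points and its Laplacian L = D - W.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq / (2 * sigma ** 2))
    L = np.diag(W.sum(axis=1)) - W
    d = X.shape[1]
    A = X_lab.T @ X_lab + gamma_a * np.eye(d) + gamma_i * (X.T @ L @ X)
    return np.linalg.solve(A, X_lab.T @ y_lab)  # weight vector w
```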
Wang, William Yang and Hua, Zhenhao
Copula Models for Text Regression
On the other hand, once such assumptions are removed, another problem arises: they might be prone to errors and suffer from overfitting.
Copula Models for Text Regression
Therefore, coping with the tradeoff between expressiveness and overfitting seems rather important in statistical approaches that capture stochastic dependency.
Copula Models for Text Regression
This is of crucial importance for modeling text data: instead of using the classic bag-of-words representation with raw counts, we are now working with uniform marginal CDFs, which helps cope with the overfitting caused by noise and data sparsity.
Discussions
The second issue concerns overfitting.
Experiments
On the pre-2009 dataset, we see that the linear regression and linear SVM perform reasonably well, but the Gaussian kernel SVM performs less well, probably due to overfitting.
overfitting is mentioned in 5 sentences in this paper.
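The transformation mentioned in the excerpts, replacing raw counts with uniform marginal CDFs, is the probability integral transform used by copula models. A minimal rank-based sketch in pure NumPy; the helper name is hypothetical and ties are handled crudely:

```python
import numpy as np

def empirical_cdf_transform(X):
    """Map each feature column of X (n_samples x n_features) to its
    empirical CDF values in (0, 1), so every marginal is roughly uniform
    and the copula only has to model the dependency structure. Ties are
    broken arbitrarily here; a real implementation would average them."""
    n = X.shape[0]
    ranks = X.argsort(axis=0).argsort(axis=0) + 1  # ranks 1..n per column
    return ranks / (n + 1.0)  # divide by n+1 to stay strictly inside (0, 1)
```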
Zhang, Hui and Chiang, David
Introduction
In speech and language processing, smoothing is essential to reduce overfitting, and Kneser-Ney (KN) smoothing (Kneser and Ney, 1995; Chen and Goodman, 1999) has consistently proven to be among the best-performing and most widely used methods.
Word Alignment
It also contains most of the model’s parameters and is where overfitting occurs most.
Word Alignment
However, MLE is prone to overfitting, one symptom of which is the “garbage collection” phenomenon where a rare English word is wrongly aligned to many French words.
Word Alignment
To reduce overfitting, we use expected KN smoothing during the M step.
overfitting is mentioned in 4 sentences in this paper.
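The last excerpt smooths the fractional counts from the E step instead of taking the raw MLE. As a simplified stand-in, the sketch below applies absolute discounting with a uniform backoff to expected counts; real expected Kneser-Ney redistributes mass through a lower-order KN distribution and handles counts smaller than the discount, and all names here are illustrative:

```python
from collections import defaultdict

def discounted_mstep(expected_counts, D=0.75):
    """M-step on fractional counts with absolute discounting and a uniform
    backoff. expected_counts maps (e_word, f_word) -> expected count from
    the E step; returns a smoothed translation table t(f | e). The result
    is a proper distribution only when every count is >= D, which is one
    reason true expected KN smoothing is more involved."""
    totals, types = defaultdict(float), defaultdict(int)
    for (e, f), c in expected_counts.items():
        totals[e] += c
        types[e] += 1
    t = {}
    for (e, f), c in expected_counts.items():
        reserved = D * types[e] / totals[e]  # probability mass freed by discounting
        backoff = 1.0 / types[e]             # uniform lower-order distribution (illustrative)
        t[(e, f)] = max(c - D, 0.0) / totals[e] + reserved * backoff
    return t
```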
Tamura, Akihiro and Watanabe, Taro and Sumita, Eiichiro
Introduction
This constraint prevents each model from overfitting to a particular direction and leads to global optimization across alignment directions.
Training
In addition, an ℓ2 regularization term is added to the objective to prevent the model from overfitting the training data.
Training
The proposed constraint penalizes overfitting to a particular direction and enables the two directional models to optimize globally across alignment directions.
overfitting is mentioned in 3 sentences in this paper.
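The objective the excerpts describe combines two directional losses, an ℓ2 penalty per model, and an agreement constraint between the directions. A schematic sketch with hypothetical names (loss_ef, post_ef, alpha), not the paper's notation:

```python
import numpy as np

def agreement_penalty(post_ef, post_fe):
    """Squared difference between the two directions' posterior alignment
    matrices (one transposed so both index the same word links)."""
    return float(((post_ef - post_fe.T) ** 2).sum())

def joint_objective(loss_ef, loss_fe, w_ef, w_fe, post_ef, post_fe,
                    lam=1e-3, alpha=1.0):
    """Each direction's training loss, plus lam * ||w||^2 per model (the
    l2 term in the excerpt) and an agreement term that penalizes
    overfitting to a single alignment direction."""
    l2 = lam * (np.dot(w_ef, w_ef) + np.dot(w_fe, w_fe))
    return loss_ef + loss_fe + l2 + alpha * agreement_penalty(post_ef, post_fe)
```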