Index of papers in Proc. ACL 2009 that mention
  • learning algorithm
Kruengkrai, Canasai and Uchimoto, Kiyotaka and Kazama, Jun'ichi and Wang, Yiou and Torisawa, Kentaro and Isahara, Hitoshi
Background
The goal of our learning algorithm is to learn a mapping from inputs (unsegmented sentences) x ∈ X to outputs (segmented paths) y ∈ Y.
Conclusion
The second is a discriminative online learning algorithm based on MIRA that enables us to incorporate arbitrary features into our hybrid model.
Related work
In this section, we discuss related approaches based on several aspects of learning algorithms and search space representation methods.
Related work
Our approach overcomes the limitation of the original hybrid model by using a discriminative online learning algorithm for training.
Training method
Therefore, we require a learning algorithm that can efficiently handle large and complex lattice structures.
Training method
Algorithm 1 Generic Online Learning Algorithm
Training method
Algorithm 1 outlines the generic online learning algorithm (McDonald, 2006) used in our framework.
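The generic online learning loop cited above (McDonald, 2006) can be sketched as follows. This is a minimal illustration using a simple perceptron-style update; the feature map and the update rule are assumptions for the example, not the paper's MIRA update or its lattice inference.

```python
# Generic online learning sketch: iterate over examples, predict with the
# current weights, and update the weight vector when the prediction is wrong.
# The perceptron-style update below is an illustrative stand-in for MIRA.

def train_online(examples, feature_fn, labels, epochs=3):
    """examples: list of (x, y_gold); feature_fn(x, y) -> dict of feature values."""
    weights = {}

    def score(x, y):
        return sum(weights.get(f, 0.0) * v for f, v in feature_fn(x, y).items())

    for _ in range(epochs):
        for x, y_gold in examples:
            # Inference step: pick the highest-scoring output.
            y_hat = max(labels, key=lambda y: score(x, y))
            if y_hat != y_gold:
                # Move weights toward the gold output, away from the prediction.
                for f, v in feature_fn(x, y_gold).items():
                    weights[f] = weights.get(f, 0.0) + v
                for f, v in feature_fn(x, y_hat).items():
                    weights[f] = weights.get(f, 0.0) - v
    return weights
```

In the paper's setting the inference step would search a word lattice for the best segmented path rather than enumerate a small label set.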
learning algorithm is mentioned in 8 sentences in this paper.
Lin, Dekang and Wu, Xiaoyun
Introduction
Over the past decade, supervised learning algorithms have gained widespread acceptance in natural language processing (NLP).
Introduction
The learning algorithm then optimizes a regularized, convex objective function that is expressed in terms of these features.
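A regularized, convex objective of the kind described above can be sketched as follows. The specific choice of logistic loss with an L2 regularizer is an illustrative assumption; the paper does not commit to this particular loss.

```python
# Sketch of an L2-regularized logistic objective over sparse feature vectors.
# Both terms are convex in the weights, so the sum is convex.
import math

def objective(weights, data, lam=0.1):
    """data: list of (features: dict, label in {-1, +1})."""
    loss = 0.0
    for feats, label in data:
        margin = label * sum(weights.get(f, 0.0) * v for f, v in feats.items())
        loss += math.log(1.0 + math.exp(-margin))  # logistic loss
    loss += lam * sum(w * w for w in weights.values())  # L2 regularizer
    return loss
```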
Introduction
However, supervised learning algorithms can typically identify useful clusters and assign proper weights to them, effectively adapting the clusters to the domain.
learning algorithm is mentioned in 4 sentences in this paper.
Huang, Fei and Yates, Alexander
Smoothing Natural Language Sequences
Formally, we define the smoothing task as follows: let D = {(x, z) | x is a word sequence, z is a label sequence} be a labeled dataset of word sequences, and let M be a machine learning algorithm that will learn a function f to predict the correct labels.
Smoothing Natural Language Sequences
As an example, consider the string “Researchers test reformulated gasolines on newer engines.” In a common dataset for NP chunking, the word “reformulated” never appears in the training data, but appears four times in the test set as part of the NP “reformulated gasolines.” Thus, a learning algorithm supplied with word-level features would
Smoothing Natural Language Sequences
In particular, we seek to represent each word by a distribution over its contexts, and then provide the learning algorithm with features computed from this distribution.
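The idea of representing each word by a distribution over its contexts can be sketched as follows. The one-word left/right context window and the feature naming are assumptions for the example; the paper's actual context definition may differ.

```python
# Build, for each word, a normalized distribution over its observed contexts
# (here: immediate left and right neighbors), from which smoothed features
# can be computed for a downstream learner.
from collections import Counter, defaultdict

def context_distributions(sentences):
    """sentences: list of token lists -> {word: {context: probability}}."""
    contexts = defaultdict(Counter)
    for sent in sentences:
        for i, word in enumerate(sent):
            if i > 0:
                contexts[word]["L=" + sent[i - 1]] += 1
            if i + 1 < len(sent):
                contexts[word]["R=" + sent[i + 1]] += 1
    return {
        w: {c: n / sum(cnt.values()) for c, n in cnt.items()}
        for w, cnt in contexts.items()
    }
```

A rare word like "reformulated" then receives features from its context distribution, which can resemble those of better-attested words seen in similar contexts.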
learning algorithm is mentioned in 3 sentences in this paper.
Yang, Qiang and Chen, Yuqiang and Xue, Gui-Rong and Dai, Wenyuan and Yu, Yong
Related Works
However, because the labeled Chinese Web pages are still not sufficient, we often find it difficult to achieve high accuracy by applying traditional machine learning algorithms to the Chinese Web pages directly.
Related Works
Most learning algorithms for dealing with cross-language heterogeneous data require a translator to convert the data to the same feature space.
Related Works
For those data that are in different feature spaces where no translator is available, Davis and Domingos (2008) proposed a Markov-logic-based transfer learning algorithm, called deep transfer, for transferring knowledge between biological domains and Web domains.
learning algorithm is mentioned in 3 sentences in this paper.