Index of papers in Proc. ACL 2009 that mention
  • machine learning
Yoshikawa, Katsumasa and Riedel, Sebastian and Asahara, Masayuki and Matsumoto, Yuji
Abstract
By evaluating our model on the TempEval data we show that this approach leads to about 2% higher accuracy for all three types of relations, and to the best results for the task when compared to those of other machine-learning-based systems.
Introduction
With the introduction of the TimeBank corpus (Pustejovsky et al., 2003), a set of documents annotated with temporal information, it became possible to apply machine learning to temporal ordering (Boguraev and Ando, 2005; Mani et al., 2006). These tasks have been regarded as essential for complete document understanding and are useful for a wide range of NLP applications such as question answering and machine translation.
Introduction
First, it allows us to use off-the-shelf machine learning software that, up until now, has been mostly focused on the case of local classifiers.
Introduction
Hence, in our future work we can focus entirely on temporal relations, as opposed to inference or learning techniques for machine learning.
Markov Logic
It has long been clear that local classification alone cannot adequately solve all prediction problems we encounter in practice. This observation motivated a field within machine learning, often referred to as Statistical Relational Learning (SRL), which focuses on the incorporation of global correlations that hold between statistical variables (Getoor and Taskar, 2007).
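The global-correlation idea behind SRL can be illustrated with a toy example (not Markov Logic itself): local classifiers score three pairwise temporal orderings independently, and a brute-force joint search keeps only assignments that satisfy transitivity. All events, scores, and labels here are invented for illustration.

```python
from itertools import product

# Hypothetical local-classifier scores for pairwise orderings of
# three events a, b, c. The local argmax for (a, c) contradicts
# what transitivity implies from (a, b) and (b, c).
local_scores = {
    ("a", "b"): {"before": 0.9, "after": 0.1},
    ("b", "c"): {"before": 0.8, "after": 0.2},
    ("a", "c"): {"before": 0.4, "after": 0.6},  # locally wrong
}

def consistent(assignment):
    """Transitivity: before(a,b) and before(b,c) force before(a,c),
    and symmetrically for 'after'."""
    if assignment[("a", "b")] == "before" and assignment[("b", "c")] == "before":
        return assignment[("a", "c")] == "before"
    if assignment[("a", "b")] == "after" and assignment[("b", "c")] == "after":
        return assignment[("a", "c")] == "after"
    return True

def best_joint():
    """Exhaustively pick the highest-scoring transitive assignment."""
    pairs = list(local_scores)
    best, best_score = None, float("-inf")
    for labels in product(["before", "after"], repeat=len(pairs)):
        assignment = dict(zip(pairs, labels))
        if not consistent(assignment):
            continue
        score = sum(local_scores[p][l] for p, l in assignment.items())
        if score > best_score:
            best, best_score = assignment, score
    return best
```

The joint search overrides the locally preferred but globally inconsistent "after" label for (a, c); real SRL systems achieve the same effect with weighted logical formulas rather than enumeration.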
Results
Note that all but the strict scores of Task C are achieved by WVALI (Puscasu, 2007), a hybrid system that combines machine learning and hand-coded rules.
Temporal Relation Identification
With the introduction of the TimeBank corpus (Pustejovsky et al., 2003), machine learning approaches to temporal ordering became possible.
Temporal Relation Identification
Here one could argue that “the introduction of the TimeBank” may OVERLAP with “Machine learning becoming possible” because “introduction” can be understood as a process that is not finished with the release of the data but also includes later advertisements and announcements.
machine learning is mentioned in 8 sentences in this paper.
Topics mentioned in this paper:
Beigman, Eyal and Beigman Klebanov, Beata
Introduction
For example, Osborne (2002) evaluates noise tolerance of shallow parsers, with random classification noise taken to be “crudely approximating annotation errors.” It has been shown, both theoretically and empirically, that this type of noise is tolerated well by the commonly used machine learning algorithms (Cohen, 1997; Blum et al., 1996; Osborne, 2002; Reidsma and Carletta, 2008).
Introduction
When training data comes from one annotator and test data from another, the first annotator’s biases are sometimes systematic enough for a machine learner to pick them up, with detrimental results for the algorithm’s performance on the test data.
Introduction
The different biases might not amount to much in the small doubly annotated subset, resulting in acceptable inter-annotator agreement; yet when enacted throughout a large number of instances they can be detrimental from a machine learner’s perspective.
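The contrast this paper draws between random classification noise and systematic annotator bias can be sketched with a toy majority-vote learner (the data and the learner are invented for illustration): uniform random flips leave the majority label per feature intact, while a consistent bias can flip it.

```python
from collections import Counter

def train_majority(examples):
    """Majority label per discrete feature value: a toy stand-in for
    the kind of learner that tolerates uniform random label noise."""
    buckets = {}
    for feature, label in examples:
        buckets.setdefault(feature, Counter())[label] += 1
    return {f: c.most_common(1)[0][0] for f, c in buckets.items()}

# Clean rule: "a" -> 0, "b" -> 1; flip 3 of 10 labels per feature
# to simulate random classification noise.
noisy = [("a", 0)] * 7 + [("a", 1)] * 3 + [("b", 1)] * 7 + [("b", 0)] * 3

# Systematic bias: a second annotator consistently labels "b" as 0,
# so the flips concentrate on one feature instead of spreading out.
biased = [("a", 0)] * 10 + [("b", 0)] * 6 + [("b", 1)] * 4
```

Under random noise the learner still recovers the clean rule; under the concentrated bias the majority for "b" is overturned, mirroring the detrimental effect described above.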
machine learning is mentioned in 7 sentences in this paper.
Topics mentioned in this paper:
Li, Tao and Zhang, Yi and Sindhwani, Vikas
Experiments
In particular, the use of SVMs in (Pang et al., 2002) initially sparked interest in using machine learning methods for sentiment classification.
Introduction
These methodologies are likely to be rooted in natural language processing and machine learning techniques.
Introduction
Automatically classifying the sentiment expressed in a blog around selected topics of interest is a canonical machine learning task in this discussion.
Introduction
However, the treatment of such dictionaries as forms of prior knowledge that can be incorporated in machine learning models is a relatively less explored topic; even less so in conjunction with semi-supervised models that attempt to utilize unlabeled data.
Related Work
In this section, we briskly cover related work to position our contributions appropriately in the sentiment analysis and machine learning literature.
Related Work
In this regard, our model brings two interrelated but distinct themes from machine learning to bear on this problem: semi-supervised learning and learning from labeled features.
Related Work
Most work in machine learning literature on utilizing labeled features has focused on using them to generate weakly labeled examples that are then used for standard supervised learning: (Schapire et al., 2002) propose one such framework for boosting logistic regression; (Wu and Srihari, 2004) build a modified SVM and (Liu et al., 2004) use a combination of clustering and EM based methods to instantiate similar frameworks.
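A minimal sketch of the "labeled features generate weakly labeled examples" pipeline described above, assuming a hypothetical sentiment task with invented seed-word lists (this illustrates the general scheme, not any of the cited systems):

```python
from collections import Counter, defaultdict

# Hypothetical labeled features: words tied directly to a class.
POS_WORDS = {"great", "excellent", "love"}
NEG_WORDS = {"terrible", "awful", "hate"}

def weak_label(doc):
    """Assign a pseudo-label from labeled features, or None if ambiguous."""
    words = set(doc.lower().split())
    pos, neg = len(words & POS_WORDS), len(words & NEG_WORDS)
    if pos > neg:
        return "pos"
    if neg > pos:
        return "neg"
    return None

def train_counts(docs):
    """Standard supervised step: word-count model on the weakly
    labeled subset (a stand-in for boosting, SVMs, or EM)."""
    counts = defaultdict(Counter)
    for doc in docs:
        label = weak_label(doc)
        if label is not None:
            counts[label].update(doc.lower().split())
    return counts

def classify(counts, doc):
    """Score a new document by relative word frequency per class."""
    def score(label):
        total = sum(counts[label].values()) or 1
        return sum(counts[label][w] / total for w in doc.lower().split())
    return max(counts, key=score)
```

Note that the trained model can generalize beyond the seed words themselves, since co-occurring words (e.g., "film" appearing mostly in weakly negative documents) also acquire weight.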
machine learning is mentioned in 7 sentences in this paper.
Topics mentioned in this paper:
Ceylan, Hakan and Kim, Yookyung
Introduction
We show that the data generated this way is highly reliable and can be used to train a machine learning algorithm.
Language Identification
We then combine all three models in a machine learning framework using a novel approach.
Language Identification
This way, we built a robust machine learning framework at a very low cost and without any human labour.
Language Identification
We used the Weka Machine Learning Toolkit (Witten and Frank, 2005) to implement our DT classifier.
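Weka is a Java toolkit, so as a language-agnostic sketch of one kind of model such a framework might combine, here is a minimal pure-Python character-n-gram language identifier (the languages and training samples are invented; the paper's actual models and combination method may differ):

```python
from collections import Counter

def char_ngrams(text, n=3):
    """Character n-gram profile of a text, padded with spaces."""
    text = f" {text.lower()} "
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

class NgramLanguageID:
    """Rank languages by overlap between a text's character n-gram
    profile and per-language profiles built from sample text."""

    def __init__(self, n=3):
        self.n = n
        self.profiles = {}

    def train(self, lang, sample):
        self.profiles[lang] = char_ngrams(sample, self.n)

    def classify(self, text):
        grams = char_ngrams(text, self.n)

        def score(lang):
            prof = self.profiles[lang]
            return sum(min(c, prof[g]) for g, c in grams.items())

        return max(self.profiles, key=score)
```

In a Weka-style setup, scores from several such models would become feature values for a decision-tree classifier rather than being compared directly.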
machine learning is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Ohno, Tomohiro and Murata, Masaki and Matsubara, Shigeki
Abstract
Our method appropriately inserts linefeeds into a sentence by machine learning, based on information such as dependencies, clause boundaries, pauses and line length.
Conclusion
Our method can insert linefeeds so that captions become easy to read, by using machine learning techniques on features such as morphemes, dependencies, clause boundaries, pauses and line length.
Introduction
In our method, the linefeeds are inserted only at the boundaries between bunsetsus, and the linefeeds are appropriately inserted into a sentence by machine learning, based on information such as morphemes, dependencies, clause boundaries, pauses and line length.
Preliminary Analysis about Linefeed Points
In our research, the points at which linefeeds should be inserted are detected by using machine learning.
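The decision at each candidate boundary can be sketched as a feature-weighted score compared against a threshold; the weights and threshold below are hypothetical stand-ins for what the paper's learner would estimate from data (the feature names paraphrase those listed in the snippets above):

```python
# Hypothetical weights, not estimated from the paper's data: a
# learned model would assign these from annotated captions.
FEATURE_WEIGHTS = {
    "clause_boundary": 2.0,   # boundary coincides with a clause end
    "pause": 1.5,             # speaker pauses at this boundary
    "dependency_ends": 1.0,   # no dependency arc crosses the boundary
    "line_too_long": 3.0,     # current line exceeds the length limit
}

def insert_linefeed(features, threshold=2.5):
    """Decide whether to break the line at this bunsetsu boundary."""
    score = sum(FEATURE_WEIGHTS[f] for f in features if f in FEATURE_WEIGHTS)
    return score >= threshold
```

A clause boundary combined with a pause clears the threshold, while a lone dependency cue does not, which matches the intuition that captions should break at natural linguistic junctures.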
machine learning is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Yang, Qiang and Chen, Yuqiang and Xue, Gui-Rong and Dai, Wenyuan and Yu, Yong
Introduction
Traditional machine learning relies on the availability of a large amount of data to train a model, which is then applied to test data in the same feature space.
Introduction
Various machine learning strategies have been proposed to address this problem, including semi-supervised learning (Zhu, 2007), domain adaptation (Wu and Dietterich, 2004; Blitzer et al., 2006; Blitzer et al., 2007; Arnold et al., 2007; Chan and Ng, 2007; Daume, 2007; Jiang and Zhai, 2007; Reichart
Introduction
To consider how heterogeneous transfer learning relates to other types of learning, Figure 1 presents an intuitive illustration of four learning strategies, including traditional machine learning, transfer learning across different distributions, multi-view learning and heterogeneous transfer learning.
Related Works
However, because the labeled Chinese Web pages are still not sufficient, we often find it difficult to achieve high accuracy by applying traditional machine learning algorithms to the Chinese Web pages directly.
machine learning is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Zhang, Yi and Wang, Rui
Introduction
In combination with machine learning methods, several statistical dependency parsing models have reached comparably high parsing accuracy (McDonald et al., 2005b; Nivre et al., 2007b).
Parser Domain Adaptation
In recent years, two statistical dependency parsing systems, MaltParser (Nivre et al., 2007b) and MSTParser (McDonald et al., 2005b), representing different threads of research in data-driven machine learning approaches, have gained wide recognition for their state-of-the-art performance in open competitions such as the CoNLL Shared Tasks.
Parser Domain Adaptation
Despite the differences between their approaches, both systems rely heavily on machine learning methods to estimate the parsing model from an annotated corpus used as a training set.
Parser Domain Adaptation
Most of these approaches focused on the machine learning perspective instead of the linguistic knowledge embraced in the parsers.
machine learning is mentioned in 4 sentences in this paper.
Topics mentioned in this paper: