Index of papers in Proc. ACL 2008 that mention
  • natural language
Arnold, Andrew and Nallapati, Ramesh and Cohen, William W.
Abstract
We present a novel hierarchical prior structure for supervised transfer learning in named entity recognition, motivated by the common structure of feature spaces for this task across natural language data sets.
Introduction
In particular, we develop a novel prior for named entity recognition that exploits the hierarchical feature space often found in natural language domains (§1.2) and allows for the transfer of information from labeled datasets in other domains (§1.3).
Introduction
Representing feature spaces with this kind of tree, besides often coinciding with the explicit language used by common natural language toolkits (Cohen, 2004), has the added benefit of allowing a model to easily back-off, or smooth, to decreasing levels of specificity.
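A minimal sketch of the back-off idea described in this excerpt, assuming a hypothetical feature tree encoded with dot-separated names (the feature names and weights are illustrative, not taken from the paper):

```python
# Sketch: back-off over a hierarchical feature space encoded as
# dot-separated feature names (illustrative; not the paper's code).

# Hypothetical per-feature weights learned at several levels of specificity.
weights = {
    "token": 0.1,
    "token.capitalized": 0.8,
    "token.capitalized.initCap": 1.3,
}

def backoff_weight(feature: str, weights: dict) -> float:
    """Return the weight of the most specific ancestor of `feature`
    for which we have an estimate, backing off one level at a time."""
    parts = feature.split(".")
    while parts:
        name = ".".join(parts)
        if name in weights:
            return weights[name]
        parts.pop()          # back off to the parent node in the tree
    return 0.0               # nothing known at any level

# A feature never seen in training backs off to its nearest known ancestor:
print(backoff_weight("token.capitalized.initCap.followedByComma", weights))  # 1.3
```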
Investigation
We used a standard natural language toolkit (Cohen, 2004) to compute tens of thousands of binary features on each of these tokens, encoding such information as capitalization patterns and contextual information from surrounding words.
Models considered: Basic Conditional Random Fields
In this work, we build on Conditional Random Fields (CRFs) (Lafferty et al., 2001), which are now among the most widely used sequential models for many natural language processing tasks.
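As a rough illustration of how a linear-chain CRF is trained on per-token features, here is a sketch using the third-party sklearn-crfsuite package (not the authors' implementation; the data and features are toy examples):

```python
# Sketch of training a linear-chain CRF for NER with sklearn-crfsuite
# (a third-party package, not the authors' code; toy data below).
import sklearn_crfsuite

# Each sentence is a list of per-token feature dicts; labels use BIO tags.
X_train = [[{"word.lower": "arnold", "word.isTitle": True},
            {"word.lower": "visited", "word.isTitle": False},
            {"word.lower": "pittsburgh", "word.isTitle": True}]]
y_train = [["B-PER", "O", "B-LOC"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1,
                           max_iterations=100)
crf.fit(X_train, y_train)
print(crf.predict(X_train))
```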
natural language is mentioned in 5 sentences in this paper.
Kaisser, Michael and Hearst, Marti A. and Lowe, John B.
Abstract
These findings have important implications for search results presentation, especially for natural language queries.
Study Goals
There is a disproportionately large number of natural language queries in this set compared with query sets from typical keyword search engines.
Study Goals
Such queries are often complete questions and are sometimes grammatical fragments (e.g., “date of next US election”) and so are likely to be amenable to interesting natural language processing algorithms, which is an area of in-
Study Goals
This is substantially longer than the average length of a web search query, which was approximately 2.8 terms in 2005 (Jansen et al., 2007); the difference is due to the presence of natural language queries.
natural language is mentioned in 4 sentences in this paper.
Kaufmann, Tobias and Pfister, Beat
Conclusions and Outlook
It is a well-known fact that natural language is highly ambiguous: a correct and seemingly unambiguous sentence may have an enormous number of readings.
Introduction
It has repeatedly been pointed out that N-grams model natural language only superficially: an Nth-order Markov chain is a very crude model of the complex dependencies between words in an utterance.
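To make the "crude model" point concrete, a first-order (bigram) model can be built in a few lines; it conditions each word only on its immediate predecessor, so any dependency spanning a relative clause is invisible to it (toy corpus, purely illustrative):

```python
# Minimal bigram language model: each word is predicted from only the
# previous word, which is why long-range dependencies are invisible to it.
from collections import Counter, defaultdict

corpus = "the dog barks . the dog that the cat saw barks .".split()

bigrams = defaultdict(Counter)
for prev, cur in zip(corpus, corpus[1:]):
    bigrams[prev][cur] += 1

def p(cur: str, prev: str) -> float:
    total = sum(bigrams[prev].values())
    return bigrams[prev][cur] / total if total else 0.0

# The model cannot link "dog ... barks" across the relative clause:
print(p("barks", "dog"))  # sees only the immediately preceding word
print(p("barks", "saw"))
```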
Introduction
More accurate statistical models of natural language have mainly been developed in the field of statistical parsing, e.g.
Introduction
On the other hand, they are not suited to reliably decide on the grammaticality of a given phrase, as they do not accurately model the linguistic constraints inherent in natural language.
natural language is mentioned in 4 sentences in this paper.
HaCohen-Kerner, Yaakov and Kass, Ariel and Peretz, Ariel
Abbreviation Disambiguation
This hypothesis states that natural languages tend to use consistent spoken and written styles.
Abbreviation Disambiguation
This hypothesis assumes that in natural languages, there is a tendency for an author to be consistent within the same discourse or article.
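A minimal sketch of how such a consistency assumption can be operationalized for abbreviation disambiguation, assuming a hypothetical baseline disambiguator (this is not the paper's system):

```python
# Sketch of the "one sense per discourse" idea for abbreviations:
# within a single article, commit to the expansion that a baseline
# disambiguator chose most often (illustrative, not the paper's system).
from collections import Counter

def disambiguate_article(occurrences, baseline):
    """occurrences: contexts for one abbreviation in one article;
    baseline: function mapping a context to a candidate expansion."""
    votes = Counter(baseline(ctx) for ctx in occurrences)
    winner, _ = votes.most_common(1)[0]
    return [winner] * len(occurrences)   # enforce consistency in the article

# Hypothetical usage: a noisy per-context baseline is overridden by the
# article-level majority vote.
contexts = ["ct scan of the chest", "ct findings were normal", "the ct was clear"]
print(disambiguate_article(
    contexts,
    lambda ctx: "computed tomography" if "scan" in ctx or "findings" in ctx
    else "Connecticut"))
```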
Introduction
The proposed system preserves its portability across languages and domains because it does not use any natural language processing (NLP) subsystem (e.g.
natural language is mentioned in 3 sentences in this paper.
Lee, Cheongjae and Jung, Sangkeun and Lee, Gary Geunbae
Example-based Dialog Modeling
The EBDM framework is a simple and powerful approach to rapidly develop natural language interfaces for multi-domain dialog processing (Lee et al., 2006b).
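As a toy illustration of the example-based idea, a dialog step can be approximated as a nearest-example lookup over stored utterance/response pairs (this simplistic word-overlap matcher merely stands in for the EBDM framework's retrieval and does not reproduce it):

```python
# Sketch of an example-based dialog step: find the stored example whose
# utterance best overlaps the user's input and reuse its response
# (a toy nearest-example lookup, not the EBDM framework itself).
examples = [
    ("what movies are playing tonight", "Here are tonight's showtimes."),
    ("book a table for two", "For what time would you like the table?"),
]

def respond(user_utterance: str) -> str:
    words = set(user_utterance.lower().split())
    def overlap(example):
        return len(words & set(example[0].split()))
    best_utterance, best_response = max(examples, key=overlap)
    return best_response

print(respond("are any movies playing tonight"))
```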
Introduction
Since the performance of human language technologies such as Automatic Speech Recognition (ASR) and Natural Language Understanding (NLU) has improved, it has become possible to develop spoken dialog systems for many different application domains.
Introduction
Throughout this paper, we use the term natural language to include both spoken language and written language.
natural language is mentioned in 3 sentences in this paper.
Mairesse, François and Walker, Marilyn
Conclusion
We present a new method for generating linguistic variation projecting multiple personality traits continuously, by combining and extending previous research in statistical natural language generation (Paiva and Evans, 2005; Rambow et al., 2001; Isard et al., 2006; Mairesse and Walker, 2007).
Introduction
Over the last 20 years, statistical language models (SLMs) have been used successfully in many tasks in natural language processing, and the data available for modeling has steadily grown (Lapata and Keller, 2005).
Introduction
Langkilde and Knight (1998) first applied SLMs to statistical natural language generation (SNLG), showing that high quality paraphrases can be generated from an underspecified representation of meaning, by first applying a very underconstrained, rule-based overgeneration phase, whose outputs are then ranked by an SLM scoring phase.
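A toy sketch of that overgenerate-and-rank pipeline: a loose rule-based stage enumerates candidate realizations and a scoring stage picks the best. The "scorer" here is a stand-in heuristic, not a real statistical language model:

```python
# Toy sketch of the overgenerate-and-rank SNLG pipeline: a loose
# rule-based stage emits many candidate realizations, and a scoring
# stage picks the best (the scorer here is a stand-in, not an SLM).
import itertools

subjects = ["the committee", "committee members"]
verbs = ["approved", "gave approval to"]
objects = ["the proposal"]

candidates = [" ".join(p)
              for p in itertools.product(subjects, verbs, objects)]

def lm_score(sentence: str) -> float:
    # Stand-in for an SLM: prefer shorter realizations as a crude proxy.
    return -len(sentence.split())

best = max(candidates, key=lm_score)
print(best)  # the highest-scoring realization under the (toy) scorer
```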
natural language is mentioned in 3 sentences in this paper.
Szarvas, György
Introduction
In various natural language processing tasks, relevant statements appearing in a speculative context are treated as false positives.
Introduction
(Hyland, 1994)), speculative language from a Natural Language Processing perspective has only been studied in the past few years.
Methods
What makes this iterative method efficient is that, as we said earlier, hedging is expressed via keywords in natural language texts, and often several keywords are present in a single sentence.
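A minimal sketch of that co-occurrence intuition: sentences flagged by known hedge cues are mined for further candidate cues to feed the next iteration (the seed list and sentences are illustrative, not the paper's resources):

```python
# Sketch of the keyword co-occurrence idea behind the iterative method:
# sentences flagged by known hedge cues often contain further candidate
# cues (illustrative seed list and data, not the paper's resources).
from collections import Counter

seed_cues = {"may", "suggest", "possibly"}
sentences = [
    "these results may indicate a possible role for the gene",
    "the protein binds the receptor",
]

candidate_counts = Counter()
for sent in sentences:
    words = sent.split()
    if seed_cues & set(words):               # sentence contains a known cue
        candidate_counts.update(w for w in words if w not in seed_cues)

# Frequent co-occurring words become candidates for the next iteration.
print(candidate_counts.most_common(3))
```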
natural language is mentioned in 3 sentences in this paper.
Wang, Qin Iris and Schuurmans, Dale and Lin, Dekang
Conclusion and Future Work
Another direction is to apply the semi-supervised algorithm to other natural language problems, such as machine translation, topic segmentation and chunking.
Introduction
Unfortunately, although significant recent progress has been made in the area of semi-supervised learning, the performance of semi-supervised learning algorithms still falls far short of expectations, particularly in challenging real-world tasks such as natural language parsing or machine translation.
Introduction
… applied to several natural language processing tasks (Yarowsky, 1995; Charniak, 1997; Steedman et al., 2003).
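A generic self-training loop of the kind these citations popularized, sketched with scikit-learn's logistic regression (an illustration of the family of methods, not any single paper's algorithm):

```python
# Generic self-training loop (a sketch, not any one paper's algorithm):
# train on labeled data, add confident predictions on unlabeled data
# to the training set, and retrain.
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_lab, y_lab, X_unlab, threshold=0.95, rounds=5):
    clf = LogisticRegression().fit(X_lab, y_lab)
    for _ in range(rounds):
        if len(X_unlab) == 0:
            break
        probs = clf.predict_proba(X_unlab)
        confident = probs.max(axis=1) >= threshold
        if not confident.any():
            break                              # nothing confident left to add
        X_lab = np.vstack([X_lab, X_unlab[confident]])
        y_lab = np.concatenate(
            [y_lab, clf.classes_[probs[confident].argmax(axis=1)]])
        X_unlab = X_unlab[~confident]
        clf = LogisticRegression().fit(X_lab, y_lab)
    return clf
```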
natural language is mentioned in 3 sentences in this paper.