Index of papers in Proc. ACL 2010 that mention
  • natural language
Cheung, Jackie Chi Kit and Penn, Gerald
Abstract
One goal of natural language generation is to produce coherent text that presents information in a logical order.
Abstract
Then, we incorporate the model enhanced with topological fields into a natural language generation system that generates constituent orders for German text, and show that the added coherence component improves performance slightly, though not statistically significantly.
Introduction
Local coherence modelling has been shown to be useful for tasks like natural language generation and summarization (Barzilay and Lee, 2004) and genre classification (Barzilay and Lapata, 2008).
Introduction
We then embed these topological field annotations into a natural language generation system to show the utility of local coherence information in an applied setting.
Introduction
Filippova and Strube (2007c) also examine the role of the VF in local coherence and natural language generation, focusing on the correlation between VFs and sentential topics.
natural language is mentioned in 9 sentences in this paper.
Topics mentioned in this paper:
Wang, WenTing and Su, Jian and Tan, Chew Lim
Conclusions and Future Works
In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 92–101.
Conclusions and Future Works
Convolution Kernels for Natural Language.
Introduction
The ability to recognize such relations between text units, including identifying and classifying them, provides important information to other natural language processing systems, such as language generation, document summarization, and question answering.
natural language is mentioned in 7 sentences in this paper.
Topics mentioned in this paper:
Heinz, Jeffrey
Abstract
Potential applications include learnable models for aspects of natural language and cognition.
Conclusion and open questions
For theoretical linguistics, it appears that the string extension function f = (LR13,P2), which defines a class of languages which obey restrictions on both contiguous subsequences of length 3 and on discontiguous subsequences of length 2, provides a good first approximation to the segmental phonotactic patterns in natural languages (Heinz, 2007).
Conclusion and open questions
Finally, since the stochastic counterpart of the k-SL class is the n-gram model, it is plausible that probabilistic string extension language classes can form the basis of new natural language processing techniques.
Introduction
One notable case is the Strictly Piecewise (SP) languages, which were originally motivated for two reasons: the learnability properties discussed here and their ability to describe long-distance dependencies in natural language phonology (Heinz, 2007; Heinz, to appear).
Introduction
Another example is the Strictly Local (SL) languages which are the categorical, symbolic version of n-gram models, which are widely used in natural language processing (Jurafsky and Martin, 2008).
Subregular examples
Heinz (2007, 2009a) shows that consonantal harmony patterns in natural language are describable by such SP2 languages and hypothesizes that humans learn them in the way suggested by $3122.
natural language is mentioned in 6 sentences in this paper.
Topics mentioned in this paper:
Vogel, Adam and Jurafsky, Daniel
Abstract
We present a system that learns to follow navigational natural language directions.
Approximate Dynamic Programming
We presented a reinforcement learning system which learns to interpret natural language directions.
Approximate Dynamic Programming
While our results are still preliminary, we believe our model represents a significant advance in learning natural language meaning, drawing its supervision from human demonstration rather than word distributions or hand-labeled semantic tags.
Reinforcement Learning Formulation
Learning exactly which words influence decision making is difficult; reinforcement learning algorithms have problems with the large, sparse feature vectors common in natural language processing.
Related Work
However, they do not learn these representations from text, leaving natural language processing as an open problem.
The Map Task Corpus
Additionally, the instruction giver has a path drawn on her map, and must communicate this path to the instruction follower in natural language.
natural language is mentioned in 6 sentences in this paper.
Topics mentioned in this paper:
Feng, Yansong and Lapata, Mirella
Problem Formulation
Given an image I and a related knowledge database K, create a natural language description C which captures the main content of the image under K. Specifically, in the news story scenario, we will generate a caption C for an image I and its accompanying document D. The training data thus consists of document-image-caption tuples.
Related Work
The picture is first analyzed using image processing techniques into an abstract representation, which is then rendered into a natural language description with a text generation engine.
Related Work
They extract features of human motion and interleave them with a concept hierarchy of actions to create a case frame from which a natural language sentence is generated.
Related Work
Within natural language processing most previous efforts have focused on generating captions to accompany complex graphical presentations (Mittal et al., 1998; Corio and Lapalme, 1999; Fasciano and Lapalme, 2000; Feiner and McKeown, 1990) or on using the captions accompanying information graphics to infer their intended message, e.g., the author’s goal to convey ostensible increase or decrease of a quantity of interest (Elzer et al., 2005).
natural language is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Gómez-Rodríguez, Carlos and Nivre, Joakim
Determining Multiplanarity
Several constraints on non-projective dependency structures have been proposed recently that seek a good balance between parsing efficiency and coverage of non-projective phenomena present in natural language treebanks.
Determining Multiplanarity
However, we have found this not to be a problem when measuring multiplanarity in natural language treebanks, since the effective problem size can be reduced by noting that each connected component of the crossings graph can be treated separately, and that nodes that are not part of a cycle need not be considered. Given that non-projective sentences in natural language tend to have a small proportion of non-projective links (Nivre and Nilsson, 2005), the connected components of their crossings graphs are very small, and k-colourings for them can quickly be found by brute-force search.
Introduction
Dependency-based syntactic parsing has become a widely used technique in natural language processing, and many different parsing models have been proposed in recent years (Yamada and Matsumoto, 2003; Nivre et al., 2004; McDonald et al., 2005a; Titov and Henderson, 2007; Martins et al., 2009).
Preliminaries
Like context-free grammars, projective dependency trees are not sufficient to represent all the linguistic phenomena observed in natural languages, but they have the advantage of being efficiently parsable: their parsing problem can be solved in cubic time with chart parsing techniques (Eisner, 1996; Gomez-Rodriguez et al., 2008), while in the case of general non-projective dependency forests, it is only tractable under strong independence assumptions (McDonald et al., 2005b; McDonald and Satta, 2007).
natural language is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Kuhlmann, Marco and Koller, Alexander and Satta, Giorgio
Conclusion
Of course, at the end of the day, the issue that is more relevant to computational linguistics than a formalism’s ability to generate artificial languages such as L3 is how useful it is for modeling natural languages.
Conclusion
In this sense, our formal result can also be understood as a contribution to a discussion about the expressive power that is needed to model natural languages.
Introduction
It is well-known that CCG can generate languages that are not context-free (which is necessary to capture natural languages), but can still be parsed in polynomial time.
Introduction
On the other hand, as pure multi-modal CCG has been successfully applied to model the syntax of a variety of natural languages, another way to read our results is as contributions to a discussion about the exact expressiveness needed to model natural language.
natural language is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Sammons, Mark and Vydiswaran, V.G.Vinod and Roth, Dan
Introduction
Much of the work in the field of Natural Language Processing is founded on an assumption of semantic compositionality: that there are identifiable, separable components of an unspecified inference process that will develop as research in NLP progresses.
Introduction
While many have (nearly) immediate application to real-world tasks like search, many are also motivated by their potential contribution to more ambitious Natural Language tasks.
Introduction
But there is no clear process for identifying potential tasks (other than consensus by a sufficient number of researchers), nor for quantifying their potential contribution to existing NLP tasks, let alone to Natural Language Understanding.
natural language is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Beaufort, Richard and Roekhaut, Sophie and Cougnon, Louise-Amélie and Fairon, Cédrick
Abstract
In recent years, research in natural language processing has increasingly focused on normalizing SMS messages.
Introduction
Whatever their causes, these deviations considerably hamper any standard natural language processing (NLP) system, which stumbles against so many Out-Of-Vocabulary words.
The normalization models
In natural language processing, a word is commonly defined as “a sequence of alphabetic characters between separators”, and an IV word is simply a word that belongs to the lexicon in use.
natural language is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Hassan, Ahmed and Radev, Dragomir R.
Abstract
Automatically identifying the polarity of words is a very important task in Natural Language Processing.
Conclusions
Predicting the semantic orientation of words is a very interesting task in Natural Language Processing and it has a wide variety of applications.
Introduction
Identifying emotions and attitudes from unstructured text is a very important task in Natural Language Processing.
natural language is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Honnibal, Matthew and Curran, James R. and Bos, Johan
Background and motivation
Formalisms like HPSG (Pollard and Sag, 1994), LFG (Kaplan and Bresnan, 1982), and CCG (Steedman, 2000) are linguistically motivated in the sense that they attempt to explain and predict the limited variation found in the grammars of natural languages.
Conclusion
Research in natural language understanding is driven by the datasets that we have available.
Introduction
Progress in natural language processing relies on direct comparison on shared data, discouraging improvements to the evaluation data.
natural language is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Schmitz, Sylvain
Computational Complexity
Note that we only consider uniform membership, since grammars for natural languages are typically considerably larger than input sentences, and their influence can hardly be neglected.
Conclusion
A conclusion with a more immediate linguistic value is that MLIGs and UVG-dls hardly qualify as formalisms for mildly context-sensitive languages, claimed by Joshi (1985) to be adequate for modeling natural languages, and “roughly” defined as the extensions of context-free languages that display
Multiset-Valued Linear Indexed Grammars
Natural languages are known for displaying some limited cross-serial dependencies, as witnessed in linguistic analyses, e.g.
natural language is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Shutova, Ekaterina
Automatic Metaphor Interpretation
Their system, however, does not take natural language sentences as input, but logical expressions that are representations of small discourse fragments.
Conclusion and Future Directions
natural language computation, whereby manually crafted rules gradually give way to more robust corpus-based statistical methods.
Introduction
The use of metaphor is ubiquitous in natural language text and it is a serious bottleneck in automatic text understanding.
natural language is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Tomasoni, Mattia and Huang, Minlie
Introduction
cQA websites are becoming an increasingly popular complement to search engines: overnight, a user can expect a human-crafted, natural language answer tailored to her specific needs.
Related Work
(2009) with a system that makes use of semantic-aware Natural Language Preprocessing techniques.
The summarization framework
BEs are a strong theoretical instrument for tackling the ambiguity inherent in natural language, and they find successful practical application in real-world query-based summarization systems.
natural language is mentioned in 3 sentences in this paper.
Topics mentioned in this paper: