Index of papers in Proc. ACL 2014 that mention
  • natural language
Tian, Ran and Miyao, Yusuke and Matsuzaki, Takuya
Abstract
Dependency-based Compositional Semantics (DCS) is a framework of natural language semantics with easy-to-process structures as well as strict semantics.
Conclusion and Discussion
The pursuit of a logic more suitable for natural language inference is not new.
Conclusion and Discussion
Much work has been done in mapping natural language into database queries (Cai and Yates, 2013; Kwiatkowski et al., 2013; Poon, 2013).
Conclusion and Discussion
can thus be considered as an attempt to characterize a fragment of FOL that is suited for both natural language inference and transparent syntax-semantics mapping, through the choice of operations and relations on sets.
Introduction
It is expressive enough to represent complex natural language queries on a relational database, yet simple enough to be latently learned from question-answer pairs.
The Idea
In this section we describe the idea of representing natural language semantics by DCS trees, and achieving inference by computing logical relations among the corresponding abstract denotations.
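For intuition, a minimal sketch (not the authors' implementation): if abstract denotations are modeled as plain sets of entities, one logical relation useful for inference, subsumption, reduces to a subset check. All names and data below are invented.

```python
# Toy illustration only: denotations as Python sets, entailment as subsumption.
students = {"ann", "bob", "eve"}    # denotation of "student"
graduates = {"ann", "bob"}          # denotation of "graduate student"

def entails(denotation_a, denotation_b):
    """A entails B if every entity satisfying A also satisfies B."""
    return denotation_a <= denotation_b

print(entails(graduates, students))  # True: "graduate student" entails "student"
```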
The Idea
DCS trees have been proposed to represent natural language semantics with a structure similar to dependency trees (Liang et al., 2011) (Figure 1).
The Idea
2.4.1 Natural language to DCS trees
natural language is mentioned in 10 sentences in this paper.
Topics mentioned in this paper:
Berant, Jonathan and Liang, Percy
Abstract
Given an input utterance, we first use a simple method to deterministically generate a set of candidate logical forms with a canonical realization in natural language for each.
Canonical utterance construction
Utterance generation While mapping general language utterances to logical forms is hard, we observe that it is much easier to generate a canonical natural language utterance of our choice given a logical form.
Discussion
use a KB over natural language extractions rather than a formal KB and so querying the KB does not require a generation step — they paraphrase questions to KB entries directly.
Introduction
We consider the semantic parsing problem of mapping natural language utterances into logical forms to be executed on a knowledge base (KB) (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Wong and Mooney, 2007; Kwiatkowski et al., 2010).
Introduction
Semantic parsers need to somehow associate natural language phrases with logical predicates, e.g., they must learn that the constructions “What
Introduction
To learn these mappings, traditional semantic parsers use data which pairs natural language with the KB.
Model overview
candidate logical forms Z_x, and then for each z ∈ Z_x generate a small set of canonical natural language utterances C_z.
Model overview
Second, natural language utterances often do not express predicates explicitly, e.g., the question “What is Italy’s money?” expresses the binary predicate CurrencyOf with a possessive construction.
Model overview
Our framework accommodates any paraphrasing method, and in this paper we propose an association model that learns to associate natural language phrases that co-occur frequently in a monolingual parallel corpus, combined with a vector space model, which learns to score the similarity between vector representations of natural language utterances (Section 5).
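As a rough illustration of the vector-space component, the sketch below scores an utterance against a canonical utterance with cosine similarity over bag-of-words vectors; the paper learns richer representations, and the example utterances here are invented.

```python
import math
from collections import Counter

def cosine(u, v):
    """Cosine similarity between two sparse bag-of-words vectors."""
    dot = sum(u[w] * v[w] for w in u if w in v)
    norm = math.sqrt(sum(x * x for x in u.values())) * \
           math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

utterance = Counter("what is italy 's money".split())
canonical = Counter("what is the currency of italy".split())
print(round(cosine(utterance, canonical), 3))  # ~0.548
```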
natural language is mentioned in 9 sentences in this paper.
Topics mentioned in this paper:
Pasupat, Panupong and Liang, Percy
Abstract
In this paper, we consider a new zero-shot learning task of extracting entities specified by a natural language query (in place of seeds) given only a single web page.
Discussion
In our case, we only have the natural language query, which presents the more difficult problem of associating the entity class in the query (e.g., hiking trails) to concrete entities (e.g., Avalon Super Loop).
Discussion
Another related line of work is information extraction from text, which relies on natural language patterns to extract categories and relations of entities.
Discussion
In future work, we would like to explore the issue of compositionality in queries by aligning linguistic structures in natural language with the relative position of entities on web pages.
Introduction
In this paper, we propose a novel task, zero-shot entity extraction, where the specification of the desired entities is provided as a natural language query.
Introduction
In our setting, we take as input a natural language query and extract entities from a single web page.
Introduction
For evaluation, we created the OPENWEB dataset comprising natural language queries from the Google Suggest API and diverse web pages returned from web search.
Problem statement
We define the zero-shot entity extraction task as follows: let x be a natural language query (e.g., hiking trails near Baltimore), and w be a web page.
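The input/output contract of the task can be pictured as a stub like the following; the function name and types are illustrative only, since the paper learns this mapping rather than hand-coding it.

```python
# Illustrative signature only; the extractor itself is learned, not rule-based.
def extract_entities(query: str, page_html: str) -> list[str]:
    """Given a natural language query and a single web page, return the
    entities on the page specified by the query (zero-shot: no seed entities)."""
    raise NotImplementedError

# e.g. extract_entities("hiking trails near Baltimore", html)
#      -> ["Avalon Super Loop", ...]
```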
natural language is mentioned in 9 sentences in this paper.
Topics mentioned in this paper:
Yao, Xuchen and Van Durme, Benjamin
Abstract
Answering natural language questions using the Freebase knowledge base has recently been explored as a platform for advancing the state of the art in open domain semantic parsing.
Approach
One challenge for natural language querying against a KB is the relative informality of queries as compared to the grammar of a KB.
Conclusion
To compensate for the problem of domain mismatch or overfitting, we exploited ClueWeb, mined mappings between KB relations and natural language text, and showed that it helped both relation prediction and answer extraction.
Graph Features
However, most Freebase relations are framed in a way that is not commonly addressed in natural language questions.
Introduction
Question answering (QA) from a knowledge base (KB) has a long history within natural language processing, going back to the 1960s and 1970s, with systems such as Baseball (Green Jr et al., 1961) and Lunar (Woods, 1977).
Introduction
These systems were limited to closed domains due to a lack of knowledge resources, computing power, and ability to robustly understand natural language.
natural language is mentioned in 6 sentences in this paper.
Topics mentioned in this paper:
Kushman, Nate and Artzi, Yoav and Zettlemoyer, Luke and Barzilay, Regina
Conclusion
Eventually, we hope to extend the techniques to synthesize even more complex structures, such as computer programs, from natural language.
Experimental Setup
As the questions are posted to a web forum, the posts often contained additional comments which were not part of the word problems, and the solutions were embedded in long freeform natural language descriptions.
Mapping Word Problems to Equations
This allows for a tighter mapping between the natural language and the system template, where the words aligned to the first equation in the template come from the first two sentences, and the words aligned to the second equation come from the third.
Model Details
Document level features Oftentimes the natural language in x will contain words or phrases which are indicative of a certain template, but are not associated with any of the words aligned to slots in the template.
Model Details
Single Slot Features The natural language x always contains one or more questions or commands indicating the queried quantities.
Related Work
Situated Semantic Interpretation There is a large body of research on learning to map natural language to formal meaning representations, given varied forms of supervision.
natural language is mentioned in 6 sentences in this paper.
Topics mentioned in this paper:
Bhat, Suma and Xue, Huichao and Yoon, Su-Youn
Abstract
Second, the measure makes sense theoretically, both from algorithmic and native language acquisition points of view.
Conclusions
We also make an interesting observation that the impressionistic evaluation of syntactic complexity is better approximated by the presence or absence of grammar and usage patterns (and not by their frequency of occurrence), an idea supported by studies in native language acquisition.
Discussions
Studies in native language acquisition have considered multiple grammatical developmental indices that represent the grammatical levels reached at various stages of language acquisition.
Introduction
In the domain of native language acquisition, the presence or absence of a grammatical structure indicates grammatical development.
Models for Measuring Grammatical Competence
The inductive classifier we use here is the maximum-entropy model (MaxEnt) which has been used to solve several statistical natural language processing problems with much success (Berger et al., 1996; Borthwick et al., 1998; Borthwick, 1999; Pang et al., 2002; Klein et al., 2003; Rosenfeld, 2005).
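As a concrete stand-in, multinomial logistic regression (equivalent to a MaxEnt model) can be trained over binary pattern-presence features with scikit-learn; the features and labels below are hypothetical.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical data: presence/absence of grammar patterns per response.
X_dicts = [{"has_relclause": 1, "has_passive": 0},
           {"has_relclause": 1, "has_passive": 1},
           {"has_relclause": 0, "has_passive": 0}]
y = ["high", "high", "low"]  # proficiency labels

vec = DictVectorizer()
clf = LogisticRegression(max_iter=1000).fit(vec.fit_transform(X_dicts), y)
print(clf.predict(vec.transform([{"has_relclause": 1, "has_passive": 0}])))
```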
natural language is mentioned in 5 sentences in this paper.
Topics mentioned in this paper:
Liu, Shujie and Yang, Nan and Li, Mu and Zhou, Ming
Conclusion and Future Work
We will apply our proposed R2NN to other tree structure learning tasks, such as natural language parsing.
Introduction
When applying DNNs to natural language processing (NLP), a representation or embedding of words is usually learnt first.
Introduction
Recursive neural networks, which have the ability to generate a tree structured output, are applied to natural language parsing (Socher et al., 2011), and they are extended to recursive neural tensor networks to explore the compositional aspect of semantics (Socher et al., 2013).
Our Model
To generate a tree structure, recursive neural networks are introduced for natural language parsing (Socher et al., 2011).
Our Model
For example, for natural language parsing, s[l,n] is the representation of the parent node, which could be an NP or VP node, and it is also the representation of the whole subtree covering from l to n.
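The composition step this describes is the standard recursive-network rule: a parent vector is computed from its two children as p = tanh(W[c_left; c_right] + b). A minimal numpy sketch with invented dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4                                # embedding dimensionality (illustrative)
W = rng.standard_normal((d, 2 * d))  # composition matrix
b = rng.standard_normal(d)

def compose(left, right):
    """Parent representation of a subtree from its two child vectors."""
    return np.tanh(W @ np.concatenate([left, right]) + b)

the, cat = rng.standard_normal(d), rng.standard_normal(d)
np_node = compose(the, cat)          # representation of the subtree "the cat"
print(np_node.shape)                 # (4,)
```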
natural language is mentioned in 5 sentences in this paper.
Topics mentioned in this paper:
McKinley, Nathan and Ray, Soumya
Conclusion
We have proposed STRUCT, a general-purpose natural language generation system which is comparable to current state-of-the-art generators.
Introduction
Natural language generation (NLG) develops techniques to extend similar capabilities to automated systems.
Related Work
This is then used by a surface realization module which encodes the enriched semantic representation into natural language.
Sentence Tree Realization with UCT
In the MDP we use for NLG, we must define each element of the tuple in such a way that a plan in the MDP becomes a sentence in a natural language.
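A hedged sketch of such an MDP tuple (S, A, T, R), with states simplified to partial word sequences rather than the partial syntax trees STRUCT actually plans over:

```python
GAMMA = 0.9  # discount factor (illustrative value)

def actions(state):
    """A: in this toy version, actions append words; STRUCT proposes tree edits."""
    return ["the", "dog", "barks", "<stop>"]

def transition(state, action):
    """T: deterministic; appending a word yields the next partial sentence."""
    return state if action == "<stop>" else state + [action]

def reward(state):
    """R: reward plans whose yield matches the communicative goal (toy check)."""
    return 1.0 if state == ["the", "dog", "barks"] else 0.0

plan = []
for a in ["the", "dog", "barks"]:  # a plan in the MDP ...
    plan = transition(plan, a)
print(reward(plan), plan)          # ... is a sentence: 1.0 ['the', 'dog', 'barks']
```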
natural language is mentioned in 5 sentences in this paper.
Topics mentioned in this paper:
Zou, Bowei and Zhou, Guodong and Zhu, Qiaoming
Abstract
Negative expressions are common in natural language text and play a critical role in information extraction.
Introduction
Negation expressions are common in natural language text.
Related Work
Horn, 1989; van der Wouden, 1997), and there were only a few studies in natural language processing, focusing on negation recognition in the biomedical domain.
Related Work
Due to the increasing demand for deep understanding of natural language text, negation recognition has been drawing more and more attention in recent years, with a series of shared tasks and workshops focusing on cue detection and scope resolution, such as the BioNLP 2009 shared task on negative event detection (Kim et al., 2009) and the ACL 2010 Workshop on scope resolution of negation and speculation (Morante and Sporleder, 2010), followed by a special issue of Computational Linguistics (Morante and Sporleder, 2012) on modality and negation.
natural language is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Silberer, Carina and Lapata, Mirella
Autoencoders for Grounded Semantics
As our input consists of natural language attributes, the model would infer textual attributes given visual attributes and vice versa.
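Schematically, a bimodal autoencoder encodes the concatenated textual and visual attribute vectors and decodes both back, so zeroing out one modality at the input lets the decoder fill it in. The sketch below uses random, untrained weights and invented dimensions purely to show the data flow:

```python
import numpy as np

rng = np.random.default_rng(1)
dt, dv, h = 5, 5, 3                # textual dim, visual dim, hidden dim
W_enc = rng.standard_normal((h, dt + dv))
W_dec = rng.standard_normal((dt + dv, h))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def reconstruct(textual, visual):
    """Encode both modalities jointly, then decode both back."""
    code = sigmoid(W_enc @ np.concatenate([textual, visual]))
    out = sigmoid(W_dec @ code)
    return out[:dt], out[dt:]      # reconstructed textual, visual attributes

inferred_text, _ = reconstruct(np.zeros(dt), rng.random(dv))
print(inferred_text)               # textual attributes inferred from vision alone
```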
Conclusions
The two modalities are encoded as vectors of natural language attributes and are obtained automatically from decoupled text and image data.
Introduction
Recent years have seen a surge of interest in single word vector spaces (Turney and Pantel, 2010; Collobert et al., 2011; Mikolov et al., 2013) and their successful use in many natural language applications.
Related Work
The visual and textual modalities on which our model is trained are decoupled in that they are not derived from the same corpus (we would expect co-occurring images and text to correlate to some extent) but unified in their representation by natural language attributes.
natural language is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Nakashole, Ndapandula and Mitchell, Tom M.
Fact Candidates
Natural language is diverse.
Introduction
Information extraction projects aim to distill relational facts from natural language text (Auer et al., 2007; Bollacker et al., 2008; Carlson et al., 2010; Fader et al., 2011; Nakashole et al., 2011; Del Corro and Gemulla, 2013).
Introduction
However, such scores are often tied to the extractor’s ability to read and understand natural language text.
Related Work
The focus is on understanding natural language, including the use of negation.
natural language is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Flanigan, Jeffrey and Thomson, Sam and Carbonell, Jaime and Dyer, Chris and Smith, Noah A.
Introduction
Semantic parsing is the problem of mapping natural language strings into meaning representations.
Introduction
Although it does not encode quantifiers, tense, or modality, the set of semantic phenomena included in AMR was selected with natural language applications—in particular, machine translation—in mind.
Related Work
While all semantic parsers aim to transform natural language text to a formal representation of its meaning, there is wide variation in the meaning representations and parsing techniques used.
Related Work
Natural language interfaces for querying databases have served as another driving application (Zelle and Mooney, 1996; Kate et al., 2005; Liang et al., 2011, inter alia).
natural language is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Paperno, Denis and Pham, Nghia The and Baroni, Marco
Compositional distributional semantics
This underlying intuition, adopted from formal semantics of natural language, motivated the creation of the lexical function model of composition (lf) (Baroni and Zamparelli, 2010; Coecke et al., 2010).
Compositional distributional semantics
The lf model can be seen as a projection of the symbolic Montagovian approach to semantic composition in natural language onto the domain of vector spaces and linear operations on them (Baroni et al., 2013).
Compositional distributional semantics
The full range of semantic types required for natural language processing, including those of adverbs and transitive verbs, has to include, however, tensors of greater rank.
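Concretely, in the lf model a noun is a vector, an adjective a matrix, and a transitive verb a rank-3 tensor; composition is (multi)linear contraction. A toy sketch with random values standing in for learned representations:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 3
moon = rng.standard_normal(d)            # noun: a vector
red = rng.standard_normal((d, d))        # adjective: a matrix (lexical function)
chase = rng.standard_normal((d, d, d))   # transitive verb: a rank-3 tensor

red_moon = red @ moon                    # "red moon": matrix-vector product
dog, cat = rng.standard_normal(d), rng.standard_normal(d)
dog_chase_cat = np.einsum("ijk,j,k->i", chase, dog, cat)  # fill both argument slots

print(red_moon.shape, dog_chase_cat.shape)  # (3,) (3,)
```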
natural language is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Nguyen, Minh Luan and Tsang, Ivor W. and Chai, Kian Ming A. and Chieu, Hai Leong
Problem Statement
Given a pair of entities (A,B) in S, the first step is to express the relation between A and B with some feature representation using a feature extraction scheme x. Lexical or syntactic patterns have been successfully used in numerous natural language processing tasks, including relation extraction.
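A minimal example of such a scheme (invented for illustration, not the paper's): take the words between the two entity mentions as binary lexical-pattern features.

```python
def lexical_features(tokens, a, b):
    """Toy feature scheme for a pair (A, B): the words between the two
    entity mentions, a common lexical pattern for relation extraction."""
    i, j = tokens.index(a), tokens.index(b)
    lo, hi = min(i, j), max(i, j)
    return {f"between={w}": 1 for w in tokens[lo + 1:hi]}

sent = "Smith was born in Boston".split()
print(lexical_features(sent, "Smith", "Boston"))
# {'between=was': 1, 'between=born': 1, 'between=in': 1}
```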
Problem Statement
Each node is augmented with relevant part-of-speech (POS) using the Python Natural Language Processing Tool Kit.
Robust Domain Adaptation
Because not-a-relation is a background or default relation type in the relation classification task, and because it has rather high variation when manifested in natural language , we have found it difficult to obtain a distance metric W that allows the not-a-relation samples to form clusters naturally using transductive inference.
natural language is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Mitra, Sunny and Mitra, Ritwik and Riedl, Martin and Biemann, Chris and Mukherjee, Animesh and Goyal, Pawan
Introduction
Two of the fundamental components of natural language communication are word sense discovery (Jones, 1986) and word sense disambiguation (Ide and Veronis, 1998).
Introduction
These two aspects are not only important from the perspective of developing computer applications for natural languages but also form the key components of language evolution and change.
Related work
Word sense disambiguation as well as word sense discovery have both remained key areas of research right from the very early initiatives in natural language processing research.
natural language is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
van Gompel, Maarten and van den Bosch, Antal
Abstract
We describe a system capable of translating native language (L1) fragments to foreign language (L2) fragments in an L2 context.
Abstract
The type of translation assistance system under investigation here encourages language learners to write in their target language while allowing them to fall back to their native language in case the correct word or expression is not known.
Introduction
Whereas machine translation generally concerns the translation of whole sentences or texts from one language to the other, this study focusses on the translation of native language (henceforth L1) words and phrases, i.e.
natural language is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Gyawali, Bikash and Gardent, Claire
Related Work
In all these approaches, grammar and lexicon are developed manually and it is assumed that the lexicon associates semantic sub-formulae with natural language expressions.
Related Work
As discussed in (Power and Third, 2010), one important limitation of these approaches is that they assume a simple deterministic mapping between knowledge representation languages and some controlled natural language (CNL).
Related Work
(Lu and Ng, 2011) focuses on generating natural language sentences from logical form (i.e., lambda terms) using a synchronous context-free grammar.
natural language is mentioned in 3 sentences in this paper.
Topics mentioned in this paper: