Index of papers in Proc. ACL that mention
  • meaning representation
Silberer, Carina and Lapata, Mirella
Autoencoders for Grounded Semantics
Our model learns higher-level meaning representations for single words from textual and visual input in a joint fashion.
Autoencoders for Grounded Semantics
To learn meaning representations of single words from textual and visual input, we employ stacked (denoising) autoencoders (SAEs).
Autoencoders for Grounded Semantics
Then, we join these two SAEs by feeding their respective second coding simultaneously to another autoencoder, whose hidden layer thus yields the fused meaning representation.
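A minimal sketch may help picture that fusion step; the layer sizes, the sigmoid nonlinearity, and the untrained random weights below are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

def encoder(dim_in, dim_out):
    # One autoencoder's encoding layer (weights untrained; illustration only).
    W = rng.normal(scale=0.1, size=(dim_out, dim_in))
    b = np.zeros(dim_out)
    return lambda x: sigmoid(W @ x + b)

# Two stacked unimodal encoders (hypothetical sizes): textual and visual.
text_enc = [encoder(1000, 300), encoder(300, 100)]
img_enc = [encoder(500, 300), encoder(300, 100)]

def second_coding(stack, x):
    for layer in stack:
        x = layer(x)
    return x

# The bimodal autoencoder consumes both second codings at once;
# its hidden layer plays the role of the fused meaning representation.
fuse = encoder(200, 100)

x_text = rng.random(1000)  # toy textual attribute vector
x_img = rng.random(500)    # toy visual attribute vector
fused = fuse(np.concatenate([second_coding(text_enc, x_text),
                             second_coding(img_enc, x_img)]))
print(fused.shape)  # (100,): the fused meaning representation
```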
Conclusions
In this paper, we presented a model that uses stacked autoencoders to learn grounded meaning representations by simultaneously combining textual and visual modalities.
Experimental Setup
We learn meaning representations for the nouns contained in McRae et al.’s (2005) feature norms.
Experimental Setup
We used the model described above and the meaning representations obtained from the output of the bimodal latent layer for all the evaluation tasks detailed below.
Introduction
Despite differences in formulation, most existing models conceptualize the problem of meaning representation as one of learning from multiple views corresponding to different modalities.
Introduction
In this work, we introduce a model, illustrated in Figure 1, which learns grounded meaning representations by mapping words and images into a common embedding space.
Introduction
Unlike most previous work, our model is defined at a finer level of granularity — it computes meaning representations for individual words and is unique in its use of attributes as a means of representing the textual and visual modalities.
Related Work
The use of stacked autoencoders to extract a shared lexical meaning representation is new to our knowledge, although, as we explain below, it is related to a large body of work on deep learning.
meaning representation is mentioned in 11 sentences in this paper.
Zarriess, Sina and Cahill, Aoife and Kuhn, Jonas
Experimental Setup
The resulting f-structure parses are transferred to meaning representations and mapped back to f-structure charts.
Experimental Setup
In our system, this correlation is modelled by a combination of linguistic properties that can be extracted from the f-structure or meaning representation and of the surface order that is read off the sentence string.
Experiments
We built 3 datasets from our alternation data: FS - candidates generated from the f-structure; SEMn - realisations from the naive meaning representations; SEMh - candidates from the heuristically underspecified meaning representation.
Generation Architecture
To obtain a more abstract underlying representation (in the pipeline on the right-hand side of Figure 1), the present work uses an additional semantic construction component (Crouch and King, 2006; Zarrieß, 2009) to map LFG f-structures to meaning representations.
Generation Architecture
For the reverse direction, the meaning representations are mapped to f-structures which can then be mapped to surface strings by the XLE generator (Zarrieß and Kuhn, 2010).
Generation Architecture
The subject of the passive and the object of the active f-structure are mapped to the same role (patient) in the meaning representation.
meaning representation is mentioned in 13 sentences in this paper.
Poon, Hoifung
Background
The goal of semantic parsing is to map text to a complete and detailed meaning representation (Mooney, 2007).
Background
The standard language for meaning representation is first-order logic or a sublanguage, such as FunQL (Kate et al., 2005; Clarke et al., 2010) and lambda calculus (Zettlemoyer and Collins, 2005; Zettlemoyer and Collins, 2007).
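To make the two styles concrete, here is a small hypothetical illustration for the question “What states border Texas?”; the predicate names are invented for exposition rather than drawn from the exact FunQL or GeoQuery inventories:

```python
# Lambda-calculus style: a set-denoting form with a bound variable.
lambda_mr = "lambda x. state(x) & borders(x, texas)"

# FunQL style: variable-free, tree-shaped function application.
funql_mr = "answer(state(next_to(stateid('texas'))))"

print(lambda_mr)
print(funql_mr)
```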
Background
Poon & Domingos (2009, 2010) induce a meaning representation by clustering synonymous lambda-calculus forms stemming from partitions of dependency trees.
Conclusion
This paper introduces grounded unsupervised semantic parsing, which leverages an available database for indirect supervision and uses a grounded meaning representation to account for syntax-semantics mismatch in dependency-based semantic parsing.
Grounded Unsupervised Semantic Parsing
To combat this problem, GUSP introduces a novel dependency-based meaning representation with an augmented state space to account for semantic relations that are nonlocal in the dependency tree.
Grounded Unsupervised Semantic Parsing
However, GUSP uses a different meaning representation defined over individual nodes and edges, rather than partitions, which enables linear-time exact inference.
Grounded Unsupervised Semantic Parsing
Their approach alleviates some complexity in the meaning representation for handling syntax-semantics mismatch, but it has to search over a much larger search space involving exponentially many candidate trees.
Introduction
Semantic parsing maps text to a formal meaning representation such as logical forms or structured queries.
Introduction
To handle syntax-semantics mismatch, GUSP introduces a novel dependency-based meaning representation
meaning representation is mentioned in 11 sentences in this paper.
Titov, Ivan and Kozhevnikov, Mikhail
A Model of Semantics
contradiction is trivial: two meaning representations
A Model of Semantics
As soon as the meaning representations m* are inferred, we find ourselves in the setup studied in (Liang et al., 2009): the state s is no longer latent and we can run efficient inference on the E-step.
Inference with NonContradictory Documents
In this section we will describe our inference method on a higher conceptual level, not specifying the underlying meaning representation and the probabilistic model.
Inference with NonContradictory Documents
regarded as defining the probability distribution of meaning m and its alignment a with the given text w, P(m, a, w). The semantics m can be represented either as a logical formula (see, e.g., (Poon and Domingos, 2009)) or as a set of field values if database records are used as a meaning representation (Liang et al., 2009).
Inference with NonContradictory Documents
meanings (m1, ..., mK) such that m1 ∧ ... ∧ mK is not satisfiable, and models dependencies between components in the composite meaning representation (e.g., argument values of predicates).
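Under the database-record reading of meanings discussed in this paper, non-contradiction has a simple concrete form; the field-value sketch below is one possible instantiation, assumed here for illustration:

```python
def contradicts(m1: dict, m2: dict) -> bool:
    # Field-value MRs contradict iff a shared field gets two different values.
    return any(f in m2 and m2[f] != v for f, v in m1.items())

def jointly_satisfiable(meanings: list) -> bool:
    # The conjunction m1 and ... and mK is satisfiable iff no pair contradicts.
    return not any(contradicts(a, b)
                   for i, a in enumerate(meanings)
                   for b in meanings[i + 1:])

# Toy weather meanings for one time period (field names invented).
m1 = {"wind_dir": "west", "sky": "cloudy"}
m2 = {"wind_dir": "east"}  # clashes with m1 on wind_dir
print(jointly_satisfiable([m1, m2]))  # False
```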
Introduction
The supervision was either given in the form of meaning representations aligned with sentences (Zettlemoyer and Collins, 2005; Ge and Mooney, 2005; Mooney, 2007) or in a somewhat more relaxed form, such as lists of candidate meanings for each sentence (Kate and Mooney, 2007; Chen and Mooney, 2008) or formal representations of the described world state for each text (Liang et al., 2009).
Introduction
However, it is important to note that the phrase “wind from west” may still appear in the texts, but in reference to other time periods, underlining the need for modeling alignment between grouped texts and their latent meaning representation.
meaning representation is mentioned in 10 sentences in this paper.
Packard, Woodley and Bender, Emily M. and Read, Jonathon and Oepen, Stephan and Dridan, Rebecca
Abstract
derives the notion of negation scope assumed in this task from the structure of logical-form meaning representations.
Conclusion and Outlook
(2011), on the one hand, and the broad-coverage, MRS meaning representations of the ERG, on the other hand.
Conclusion and Outlook
Unlike the rather complex top-performing systems from the original 2012 competition, our MRS Crawler is defined by a small set of general rules that operate over general-purpose, explicit meaning representations .
Introduction
Our system implements these findings through a notion of functor-argument ‘crawling’, using as our starting point the underspecified logical-form meaning representations provided by a general-purpose deep parser.
System Description
This system operates over the normalized semantic representations provided by the LinGO English Resource Grammar (ERG; Flickinger, 2000). The ERG maps surface strings to meaning representations in the format of Minimal Recursion Semantics (MRS; Copestake et al., 2005).
System Description
In other words, a possible semantic interpretation of the (string-based) Shared Task annotation guidelines and data is in terms of a quantifier-free approach to meaning representation, or in terms of one where quantifier scope need not be made explicit (as once suggested by, among others, Alshawi, 1992).
meaning representation is mentioned in 6 sentences in this paper.
Lee, Kenton and Artzi, Yoav and Dodge, Jesse and Zettlemoyer, Luke
Abstract
We use a Combinatory Categorial Grammar to construct compositional meaning representations, while considering contextual cues, such as the document creation time and the tense of the governing verb, to compute the final time values.
Conclusion
Both models used a Combinatory Categorial Grammar (CCG) to construct a set of possible temporal meaning representations.
Formal Overview
For both tasks, we define the space of possible compositional meaning representations Z, where each z ∈ Z defines a unique time expression e.
Introduction
For both tasks, we make use of a hand-engineered Combinatory Categorial Grammar (CCG) to construct a set of meaning representations that identify the time being described.
Introduction
For example, this grammar maps the phrase “2nd Friday of July” to the meaning representation intersect(nth(2,friday),july), which encodes the set of all such days.
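A toy interpreter makes the compositional reading of that example concrete; treating denotations as sets of dates, and reading nth relative to the containing month, are assumptions made for illustration:

```python
import datetime as dt

YEAR = 2014  # hypothetical document-creation-time context

def days(year):
    d = dt.date(year, 1, 1)
    while d.year == year:
        yield d
        d += dt.timedelta(days=1)

def friday():
    return {d for d in days(YEAR) if d.weekday() == 4}

def july():
    return {d for d in days(YEAR) if d.month == 7}

def nth(n, day_set):
    # Days that are the n-th member of day_set within their month.
    return {d for d in day_set
            if sum(1 for e in day_set
                   if (e.year, e.month) == (d.year, d.month) and e <= d) == n}

def intersect(a, b):
    return a & b

# intersect(nth(2, friday), july) ~ "2nd Friday of July"
print(sorted(intersect(nth(2, friday()), july())))  # [datetime.date(2014, 7, 11)]
```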
Related Work
We build on a number of existing algorithmic ideas, including using CCGs to build meaning representations (Zettlemoyer and Collins, 2005; Zettlemoyer and Collins, 2007; Kwiatkowski et al., 2010; Kwiatkowski et al., 2011), building derivations to transform the output of the CCG parser based on context (Zettlemoyer and Collins, 2009), and using weakly supervised parameter updates (Artzi and Zettlemoyer, 2011; Artzi and Zettlemoyer, 2013b).
meaning representation is mentioned in 6 sentences in this paper.
Andreas, Jacob and Vlachos, Andreas and Clark, Stephen
Abstract
Semantic parsing is the problem of deriving a structured meaning representation from a natural language utterance.
Conclusions
We have presented a semantic parser which uses techniques from machine translation to learn mappings from natural language to variable-free meaning representations.
Introduction
Semantic parsing (SP) is the problem of transforming a natural language (NL) utterance into a machine-interpretable meaning representation (MR).
Introduction
At least superficially, SP is simply a machine translation (MT) task: we transform an NL utterance in one language into a statement of another (unnatural) meaning representation language (MRL).
MT—based semantic parsing
Linearization We assume that the MRL is variable-free (that is, the meaning representation for each utterance is tree-shaped), noting that formalisms with variables, like the λ-calculus, can be mapped onto variable-free logical forms with combinatory logics (Curry et al., 1980).
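Linearizing such tree-shaped MRs into token strings, so that standard MT machinery can operate on them, can be sketched as a bracketed preorder traversal; the traversal scheme and the example MR are illustrative choices, not necessarily the paper's exact procedure:

```python
def linearize(tree):
    # Flatten a variable-free, tree-shaped MR into a token sequence.
    if isinstance(tree, str):  # leaf: a constant
        return [tree]
    head, *args = tree         # (function, arg1, arg2, ...)
    tokens = [head, "("]
    for a in args:
        tokens += linearize(a)
    return tokens + [")"]

mr = ("answer", ("state", ("next_to", "texas")))
print(" ".join(linearize(mr)))  # answer ( state ( next_to ( texas ) ) )
```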
Related Work
Other work which generalizes from variable-free meaning representations to λ-calculus expressions includes the natural language generation procedure described by Lu and Ng (2011).
meaning representation is mentioned in 6 sentences in this paper.
Yao, Xuchen and Van Durme, Benjamin
Abstract
Those efforts map questions to sophisticated meaning representations that are then attempted to be matched against viable answer candidates in the knowledge base.
Background
The model challenge involves finding the best meaning representation for the question, converting it into a query and executing the query on the KB.
Background
More recent research started to minimize this direct supervision by using latent meaning representations (Berant et
Background
We instead attack the problem of QA from a KB from an IE perspective: we learn directly the pattern of QA pairs, represented by the dependency parse of questions and the Freebase structure of answer candidates, without the use of intermediate, general purpose meaning representations.
Introduction
Typically questions are converted into some meaning representation (e.g., the lambda calculus), then mapped to database queries.
meaning representation is mentioned in 5 sentences in this paper.
Yu, Haonan and Siskind, Jeffrey Mark
Conclusion
The experiment shows that it can correctly learn the meaning representations in terms of HMM parameters for our lexical entries, from highly ambiguous training data.
Introduction
Language is grounded by mapping words, phrases, and sentences to meaning representations referring to the world.
Introduction
Dominey and Boucher (2005) paired narrated sentences with symbolic representations of their meanings, automatically extracted from video, to learn object names, spatial-relation terms, and event names as a mapping from the grammatical structure of a sentence to the semantic structure of the associated meaning representation.
Introduction
Chen and Mooney (2008) learned the language of sportscasting by determining the mapping between game commentaries and the meaning representations output by a rule-based simulation of the game.
meaning representation is mentioned in 5 sentences in this paper.
Chen, David
Abstract
We show that by changing the grammar of the formal meaning representation language and training on additional data collected from Amazon’s Mechanical Turk we can further improve the results.
Introduction
Building a lexicon of the formal meaning representations of words and phrases, either implicitly or explicitly, is usually an important step in inferring the meanings of entire sentences.
Introduction
In addition to the new lexicon learning algorithm, we also look at modifying the meaning representation grammar (MRG) for their formal semantic language.
Online Lexicon Learning Algorithm
3.1 Changing the Meaning Representation Grammar
meaning representation is mentioned in 4 sentences in this paper.
Thater, Stefan and Fürstenau, Hagen and Pinkal, Manfred
Introduction
We go one step further, however, in that we employ syntactically enriched vector models as the basic meaning representations, assuming a vector space spanned by combinations of dependency relations and words (Lin, 1998).
Related Work
Mitchell and Lapata (2008), henceforth M&L, propose a general framework in which meaning representations for complex expressions are computed compositionally by combining the vector representations of the individual words of the complex expression.
The model
As soon as we want to compute a meaning representation for a phrase like acquire knowledge from the verb acquire together with its direct object knowledge, we are facing the problem that verbs have different syntactic neighbors than nouns, hence their first-order vectors are not easily comparable.
The model
We let the first-order vector with its selectional preference information act as a kind of weighting filter on the second-order vector, and thus refine the meaning representation of the verb.
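One way to picture that weighting filter, under the simplifying assumption that both vectors share the same (relation, word) dimensions, is a componentwise product that amplifies the dimensions compatible with the observed argument; the actual model is more articulated than this toy sketch:

```python
import numpy as np

dims = [("obj", "knowledge"), ("obj", "house"), ("subj", "student")]

# Second-order vector for the verb "acquire" (toy weights).
acquire = np.array([4.0, 3.0, 2.0])
# First-order selectional-preference filter contributed by "knowledge".
knowledge_filter = np.array([1.0, 0.1, 0.5])

# Contextualized meaning of "acquire" in "acquire knowledge".
contextualized = acquire * knowledge_filter
print(dict(zip(dims, contextualized)))
```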
meaning representation is mentioned in 4 sentences in this paper.
Flanigan, Jeffrey and Thomson, Sam and Carbonell, Jaime and Dyer, Chris and Smith, Noah A.
Abstract
Abstract Meaning Representation (AMR) is a semantic formalism for which a growing set of annotated examples is available.
Introduction
Semantic parsing is the problem of mapping natural language strings into meaning representations.
Introduction
Abstract Meaning Representation (AMR) (Banarescu et al., 2013; Dorr et al., 1998) is a semantic formalism in which the meaning of a sentence is encoded as a rooted, directed, acyclic graph.
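For concreteness, the stock example from the AMR literature encodes “The boy wants to go” as the graph below; the reentrant variable b (the boy is both the wanter and the goer) is what makes the structure a graph rather than a tree:

```python
# The classic AMR example in PENMAN notation, held here as a string.
amr = """\
(w / want-01
   :ARG0 (b / boy)
   :ARG1 (g / go-01
            :ARG0 b))"""
print(amr)
```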
Related Work
While all semantic parsers aim to transform natural language text to a formal representation of its meaning, there is wide variation in the meaning representations and parsing techniques used.
meaning representation is mentioned in 4 sentences in this paper.
Mitchell, Jeff and Lapata, Mirella and Demberg, Vera and Keller, Frank
Introduction
The latter creates meaning representations compositionally, and therefore builds semantic expectations for word sequences (e.g., phrases, sentences, even documents) rather than isolated words.
Models of Processing Difficulty
To give a concrete example, Latent Semantic Analysis (LSA, Landauer and Dumais 1997) creates a meaning representation for words by constructing a word-document co-occurrence matrix from a large collection of documents.
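The LSA construction can be sketched in a few lines: count word-document co-occurrences, then keep the top singular dimensions, so that each word's reduced row serves as its meaning representation (the corpus and dimensionality here are toy-sized):

```python
import numpy as np

docs = ["the dog barks", "the cat meows", "the dog chases the cat"]
vocab = sorted({w for d in docs for w in d.split()})

# Word-document co-occurrence counts.
C = np.array([[d.split().count(w) for d in docs] for w in vocab], dtype=float)

# Truncated SVD: rows of U_k * S_k are the k-dimensional word meanings.
U, S, _ = np.linalg.svd(C, full_matrices=False)
k = 2
word_vectors = U[:, :k] * S[:k]
print(dict(zip(vocab, word_vectors.round(2))))
```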
Models of Processing Difficulty
Their aim is not so much to model processing difficulty, but to construct vector-based meaning representations that go beyond individual words.
Models of Processing Difficulty
We also examine the influence of the underlying meaning representations by comparing a simple semantic space similar to McDonald (2000) against Latent Dirichlet Allocation (Blei et al., 2003).
meaning representation is mentioned in 4 sentences in this paper.
Liang, Percy and Jordan, Michael and Klein, Dan
Abstract
To deal with the high degree of ambiguity present in this setting, we present a generative model that simultaneously segments the text into utterances and maps each utterance to a meaning representation grounded in the world state.
Generative Model
Think of the words spanned by a record as constituting an utterance with a meaning representation given by the record and subset of fields chosen.
Introduction
Recent work in learning semantics has focused on mapping sentences to meaning representations (e.g., some logical form) given aligned sentence/meaning pairs as training data (Ge and Mooney, 2005; Zettlemoyer and Collins, 2005; Zettlemoyer and Collins, 2007; Lu et al., 2008).
Introduction
In this less restricted data setting, we must resolve multiple ambiguities: (1) the segmentation of the text into utterances; (2) the identification of relevant facts, i.e., the choice of records and aspects of those records; and (3) the alignment of utterances to facts (facts are the meaning representations of the utterances).
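The data setting behind those three ambiguities can be pictured with a toy example; the record schema, field names, and gold alignment below are invented for illustration:

```python
# Toy world state: records with typed fields.
records = [
    {"type": "windDir", "mode": "west"},
    {"type": "temperature", "min": 10, "max": 21},
]

# Unsegmented text; the latent analysis pairs each utterance span
# with the record (and field subset) that it verbalizes.
text = "wind from the west with highs around 21".split()
alignment = [
    (slice(0, 4), 0, {"mode"}),  # "wind from the west" -> records[0]
    (slice(4, 8), 1, {"max"}),   # "with highs around 21" -> records[1]
]
for span, r, fields in alignment:
    print(" ".join(text[span]), "->", {f: records[r][f] for f in fields})
```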
meaning representation is mentioned in 4 sentences in this paper.
Gyawali, Bikash and Gardent, Claire
Introduction
To evaluate our approach, we use the benchmark provided by the KBGen challenge (Banik et al., 2012; Banik et al., 2013), a challenge designed to evaluate generation from knowledge bases, where the input is a KB subset and the expected output is a complex sentence conveying the meaning represented by the input.
Related Work
(Wong and Mooney, 2007) uses synchronous grammars to transform a variable-free, tree-structured meaning representation into sentences.
Related Work
Random Field to generate from the same meaning representations.
meaning representation is mentioned in 3 sentences in this paper.
Abney, Steven and Bird, Steven
Human Language Project
consensus on parse trees is difficult, obtaining consensus on meaning representations is impossible.
Human Language Project
However, if the language under consideration is anything other than English, then a translation into English (or some other reference language) is for most purposes a perfectly adequate meaning representation .
Human Language Project
Taking sentences in a reference language as the meaning representation, we arrive back at machine translation as the measure of success.
meaning representation is mentioned in 3 sentences in this paper.
Riezler, Stefan and Simianer, Patrick and Haas, Carolin
Grounding SMT in Semantic Parsing
Embedding SMT in a semantic parsing scenario means to define translation quality by the ability of a semantic parser to construct a meaning representation from the translated query, which returns the correct answer when executed against the database.
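A hedged sketch of that quality criterion, with every component a stand-in stub: a translation counts as correct exactly when parsing it and executing the resulting meaning representation yields the gold answer.

```python
def translation_quality(translate, parse, execute, query, gold_answer):
    # 1 if the translated query still executes to the right answer, else 0.
    mr = parse(translate(query))  # construct an MR from the translation
    return int(execute(mr) == gold_answer)

# Stand-in stubs so the sketch runs end-to-end.
translate = lambda q: q.upper()        # pretend MT system
parse = lambda t: ("answer", t)        # pretend semantic parser
execute = lambda mr: "berlin" if "CAPITAL" in mr[1] else None

print(translation_quality(translate, parse, execute,
                          "capital of germany?", "berlin"))  # 1
```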
Related Work
For example, in semantic parsing, the learning goal is to produce and successfully execute a meaning representation .
Response-based Online Learning
(2010) or Goldwasser and Roth (2013) describe a response-driven learning framework for the area of semantic parsing: Here a meaning representation is “tried out” by iteratively generating system outputs, receiving feedback from world interaction, and updating the model parameters.
meaning representation is mentioned in 3 sentences in this paper.
Ge, Ruifang and Mooney, Raymond
Experimental Evaluation
Note the results for SCISSOR, KRISP and LU on GEOQUERY are based on a different meaning representation language, FUNQL, which has been shown to produce lower results (Wong and Mooney, 2007).
Introduction
Semantic parsing is the task of mapping a natural language (NL) sentence into a completely formal meaning representation (MR) or logical form.
Introduction
A meaning representation language (MRL) is a formal unambiguous language that supports automated inference, such as first-order predicate logic.
meaning representation is mentioned in 3 sentences in this paper.