Index of papers in Proc. ACL 2014 that mention
  • meaning representations
Silberer, Carina and Lapata, Mirella
Autoencoders for Grounded Semantics
Our model learns higher-level meaning representations for single words from textual and visual input in a joint fashion.
Autoencoders for Grounded Semantics
To learn meaning representations of single words from textual and visual input, we employ stacked (denoising) autoencoders (SAEs).
Autoencoders for Grounded Semantics
Then, we join these two SAEs by feeding their respective second codings simultaneously to another autoencoder, whose hidden layer thus yields the fused meaning representation.
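To make the fusion step above concrete, here is a minimal NumPy sketch of the bimodal idea as the excerpt describes it: two stacked encoders, one per modality, whose second-layer codes are concatenated and fed to a joint autoencoder. All dimensions and weights are illustrative stand-ins, and the (denoising) reconstruction training of each layer is omitted.

    # Toy sketch of the bimodal fusion idea (not the paper's exact
    # architecture). All weights are random; a real model trains each
    # layer with a denoising reconstruction objective.
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def encoder(sizes):
        """Stack of random affine+sigmoid layers; sizes = [in, h1, ...]."""
        weights = [rng.normal(0, 0.1, (m, n)) for m, n in zip(sizes, sizes[1:])]
        def encode(x):
            for W in weights:
                x = sigmoid(x @ W)
            return x
        return encode

    # Hypothetical dimensionalities for the two attribute vectors.
    encode_text = encoder([500, 300, 100])
    encode_image = encoder([400, 300, 100])
    encode_joint = encoder([200, 120])   # fusion autoencoder, encoder half

    text_vec = rng.random(500)    # textual attribute vector for one word
    image_vec = rng.random(400)   # visual attribute vector, same word

    # Second codings of both modalities feed the joint hidden layer.
    fused = encode_joint(np.concatenate([encode_text(text_vec),
                                         encode_image(image_vec)]))
    print(fused.shape)  # (120,) -- the fused meaning representation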
Conclusions
In this paper, we presented a model that uses stacked autoencoders to learn grounded meaning representations by simultaneously combining textual and visual modalities.
Experimental Setup
We learn meaning representations for the nouns contained in McRae et al.’s (2005) feature norms.
Experimental Setup
We used the model described above and the meaning representations obtained from the output of the bimodal latent layer for all the evaluation tasks detailed below.
Introduction
Despite differences in formulation, most existing models conceptualize the problem of meaning representation as one of learning from multiple views corresponding to different modalities.
Introduction
In this work, we introduce a model, illustrated in Figure 1, which learns grounded meaning representations by mapping words and images into a common embedding space.
Introduction
Unlike most previous work, our model is defined at a finer level of granularity — it computes meaning representations for individual words and is unique in its use of attributes as a means of representing the textual and visual modalities.
Related Work
The use of stacked autoencoders to extract a shared lexical meaning representation is new to our knowledge, although, as we explain below, it is related to a large body of work on deep learning.
meaning representations is mentioned in 11 sentences in this paper.
Lee, Kenton and Artzi, Yoav and Dodge, Jesse and Zettlemoyer, Luke
Abstract
We use a Combinatory Categorial Grammar to construct compositional meaning representations, while considering contextual cues, such as the document creation time and the tense of the governing verb, to compute the final time values.
Conclusion
Both models used a Combinatory Categorial Grammar (CCG) to construct a set of possible temporal meaning representations.
Formal Overview
For both tasks, we define the space of possible compositional meaning representations Z, where each z ∈ Z defines a unique time expression e.
Introduction
For both tasks, we make use of a hand-engineered Combinatory Categorial Grammar (CCG) to construct a set of meaning representations that identify the time being described.
Introduction
For example, this grammar maps the phrase “2nd Friday of July” to the meaning representation intersect(nth(2,friday),july), which encodes the set of all such days.
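To illustrate how such a meaning representation denotes a set of days, here is a toy interpreter for exactly this expression. This is our sketch, not the authors' grammar: the function names mirror the example, while the one-year restriction and the per-month reading of nth are simplifying assumptions.

    # Toy interpreter for intersect(nth(2,friday),july): meaning
    # representations denote sets of days, restricted to one year.
    from datetime import date, timedelta

    YEAR = 2014
    DAYS = [date(YEAR, 1, 1) + timedelta(days=i) for i in range(365)]

    def friday():
        return {d for d in DAYS if d.weekday() == 4}

    def july():
        return {d for d in DAYS if d.month == 7}

    def nth(n, days):
        """n-th member of `days` within each month (e.g., 2nd Friday)."""
        out = set()
        for month in range(1, 13):
            in_month = sorted(d for d in days if d.month == month)
            if len(in_month) >= n:
                out.add(in_month[n - 1])
        return out

    def intersect(a, b):
        return a & b

    # "2nd Friday of July" -> intersect(nth(2, friday), july)
    print(intersect(nth(2, friday()), july()))  # {datetime.date(2014, 7, 11)}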
Related Work
We build on a number of existing algorithmic ideas, including using CCGs to build meaning representations (Zettlemoyer and Collins, 2005; Zettlemoyer and Collins, 2007; Kwiatkowski et al., 2010; Kwiatkowski et al., 2011), building derivations to transform the output of the CCG parser based on context (Zettlemoyer and Collins, 2009), and using weakly supervised parameter updates (Artzi and Zettlemoyer, 2011; Artzi and Zettlemoyer, 2013b).
meaning representations is mentioned in 6 sentences in this paper.
Packard, Woodley and Bender, Emily M. and Read, Jonathon and Oepen, Stephan and Dridan, Rebecca
Abstract
…derives the notion of negation scope assumed in this task from the structure of logical-form meaning representations.
Conclusion and Outlook
…(2011), on the one hand, and the broad-coverage MRS meaning representations of the ERG, on the other hand.
Conclusion and Outlook
Unlike the rather complex top-performing systems from the original 2012 competition, our MRS Crawler is defined by a small set of general rules that operate over general-purpose, explicit meaning representations.
Introduction
Our system implements these findings through a notion of functor-argument ‘crawling’, using as our starting point the underspecified logical-form meaning representations provided by a general-purpose deep parser.
System Description
This system operates over the normalized semantic representations provided by the LinGO English Resource Grammar (ERG; Flickinger, 2000). The ERG maps surface strings to meaning representations in the format of Minimal Recursion Semantics (MRS; Copestake et al., 2005).
System Description
In other words, a possible semantic interpretation of the (string-based) Shared Task annotation guidelines and data is in terms of a quantifier-free approach to meaning representation, or in terms of one where quantifier scope need not be made explicit (as once suggested by, among others, Alshawi, 1992).
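The excerpts do not show the crawling rules themselves, but the basic functor-argument idea can be sketched on a toy predicate-argument graph. The simplified triples below stand in for real MRS; the actual ERG representations and crawler rules are much richer.

    # Purely illustrative functor-argument "crawling" (not real MRS).
    # Scope is grown by following argument links outward from the
    # negation's argument.
    # Toy graph for "Kim did not see the dog": (functor, role, argument).
    GRAPH = [
        ("neg", "ARG1", "see"),
        ("see", "ARG1", "kim"),
        ("see", "ARG2", "dog"),
        ("the", "BV",   "dog"),
    ]

    def crawl_scope(graph, negation="neg"):
        """Collect every node reachable from the negated predicate."""
        start = {arg for pred, _, arg in graph if pred == negation}
        scope, frontier = set(), start
        while frontier:
            node = frontier.pop()
            scope.add(node)
            # One toy rule standing in for many: crawl from a functor
            # to each of its arguments.
            for pred, _, arg in graph:
                if pred == node and arg not in scope:
                    frontier.add(arg)
        return scope

    print(sorted(crawl_scope(GRAPH)))  # ['dog', 'kim', 'see']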
meaning representations is mentioned in 6 sentences in this paper.
Yao, Xuchen and Van Durme, Benjamin
Abstract
Those efforts map questions to sophisticated meaning representations that are then attempted to be matched against viable answer candidates in the knowledge base.
Background
The modeling challenge involves finding the best meaning representation for the question, converting it into a query, and executing the query on the KB.
Background
More recent research started to minimize this direct supervision by using latent meaning representations (Berant et al., …).
Background
We instead attack the problem of QA from a KB from an IE perspective: we learn directly the pattern of QA pairs, represented by the dependency parse of questions and the Freebase structure of answer candidates, without the use of intermediate, general-purpose meaning representations.
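A rough sketch of this IE-style setup, with invented features and weights: question-side dependency features are paired with KB-side features of each answer candidate, and the pair is scored directly, with no intermediate meaning representation.

    # Rough sketch of the IE-style approach. Features, weights, and the
    # example question ("who directed Titanic?") are invented.
    QUESTION_FEATS = {"qword=who", "qtopic=director", "dep:nsubj(direct,who)"}

    CANDIDATES = {
        # candidate answer -> its KB-side features (relation to the topic)
        "James Cameron": {"rel=film.director", "type=person"},
        "1997":          {"rel=film.release_date", "type=date"},
    }

    # Learned weights over (question feature, candidate feature) pairs.
    WEIGHTS = {
        ("qword=who", "type=person"): 2.0,
        ("qtopic=director", "rel=film.director"): 3.5,
        ("qword=who", "type=date"): -1.0,
    }

    def score(q_feats, c_feats):
        return sum(WEIGHTS.get((q, c), 0.0) for q in q_feats for c in c_feats)

    best = max(CANDIDATES, key=lambda c: score(QUESTION_FEATS, CANDIDATES[c]))
    print(best)  # James Cameron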
Introduction
Typically questions are converted into some meaning representation (e.g., the lambda calculus), then mapped to database queries.
meaning representations is mentioned in 5 sentences in this paper.
Flanigan, Jeffrey and Thomson, Sam and Carbonell, Jaime and Dyer, Chris and Smith, Noah A.
Abstract
Abstract Meaning Representation (AMR) is a semantic formalism for which a growing set of annotated examples is available.
Introduction
Semantic parsing is the problem of mapping natural language strings into meaning representations .
Introduction
Abstract Meaning Representation (AMR) (Banarescu et al., 2013; Dorr et al., 1998) is a semantic formalism in which the meaning of a sentence is encoded as a rooted, directed, acyclic graph.
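For concreteness, here is the stock example from the AMR literature, "The boy wants to go", in which the boy node is shared (re-entrant) between the two events; the triple encoding and the acyclicity check are our illustration.

    # "The boy wants to go" (Banarescu et al., 2013): a rooted, directed
    # graph where b is an argument of both want-01 and go-01.
    PENMAN = "(w / want-01 :ARG0 (b / boy) :ARG1 (g / go-01 :ARG0 b))"

    # The same graph as explicit triples: (source, role, target).
    NODES = {"w": "want-01", "b": "boy", "g": "go-01"}
    EDGES = [
        ("w", "ARG0", "b"),
        ("w", "ARG1", "g"),
        ("g", "ARG0", "b"),   # re-entrancy: b fills two roles
    ]

    # Check "rooted, acyclic" for this instance: every node is reachable
    # from the root w, and no node can reach itself.
    def reachable(src, edges, seen=None):
        seen = set() if seen is None else seen
        for s, _, t in edges:
            if s == src and t not in seen:
                seen.add(t)
                reachable(t, edges, seen)
        return seen

    assert reachable("w", EDGES) == {"b", "g"}
    assert all(n not in reachable(n, EDGES) for n in NODES)
    print(PENMAN)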
Related Work
While all semantic parsers aim to transform natural language text to a formal representation of its meaning, there is wide variation in the meaning representations and parsing techniques used.
meaning representations is mentioned in 4 sentences in this paper.
Gyawali, Bikash and Gardent, Claire
Introduction
To evaluate our approach, we use the benchmark provided by the KBGen challenge (Banik et al., 2012; Banik et al., 2013), a challenge designed to evaluate generation from knowledge bases, where the input is a KB subset and the expected output is a complex sentence conveying the meaning represented by the input.
Related Work
Wong and Mooney (2007) use synchronous grammars to transform a variable-free, tree-structured meaning representation into sentences.
Related Work
…Random Field to generate from the same meaning representations.
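The flavor of such tree-to-string generation can be sketched with hand-written paired rules. These rules and the example MR are invented here; Wong and Mooney's synchronous grammars are learned from data and far more expressive.

    # Toy generation from a variable-free, tree-structured MR using
    # paired MR/NL rules (invented, Geoquery-flavored example).
    MR = ("answer", ("capital", ("state", "texas")))

    RULES = {
        # MR functor -> NL template; %1 realizes the argument
        "answer":  "%1",
        "capital": "the capital of %1",
        "state":   "the state of %1",
    }

    def generate(mr):
        if isinstance(mr, str):              # leaf: a constant
            return mr.capitalize()
        functor, arg = mr                    # unary tree node
        return RULES[functor].replace("%1", generate(arg))

    print(generate(MR))  # the capital of the state of Texas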
meaning representations is mentioned in 3 sentences in this paper.
Riezler, Stefan and Simianer, Patrick and Haas, Carolin
Grounding SMT in Semantic Parsing
Embedding SMT in a semantic parsing scenario means defining translation quality by the ability of a semantic parser to construct, from the translated query, a meaning representation that returns the correct answer when executed against the database.
Related Work
For example, in semantic parsing, the learning goal is to produce and successfully execute a meaning representation .
Response-based Online Learning
…(2010) or Goldwasser and Roth (2013) describe a response-driven learning framework for the area of semantic parsing: here a meaning representation is “tried out” by iteratively generating system outputs, receiving feedback from world interaction, and updating the model parameters.
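Schematically, such a response-driven loop can be sketched as follows. The candidates, features, feedback oracle, and perceptron-style update are all stand-ins for the cited systems' actual components.

    # Schematic response-driven loop: predict an output, obtain world
    # feedback (did its meaning representation execute to the correct
    # answer?), and update toward feedback-approved outputs.
    def features(candidate):
        return {("contains", tok): 1.0 for tok in candidate.split()}

    def score(weights, candidate):
        return sum(weights.get(f, 0.0) * v for f, v in features(candidate).items())

    def learn(candidates, feedback, epochs=5, lr=1.0):
        """feedback(c) -> True iff c's parse returns the correct answer."""
        weights = {}
        for _ in range(epochs):
            predicted = max(candidates, key=lambda c: score(weights, c))
            if feedback(predicted):
                continue  # correct response: no update needed
            good = [c for c in candidates if feedback(c)]
            if not good:
                continue  # no positive signal this round
            hoped = max(good, key=lambda c: score(weights, c))
            for f, v in features(hoped).items():      # promote approved output
                weights[f] = weights.get(f, 0.0) + lr * v
            for f, v in features(predicted).items():  # demote rejected output
                weights[f] = weights.get(f, 0.0) - lr * v
        return weights

    cands = ["largest state in texas", "largest city in texas"]
    w = learn(cands, feedback=lambda c: "city" in c)
    print(max(cands, key=lambda c: score(w, c)))  # largest city in texas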
meaning representations is mentioned in 3 sentences in this paper.