Index of papers in Proc. ACL 2014 that mention
  • semantic representations
Hermann, Karl Moritz and Blunsom, Phil
Abstract
We present a novel technique for learning semantic representations, which extends the distributional hypothesis to multilingual data and joint-space embeddings.
Abstract
We extend our approach to learn semantic representations at the document level, too.
Approach
Most prior work on learning compositional semantic representations employs parse trees on their training data to structure their composition functions (Socher et al., 2012; Hermann and Blunsom, 2013, inter alia).
Approach
We utilise this diversity to abstract further from monolingual surface realisations to deeper semantic representations.
Approach
Assume two functions f : X → R^d and g : Y → R^d, which map sentences from languages x and y onto distributed semantic representations in R^d.
Introduction
We present a novel unsupervised technique for learning semantic representations that leverages parallel corpora and employs semantic transfer through compositional representations.
Introduction
The results on this task, in comparison with a number of strong baselines, further demonstrate the relevance of our approach and the success of our method in learning multilingual semantic representations over a wide range of languages.
Overview
Here, we focus on learning semantic representations and investigate how the use of multilingual data can improve learning such representations at the word and higher level.
Overview
We describe a multilingual objective function that uses a noise-contrastive update between semantic representations of different languages to learn these word embeddings.
Overview
As part of this, we use a compositional vector model (CVM, henceforth) to compute semantic representations of sentences and documents.
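As a rough illustration of the setup quoted above, the sketch below pairs an additive compositional vector model with a noise-contrastive, margin-based comparison over two languages. The vocabulary, dimensionality, margin, and additive composition are assumptions for illustration, not the paper's exact model or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # embedding dimensionality (illustrative)

# Toy word-embedding tables for two languages (in practice these are learned).
vocab_en = {w: rng.normal(size=d) for w in "the dog barks cat sleeps".split()}
vocab_de = {w: rng.normal(size=d) for w in "der hund bellt katze schläft".split()}

def compose(sentence, vocab):
    """Additive CVM: a sentence representation is the sum of its word vectors."""
    return np.sum([vocab[w] for w in sentence.split()], axis=0)

def noise_contrastive_hinge(src, tgt, noise, margin=1.0):
    """Pull a parallel sentence pair together, push a non-parallel 'noise' sentence away."""
    e_pos = np.sum((compose(src, vocab_en) - compose(tgt, vocab_de)) ** 2)
    e_neg = np.sum((compose(src, vocab_en) - compose(noise, vocab_de)) ** 2)
    return max(0.0, margin + e_pos - e_neg)

# The loss is zero once the parallel pair is closer than the noise pair by the margin.
print(noise_contrastive_hinge("the dog barks", "der hund bellt", "katze schläft"))
```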
semantic representations is mentioned in 17 sentences in this paper.
Fyshe, Alona and Talukdar, Partha P. and Murphy, Brian and Mitchell, Tom M.
Experimental Results
This gives us a semantic representation of each of the 60 words in a 218-dimensional behavioral space.
Experimental Results
It is possible that some JNNSE(Brain+Text) dimensions are being used exclusively to fit brain activation data, and not the semantics represented in both brain and corpus data.
Experimental Results
This result shows that neural semantic representations can create a latent representation that is faithful to unseen corpus statistics, providing further evidence that the two data sources share a strong common element.
Introduction
For example, multiple word senses collide in the same vector, and noise from mis-parsed sentences or spam documents can interfere with the final semantic representation.
Introduction
In this work we focus on the scientific question: Can the inclusion of brain data improve semantic representations learned from corpus data?
Joint NonNegative Sparse Embedding
One could also use a topic model style formulation to represent this semantic representation task.
Joint NonNegative Sparse Embedding
The same idea could be applied here: the latent semantic representation generates the observed brain activity and corpus statistics.
Joint NonNegative Sparse Embedding
For example, models with behavioral data (Silberer and Lapata, 2012) and models with visual information (Bruni et al., 2011; Silberer et al., 2013) have both been shown to improve semantic representations.
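A minimal sketch of the shared-latent-representation idea quoted above: one latent matrix of word representations is asked to reconstruct both corpus-derived features and (partially observed) brain-derived features. The alternating least-squares updates and matrix sizes are illustrative only; the paper's JNNSE additionally imposes non-negativity and sparsity constraints that are omitted here.

```python
import numpy as np

rng = np.random.default_rng(1)
n_words, n_text, n_brain, k = 60, 50, 30, 10   # sizes are illustrative

X_text  = rng.random((n_words, n_text))        # corpus features for all words
X_brain = rng.random((n_words // 2, n_brain))  # brain features for a subset of words
A = rng.random((n_words, k))                   # shared latent semantic representations

for _ in range(20):
    # Given A, fit one mapping from latents to each view.
    D_text,  *_ = np.linalg.lstsq(A, X_text, rcond=None)
    D_brain, *_ = np.linalg.lstsq(A[: n_words // 2], X_brain, rcond=None)
    # Given both mappings, re-fit the latent word representations.
    both = np.linalg.lstsq(np.vstack([D_text.T, D_brain.T]),
                           np.hstack([X_text[: n_words // 2], X_brain]).T,
                           rcond=None)[0].T
    text_only = np.linalg.lstsq(D_text.T, X_text[n_words // 2:].T, rcond=None)[0].T
    A = np.vstack([both, text_only])

print("approx. text reconstruction error:", np.linalg.norm(A @ D_text - X_text))
```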
semantic representations is mentioned in 8 sentences in this paper.
Krishnamurthy, Jayant and Mitchell, Tom M.
Abstract
The trained parser produces a full syntactic parse of any sentence, while simultaneously producing logical forms for portions of the sentence that have a semantic representation within the parser’s predicate vocabulary.
Abstract
We demonstrate our approach by training a parser whose semantic representation contains 130 predicates from the NELL ontology.
Discussion
Our parser ASP produces a full syntactic parse of any sentence, while simultaneously producing logical forms for sentence spans that have a semantic representation within its predicate vocabulary.
Introduction
We suggest that a large populated knowledge base should play a key role in syntactic and semantic parsing: in training the parser, in resolving syntactic ambiguities when the trained parser is applied to new text, and in its output semantic representation.
Introduction
A semantic representation tied to a knowledge base allows for powerful inference operations — such as identifying the possible entity referents of a noun phrase — that cannot be performed with shallower representations (e.g., frame semantics (Baker et al., 1998) or a direct conversion of syntax to logic (Bos, 2005)).
Introduction
Our parser produces a full syntactic parse of every sentence, and furthermore produces logical forms for portions of the sentence that have a semantic representation within the parser’s predicate vocabulary.
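A toy sketch of that output contract: every sentence receives a (stand-in) syntactic analysis, while logical forms are produced only for spans whose words map into the parser's predicate vocabulary. The lexicon, the flat "parse", and the logical-form notation are invented for illustration and are not the actual CCG parser or NELL ontology.

```python
# Illustrative predicate vocabulary (stand-ins, not the actual NELL predicates).
PREDICATES = {"london": "city", "google": "company", "everest": "mountain"}

def analyze(sentence):
    """Return a trivial stand-in 'parse' plus logical forms for covered spans only."""
    tokens = sentence.lower().replace(".", "").split()
    parse = list(enumerate(tokens))              # placeholder for a full syntactic parse
    logical_forms = {}
    for i, tok in enumerate(tokens):
        if tok in PREDICATES:                    # span lies within the predicate vocabulary
            logical_forms[(i, i + 1)] = f"λx.{PREDICATES[tok]}(x) ∧ x = {tok.capitalize()}"
    return parse, logical_forms

parse, lfs = analyze("Google opened an office in London.")
print(lfs)   # only the 'Google' and 'London' spans receive logical forms
```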
Prior Work
This synergy gives our parser a richer semantic representation than previous work, while simultaneously enabling broad coverage.
semantic representations is mentioned in 7 sentences in this paper.
Bengoetxea, Kepa and Agirre, Eneko and Nivre, Joakim and Zhang, Yue and Gojenola, Koldo
Experimental Framework
Finally, we will describe the different types of semantic representation that were used.
Experimental Framework
We will experiment with the semantic representations used in Agirre et al.
Experimental Framework
We experiment with both full SSs and SFs as instances of fine-grained and coarse-grained semantic representation, respectively.
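The sketch below shows one plausible way such representations enter a parser: as an extra feature column per token, either fine-grained (synset-like, SS) or coarse-grained (semantic-file-like, SF). The tag inventories and layout are assumptions, not the experimental setup used in the paper.

```python
# Toy lookup tables: fine-grained synsets (SS) vs. coarse-grained semantic files (SF).
SS = {"dog": "dog.n.01", "runs": "run.v.01", "park": "park.n.01"}
SF = {"dog": "noun.animal", "runs": "verb.motion", "park": "noun.location"}

def add_semantic_feature(tokens, granularity="SF"):
    """Attach a semantic tag to each token, usable as an extra parser feature."""
    table = SF if granularity == "SF" else SS
    return [(tok, table.get(tok, "_")) for tok in tokens]

print(add_semantic_feature(["the", "dog", "runs", "in", "the", "park"], "SS"))
print(add_semantic_feature(["the", "dog", "runs", "in", "the", "park"], "SF"))
```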
semantic representations is mentioned in 5 sentences in this paper.
Kiela, Douwe and Hill, Felix and Korhonen, Anna and Clark, Stephen
Abstract
Models that learn semantic representations from both linguistic and perceptual input outperform text-only models in many contexts and better reflect human concept acquisition.
Experimental Approach
This model learns high quality lexical semantic representations based on the distributional properties of words in text, and has been shown to outperform simple distributional models on applications such as semantic composition and analogical mapping (Mikolov et al., 2013b).
Experimental Approach
The USF norms have been used in many previous studies to evaluate semantic representations (Andrews et al., 2009; Feng and Lapata, 2010; Silberer and Lapata, 2012; Roller and Schulte im Walde, 2013).
Introduction
Multi-modal models in which perceptual input is filtered according to our algorithm learn higher-quality semantic representations than previous approaches, resulting in a significant performance improvement of up to 17% in capturing …
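A hedged sketch of a filtered multi-modal representation: text features are always kept, while visual features are mixed in only when the concept's images agree with each other. The dispersion-style score (distance to the image centroid), the threshold, and the concatenation scheme are assumptions for illustration; they do not reproduce the paper's actual filtering algorithm.

```python
import numpy as np

def multimodal(text_vec, image_vecs, dispersion_threshold=0.5):
    """Concatenate text and visual features, keeping the visual part only
    when the concept's images are mutually consistent (low dispersion)."""
    image_vecs = np.asarray(image_vecs)
    centroid = image_vecs.mean(axis=0)
    sims = image_vecs @ centroid / (
        np.linalg.norm(image_vecs, axis=1) * np.linalg.norm(centroid) + 1e-9)
    dispersion = float(1.0 - sims.mean())          # high dispersion = inconsistent images
    visual = centroid if dispersion < dispersion_threshold else np.zeros_like(centroid)
    return np.concatenate([text_vec, visual])

rng = np.random.default_rng(2)
print(multimodal(rng.normal(size=10), rng.normal(size=(5, 10))).shape)   # (20,)
```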
semantic representations is mentioned in 4 sentences in this paper.
Packard, Woodley and Bender, Emily M. and Read, Jonathon and Oepen, Stephan and Dridan, Rebecca
System Description
This system operates over the normalized semantic representations provided by the LinGO English Resource Grammar (ERG; Flickinger, 2000). The ERG maps surface strings to meaning representations in the format of Minimal Recursion Semantics (MRS; Copestake et al., 2005).
System Description
Our crawling rules operate on semantic representations, but the annotations are with reference to the surface string.
System Description
In terms of our operations defined over semantic representations, this is rendered as follows: all arguments of the negated verb are selected by argument crawling, all intersective modifiers by label crawling, and functor crawling (Fig.
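A toy sketch of "argument crawling" over an MRS-like structure reduced to a list of predications. Starting from the negation, the crawl follows handle-valued arguments to collect the predications in its scope; label crawling and functor crawling are left out, and the data layout is a simplification of the ERG/MRS machinery, not the system's actual rules.

```python
# A toy MRS-like structure: each predication has a label, a predicate, and arguments.
PREDICATIONS = [
    {"label": "h1", "pred": "neg",     "args": {"ARG1": "h2"}},
    {"label": "h2", "pred": "_bark_v", "args": {"ARG1": "x1"}},
    {"label": "h3", "pred": "_dog_n",  "args": {"ARG0": "x1"}},
    {"label": "h3", "pred": "_big_a",  "args": {"ARG1": "x1"}},  # intersective modifier
]

def argument_crawl(start_label):
    """Collect every predication label reachable by following argument links."""
    selected, frontier = set(), {start_label}
    while frontier:
        label = frontier.pop()
        selected.add(label)
        for ep in PREDICATIONS:
            if ep["label"] == label:
                for value in ep["args"].values():
                    if value.startswith("h") and value not in selected:
                        frontier.add(value)   # follow handle-valued arguments only
    return selected

print(argument_crawl("h1"))   # the negation plus the negated verb's predication
```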
semantic representations is mentioned in 4 sentences in this paper.
McKinley, Nathan and Ray, Soumya
Related Work
Any combination which contains a semantic representation equivalent to the input at the conclusion of the algorithm is a valid output from a chart generation system.
Related Work
This is then used by a surface realization module which encodes the enriched semantic representation into natural language.
Sentence Tree Realization with UCT
For instance, a communicative goal of ‘red(d), dog(d)’ (in English, “say anything about a dog which is red.”) would match a sentence with the semantic representation ‘red(subj), dog(subj), cat(obj), chased(subj, obj)’, like “The red dog chased the cat”.
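A small sketch of the matching relation described in that example: a sentence's semantic representation satisfies a communicative goal if every goal literal can be mapped onto a sentence literal under a consistent renaming of variables. The literal syntax and the brute-force search over renamings are simplifications for illustration.

```python
import re
from itertools import permutations

def parse(literals):
    """'red(d), dog(d)' -> [('red', ('d',)), ('dog', ('d',))]"""
    return [(pred, tuple(a.strip() for a in args.split(",")))
            for pred, args in re.findall(r"(\w+)\(([^)]*)\)", literals)]

def matches(goal, sentence):
    """True if some variable renaming maps every goal literal onto a sentence literal."""
    g, s = parse(goal), parse(sentence)
    g_vars = sorted({v for _, args in g for v in args})
    s_vars = sorted({v for _, args in s for v in args})
    for perm in permutations(s_vars, len(g_vars)):
        rename = dict(zip(g_vars, perm))
        if all((p, tuple(rename[a] for a in args)) in s for p, args in g):
            return True
    return False

print(matches("red(d), dog(d)", "red(subj), dog(subj), cat(obj), chased(subj, obj)"))  # True
print(matches("red(c), cat(c)", "red(subj), dog(subj), cat(obj), chased(subj, obj)"))  # False
```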
semantic representations is mentioned in 3 sentences in this paper.
Narayan, Shashi and Gardent, Claire
Abstract
First, it is semantics-based in that it takes as input a deep semantic representation rather than, e.g., a sentence or a parse tree.
Introduction
While previous simplification approaches start from either the input sentence or its parse tree, our model takes as input a deep semantic representation, namely the Discourse Representation Structure (DRS; Kamp, 1981) assigned by Boxer (Curran et al., 2007) to the input complex sentence.
Simplification Framework
By handling deletion using a probabilistic model trained on semantic representations, we can avoid deleting obligatory arguments.
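A minimal sketch of why deleting over a semantic representation helps: optional conditions (modifiers) may be dropped, while conditions marked as obligatory arguments are always kept. The DRS-like encoding, the obligatory flags, and the deletion probability are invented for illustration and stand in for the paper's trained probabilistic model.

```python
import random

# DRS-like conditions for "The tired man quickly opened the old door."
CONDITIONS = [
    {"rel": "agent",   "args": ("e1", "x1"), "obligatory": True},
    {"rel": "patient", "args": ("e1", "x2"), "obligatory": True},
    {"rel": "tired",   "args": ("x1",),      "obligatory": False},
    {"rel": "quickly", "args": ("e1",),      "obligatory": False},
    {"rel": "old",     "args": ("x2",),      "obligatory": False},
]

def simplify(conditions, p_delete=0.7, seed=0):
    """Probabilistically drop optional conditions; never drop obligatory arguments."""
    rng = random.Random(seed)
    return [c for c in conditions if c["obligatory"] or rng.random() > p_delete]

for c in simplify(CONDITIONS):
    print(c["rel"], c["args"])
```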
semantic representations is mentioned in 3 sentences in this paper.
Paperno, Denis and Pham, Nghia The and Baroni, Marco
The practical lexical function model
The general form of a semantic representation for a linguistic unit is an ordered tuple of a vector and n ∈ ℕ matrices:
The practical lexical function model
The form of semantic representations we are using is shown in Table 1.
The practical lexical function model
The semantic representations we propose include a semantic vector for constituents of any semantic type, thus enabling semantic comparison for words of different parts of speech (the case of demolition vs. demolish).
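A compact sketch of representations shaped like those tuples: every word carries a semantic vector, and a word taking n arguments additionally carries n matrices; composition adds the functor's vector to matrix-transformed argument vectors. The dimensionality, random lexicon, and simplified composition rule are illustrative, not the full practical lexical function calculus.

```python
import numpy as np

rng = np.random.default_rng(3)
d = 4   # dimensionality (illustrative)

def word(n_args):
    """A lexical item: (semantic vector, one matrix per syntactic argument)."""
    return rng.normal(size=d), [rng.normal(size=(d, d)) for _ in range(n_args)]

dog   = word(0)   # nouns: just a vector
cat   = word(0)
chase = word(2)   # transitive verb: vector plus subject and object matrices

def compose(functor, arguments):
    """Add the functor vector to each argument vector transformed by its matrix."""
    vec, mats = functor
    return vec + sum(M @ arg_vec for M, (arg_vec, _) in zip(mats, arguments))

sentence_vec = compose(chase, [dog, cat])   # "dogs chase cats" as a single vector
print(sentence_vec.shape)                   # (4,): comparable across parts of speech
```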
semantic representations is mentioned in 3 sentences in this paper.
Silberer, Carina and Lapata, Mirella
Autoencoders for Grounded Semantics
3.2 Semantic Representations
Introduction
In general, these models specify mechanisms for constructing semantic representations from text corpora based on the distributional hypothesis (Harris, 1970): words that appear in similar linguistic contexts are likely to have related meanings.
Introduction
Our model uses stacked autoencoders (Bengio et al., 2007) to induce semantic representations integrating visual and textual information.
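A forward-pass sketch of a stacked, bimodal encoder in that spirit: each modality is encoded separately, the codes are concatenated, and a shared layer produces the multimodal representation. Layer sizes are arbitrary and the weights are untrained here; the paper's model is trained layer-wise with autoencoder reconstruction objectives, which this sketch does not implement.

```python
import numpy as np

rng = np.random.default_rng(4)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def layer(n_in, n_out):
    return rng.normal(scale=0.1, size=(n_out, n_in)), np.zeros(n_out)

# Modality-specific encoders, then a shared layer over the concatenated codes.
W_text,   b_text   = layer(100, 30)   # textual attributes -> text code
W_visual, b_visual = layer(50, 30)    # visual attributes  -> visual code
W_joint,  b_joint  = layer(60, 20)    # [text code; visual code] -> joint representation

def encode(text_vec, visual_vec):
    h_t = sigmoid(W_text @ text_vec + b_text)
    h_v = sigmoid(W_visual @ visual_vec + b_visual)
    return sigmoid(W_joint @ np.concatenate([h_t, h_v]) + b_joint)

rep = encode(rng.random(100), rng.random(50))
print(rep.shape)   # (20,): the induced multimodal semantic representation
```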
semantic representations is mentioned in 3 sentences in this paper.
Tian, Ran and Miyao, Yusuke and Matsuzaki, Takuya
Conclusion and Discussion
Other directions of our future work include further exploitation of the new semantic representation.
Experiments
Since our system uses an off-the-shelf dependency parser, and semantic representations are obtained from a simple rule-based conversion from dependency trees, there will be only one (right or wrong) interpretation in the face of ambiguous sentences.
The Idea
Optimistically, we believe DCS can provide a framework of semantic representation with sufficiently wide coverage for real-world texts.
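A toy sketch of a rule-based conversion from dependency arcs to a DCS-style tree: content words become nodes and selected arc labels become semantic roles. The arc set, the label-to-role table, and the output format are assumptions for illustration, not the paper's conversion rules.

```python
# Dependency arcs for "A dog chased the cat": (head, label, dependent).
ARCS = [("chased", "nsubj", "dog"), ("chased", "dobj", "cat"),
        ("dog", "det", "A"), ("cat", "det", "the")]

# Conversion rules: which dependency labels become DCS-style semantic roles.
LABEL_TO_ROLE = {"nsubj": "SBJ", "dobj": "OBJ"}

def to_dcs(arcs, root="chased"):
    """Build a DCS-like tree with content words as nodes and roles as edge labels."""
    children = [(LABEL_TO_ROLE[lbl], to_dcs(arcs, dep))
                for head, lbl, dep in arcs
                if head == root and lbl in LABEL_TO_ROLE]
    return {"word": root, "children": children}

print(to_dcs(ARCS))   # chased with an SBJ edge to 'dog' and an OBJ edge to 'cat'
```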
semantic representations is mentioned in 3 sentences in this paper.
Zhang, Jiajun and Liu, Shujie and Li, Mu and Zhou, Ming and Zong, Chengqing
Bilingually-constrained Recursive Auto-encoders
Fortunately, we know that the two phrases should share the same semantic representation if they express the same meaning.
Bilingually-constrained Recursive Auto-encoders
The above equation also indicates that the source-side parameters θ_s can be optimized independently as long as the semantic representation p_t of the target phrase t is given to compute E_sem(s|t, θ) with Eq.
Discussions
For example, as each node in the recursive auto-encoder shares the same weight matrix, the BRAE model would become weak at learning the semantic representations for long sentences with tens of words.
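A minimal sketch of the bilingual constraint quoted above: the semantic error for a source phrase is its distance to the fixed representation of the aligned target phrase, so the source-side parameters can be updated with the target representation held constant. The averaging "encoder", the loss, and the update rule are simplified stand-ins for the recursive auto-encoder and its training.

```python
import numpy as np

rng = np.random.default_rng(5)
d = 6

# Toy source-side parameters: word vectors standing in for the recursive auto-encoder.
theta_s = {w: rng.normal(size=d) for w in "le chien noir".split()}

def encode_source(phrase):
    """Stand-in composition producing the source phrase representation p_s."""
    return np.mean([theta_s[w] for w in phrase.split()], axis=0)

def e_sem(phrase, p_t):
    """Semantic error: distance between p_s and the fixed target representation p_t."""
    return 0.5 * np.sum((encode_source(phrase) - p_t) ** 2)

p_t = rng.normal(size=d)                   # representation of the aligned target phrase
for _ in range(100):                       # gradient steps on the source side only
    grad = (encode_source("le chien noir") - p_t) / 3.0   # dE_sem/d(each word vector)
    for w in theta_s:
        theta_s[w] -= 0.2 * grad
print(round(float(e_sem("le chien noir", p_t)), 4))   # error decreases with p_t held fixed
```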
semantic representations is mentioned in 3 sentences in this paper.