Index of papers in Proc. ACL 2014 that mention
  • semantic space
Lazaridou, Angeliki and Bruni, Elia and Baroni, Marco
Experimental Setup
4.2 Visual Semantic Spaces
Experimental Setup
4.3 Linguistic Semantic Spaces
Introduction
This is achieved by means of a simple neural network trained to project image-extracted feature vectors to text-based vectors through a hidden layer that can be interpreted as a cross-modal semantic space.
Introduction
We first test the effectiveness of our cross-modal semantic space on the so-called zero-shot learning task (Palatucci et al., 2009), which has recently been explored in the machine learning community (Frome et al., 2013; Socher et al., 2013).
Introduction
We show that the induced cross-modal semantic space is powerful enough that sensible guesses about the correct word denoting an object can be made, even when the linguistic context vector representing the word has been created from as little as 1 sentence containing it.
Related Work
Most importantly, by projecting visual representations of objects into a shared semantic space, we do not limit ourselves to establishing a link between objects and words.
Related Work
(2013) focus on zero-shot learning in the vision-language domain by exploiting a shared visual-linguistic semantic space.
Related Work
(2013) learn to project unsupervised vector-based image representations onto a word-based semantic space using a neural network architecture.
Zero-shot learning and fast mapping
Concretely, we assume that concepts, denoted for convenience by word labels, are represented in linguistic terms by vectors in a text-based distributional semantic space (see Section 4.3).
Zero-shot learning and fast mapping
Objects corresponding to concepts are represented in visual terms by vectors in an image-based semantic space (Section 4.2).
semantic space is mentioned in 18 sentences in this paper.
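The projection described in the Introduction excerpts above (image vectors mapped into the text-based space, then used for zero-shot labelling) can be sketched as follows. This is a simplified stand-in, not the paper's exact model: the paper trains a neural network with a hidden layer, whereas this sketch uses a ridge-regression map; all data, dimensions, and names are illustrative.

```python
import numpy as np

def fit_projection(V, T, lam=1.0):
    """Ridge-regularized least-squares map W with V @ W ~= T.

    V: (n, dv) image-extracted feature vectors
    T: (n, dt) text-based vectors for the same concepts
    """
    dv = V.shape[1]
    return np.linalg.solve(V.T @ V + lam * np.eye(dv), V.T @ T)

def zero_shot_label(image_vec, W, word_vecs, vocab):
    """Project an image into the text space; return the nearest word label."""
    q = image_vec @ W
    sims = word_vecs @ q / (np.linalg.norm(word_vecs, axis=1)
                            * np.linalg.norm(q) + 1e-9)
    return vocab[int(np.argmax(sims))]

# Toy run with random data (the label for an unseen object is guessed
# by nearest-neighbour search over the whole word vocabulary).
rng = np.random.default_rng(0)
V, T = rng.normal(size=(50, 30)), rng.normal(size=(50, 20))
W = fit_projection(V, T)
word_vecs = rng.normal(size=(3, 20))
print(zero_shot_label(V[0], W, word_vecs, ["wampimuk", "hammer", "dog"]))
```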
Perek, Florent
Application of the vector-space model
One of the advantages conferred by the quantification of semantic similarity is that lexical items can be precisely considered in relation to each other. By aggregating the similarity information for all items in the distribution, we can produce a visual representation of the structure of the semantic domain of the construction, observe how verbs in that domain are related to each other, and immediately identify the regions of the semantic space that are densely populated (with tight clusters of verbs) and those that are more sparsely populated (fewer and/or more scattered verbs).
Application of the vector-space model
Outside of these two clusters, the semantic space is much more sparsely populated.
Application of the vector-space model
In sum, the semantic plots show that densely populated regions of the semantic space appear to be the most likely to attract new members.
Distributional measure of semantic similarity
The resulting matrix, which contains the distributional information (in 4,683 columns) for 92 verbs occurring in the hell-construction, constitutes the semantic space under consideration in this case study.
Distributional measure of semantic similarity
Besides, using the same data presents the advantage that the distribution is modeled with the same semantic space in all time periods, which makes it easier to visualize changes.
Introduction
Coverage relates to how the semantic domain of a construction is populated in the vicinity of a given target coinage, and in particular to the density of the semantic space.
semantic space is mentioned in 14 sentences in this paper.
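A minimal sketch of the kind of "semantic plot" the excerpts above describe: pairwise distances between verb vectors in the distributional space, reduced to two dimensions so that densely and sparsely populated regions become visible. The choice of MDS for the 2-D layout is an assumption for illustration, and the random matrix merely stands in for the 92-verb, 4,683-column space mentioned above.

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.metrics.pairwise import cosine_distances

# Rows: verbs occurring in the construction; columns: distributional features.
rng = np.random.default_rng(1)
verb_vectors = rng.normal(size=(92, 500))

D = cosine_distances(verb_vectors)   # pairwise distances in the semantic space
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=1).fit_transform(D)
# coords is a 2-D layout of the 92 verbs: tight clusters of points mark
# densely populated regions of the space, scattered points sparse ones.
```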
Dinu, Georgiana and Baroni, Marco
Conclusion
(2012) reconstruct phrase tables based on phrase similarity scores in semantic space.
Introduction
Recent work on grounding language in vision shows that it is possible to represent images and linguistic expressions in a common vector-based semantic space (Frome et al., 2013; Socher et al., 2013).
Introduction
Translation is another potential application of the generation framework: Given a semantic space shared between two or more languages, one can compose a word sequence in one language and generate translations in another, with the shared semantic vector space functioning as interlingua.
Noun phrase translation
Creation of cross-lingual vector spaces: A common semantic space is required in order to map words and phrases across languages.
Noun phrase translation
Cross-lingual decomposition training: Training proceeds as in the monolingual case, this time concatenating the training data sets and estimating a single (de)composition function for the two languages in the shared semantic space.
semantic space is mentioned in 5 sentences in this paper.
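One standard recipe for the "common semantic space" the excerpts above call for is to fit a linear map between two monolingual vector spaces on a seed dictionary and then translate by nearest-neighbour search. This is a hedged sketch of that general recipe, not the paper's generation framework; all names and shapes are illustrative.

```python
import numpy as np

def align_spaces(X_src, X_tgt):
    """Least-squares map W with X_src @ W ~= X_tgt, fit on rows that are
    known translation pairs (a seed dictionary)."""
    W, *_ = np.linalg.lstsq(X_src, X_tgt, rcond=None)
    return W

def translate(vec, W, tgt_matrix, tgt_vocab, k=3):
    """Map a source vector into the shared space; return k nearest targets."""
    q = vec @ W
    sims = tgt_matrix @ q / (np.linalg.norm(tgt_matrix, axis=1)
                             * np.linalg.norm(q) + 1e-9)
    return [tgt_vocab[i] for i in np.argsort(-sims)[:k]]
```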
Fyshe, Alona and Talukdar, Partha P. and Murphy, Brian and Mitchell, Tom M.
Experimental Results
We compared JNNSE(Brain+Text) and NNSE(Text) models by measuring the correlation of all pairwise distances in JNNSE(Brain+Text) and NNSE(Text) space to the pairwise distances in the 218-dimensional semantic space.
Experimental Results
Figure 1: Correlation of JNNSE(Brain+Text) and NNSE(Text) models with the distances in a semantic space constructed from behavioral data.
Experimental Results
Words in the same word category (e.g. screwdriver and hammer) are closer in semantic space than words in different word categories, which makes some 2 vs. 2 tests more difficult than others.
NonNegative Sparse Embedding
The sparse and nonnegative representation in A produces a more interpretable semantic space, where interpretability is quantified with a behavioral task (Chang et al., 2009; Murphy et al., 2012a).
semantic space is mentioned in 5 sentences in this paper.
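The evaluation in the first excerpt, correlating all pairwise distances in one space with those in another, can be reproduced in a few lines of representational-similarity-style code. The data here is random and purely illustrative.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
space_a = rng.normal(size=(60, 218))   # e.g. a 218-dimensional semantic space
space_b = rng.normal(size=(60, 100))   # e.g. an embedding of the same 60 words

# Correlate the two vectors of all pairwise cosine distances.
r, p = pearsonr(pdist(space_a, "cosine"), pdist(space_b, "cosine"))
print(f"correlation of pairwise distances: r={r:.3f}")
```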
Hermann, Karl Moritz and Blunsom, Phil
Conclusion
Further experiments and analysis support our hypothesis that bilingual signals are a useful tool for learning distributed representations by enabling models to abstract away from monolingual surface realisations into a deeper semantic space .
Experiments
This setting causes words from all languages to be embedded in a single semantic space .
Experiments
These results further support our hypothesis that the bilingual contrastive error function can learn semantically plausible embeddings and, furthermore, that it can abstract away from monolingual surface realisations into a shared semantic space across languages.
Introduction
Unlike most methods for learning word representations, which are restricted to a single language, our approach learns to represent meaning across languages in a shared multilingual semantic space .
Related Work
(2013), that learn embeddings across a large variety of languages, and models such as ours, that learn joint embeddings, that is, a projection into a shared semantic space across multiple languages.
semantic space is mentioned in 5 sentences in this paper.
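The "bilingual contrastive error" in the excerpts above is, in essence, a margin loss that pulls aligned sentence pairs together in the shared space and pushes random non-translations apart. A minimal numpy sketch of such a hinge term follows; the exact loss form, margin, and sentence composition are assumptions for illustration.

```python
import numpy as np

def contrastive_error(a, b, n, margin=1.0):
    """Hinge loss: an aligned pair (a, b) should be closer in the shared
    semantic space than (a, n), where n is a random non-translation."""
    d_pos = np.sum((a - b) ** 2)
    d_neg = np.sum((a - n) ** 2)
    return max(0.0, margin + d_pos - d_neg)

# Sentence vectors might be, e.g., sums of word embeddings in each language.
rng = np.random.default_rng(3)
a, b, n = rng.normal(size=(3, 128))
print(contrastive_error(a, b, n))
```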
Zhang, Jiajun and Liu, Shujie and Li, Mu and Zhou, Ming and Zong, Chengqing
Conclusions and Future Work
…learn the transformation of the semantic space in one language to the other.
Experiments
Given a phrase pair (s, t), the BRAE model first obtains their semantic phrase representations (p_s, p_t), and then transforms p_s into the target semantic space as p_s* and p_t into the source semantic space as p_t*.
Introduction
Furthermore, a transformation function between the Chinese and English semantic spaces can be learned as well.
Related Work
Although we also follow the composition-based phrase embedding, we are the first to focus on the semantic meanings of the phrases and propose a bilingually-constrained model to induce the semantic information and learn transformation of the semantic space in one language to the other.
semantic space is mentioned in 4 sentences in this paper.
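A sketch of the bidirectional scoring implied by the Experiments excerpt: each phrase embedding is transformed into the other language's semantic space and compared with its counterpart. The linear transforms and the averaging are assumptions for illustration; the BRAE model itself learns these components jointly.

```python
import numpy as np

def cos(u, v):
    """Cosine similarity with a small epsilon for numerical safety."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

def semantic_score(p_s, p_t, W_st, W_ts):
    """Transform p_s into the target space (p_s @ W_st) and p_t into the
    source space (p_t @ W_ts); score the pair by similarity both ways."""
    return 0.5 * (cos(p_s @ W_st, p_t) + cos(p_t @ W_ts, p_s))
```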
Baroni, Marco and Dinu, Georgiana and Kruszewski, Germán
Conclusion
Add to this that, beyond the standard lexical semantics challenges we tested here, predict models are currently being successfully applied in cutting-edge domains such as representing phrases (Mikolov et al., 2013c; Socher et al., 2012) or fusing language and vision in a common semantic space (Frome et al., 2013; Socher et al., 2013).
Evaluation materials
(2012) with a method that is in the spirit of the predict models, but lets synonymy information from WordNet constrain the learning process (by favoring solutions in which WordNet synonyms are near in semantic space).
Evaluation materials
Systems are evaluated in terms of proportion of questions where the nearest neighbour from the whole semantic space is the correct answer (the given example and test vector triples are excluded from the nearest neighbour search).
semantic space is mentioned in 3 sentences in this paper.
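The nearest-neighbour evaluation in the last excerpt can be sketched as the usual vector-offset analogy test, with the question words excluded from the search as the excerpt specifies. Vocabulary and data are illustrative.

```python
import numpy as np

def analogy(a, b, c, vectors, vocab):
    """Answer 'a is to b as c is to ?' with the nearest neighbour of
    vec(b) - vec(a) + vec(c), skipping the question words themselves."""
    idx = {w: i for i, w in enumerate(vocab)}
    q = vectors[idx[b]] - vectors[idx[a]] + vectors[idx[c]]
    sims = vectors @ q / (np.linalg.norm(vectors, axis=1)
                          * np.linalg.norm(q) + 1e-9)
    for i in np.argsort(-sims):
        if vocab[i] not in (a, b, c):
            return vocab[i]

# Accuracy is then the proportion of questions answered correctly.
```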