Index of papers in Proc. ACL that mention
  • semantic representations
Hermann, Karl Moritz and Blunsom, Phil
Abstract
We present a novel technique for learning semantic representations, which extends the distributional hypothesis to multilingual data and joint-space embeddings.
Abstract
We extend our approach to learn semantic representations at the document level, too.
Approach
Most prior work on learning compositional semantic representations employs parse trees on their training data to structure their composition functions (Socher et al., 2012; Hermann and Blunsom, 2013, inter alia).
Approach
We utilise this diversity to abstract further from monolingual surface realisations to deeper semantic representations.
Approach
Assume two functions f : X → R^d and g : Y → R^d, which map sentences from languages x and y onto distributed semantic representations in R^d.
Introduction
We present a novel unsupervised technique for learning semantic representations that leverages parallel corpora and employs semantic transfer through compositional representations.
Introduction
The results on this task, in comparison with a number of strong baselines, further demonstrate the relevance of our approach and the success of our method in learning multilingual semantic representations over a wide range of languages.
Overview
Here, we focus on learning semantic representations and investigate how the use of multilingual data can improve learning such representations at the word and higher level.
Overview
We describe a multilingual objective function that uses a noise-contrastive update between semantic representations of different languages to learn these word embeddings.
Overview
As part of this, we use a compositional vector model (CVM, henceforth) to compute semantic representations of sentences and documents.
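A minimal sketch of what such a noise-contrastive objective over joint-space sentence representations might look like (the notation and the exact loss below are assumptions for illustration, not quoted from the paper): given composed representations f(a) and g(b) for an aligned sentence pair (a, b) and a randomly drawn noise sentence n from the second language, define

    E_bi(a, b) = || f(a) - g(b) ||^2
    J(a, b, n) = max(0, m + E_bi(a, b) - E_bi(a, n))

so that minimising J pulls translations together in the shared space while pushing random sentence pairs at least a margin m apart.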
semantic representations is mentioned in 17 sentences in this paper.
Topics mentioned in this paper:
Chang, Kai-min K. and Cherkassky, Vladimir L. and Mitchell, Tom M. and Just, Marcel Adam
Abstract
Recent advances in functional Magnetic Resonance Imaging (fMRI) offer a significant new approach to studying semantic representations in humans by making it possible to directly observe brain activity while people comprehend words and sentences.
Brain Imaging Experiments on Adjective-Noun Comprehension
4 Using vector-based models of semantic representation to account for the systematic variances in neural activity
Brain Imaging Experiments on Adjective-Noun Comprehension
4.1 Lexical Semantic Representation
Brain Imaging Experiments on Adjective-Noun Comprehension
Table 3 shows the semantic representation for strong and dog.
Introduction
There have been a variety of approaches from different scientific communities trying to characterize semantic representations.
Introduction
Recent advances in functional Magnetic Resonance Imaging (fMRI) provide a significant new approach to studying semantic representations in humans by making it possible to directly observe brain activity while people comprehend words and sentences.
Introduction
Given these early successes in using fMRI to discriminate categorial information and to model lexical semantic representations of individual words, it is interesting to ask whether a similar approach can be used to study the representation of adjective-noun phrases.
semantic representations is mentioned in 12 sentences in this paper.
Topics mentioned in this paper:
Agirre, Eneko and Baldwin, Timothy and Martinez, David
Abstract
We devise a gold-standard sense- and parse tree-annotated dataset based on the intersection of the Penn Treebank and SemCor, and experiment with different approaches to both semantic representation and disambiguation.
Experimental setting
Below, we outline the dataset used in this research and the parser evaluation methodology, explain the methodology used to perform PP attachment, present the different options for semantic representation, and finally detail the disambiguation methods.
Experimental setting
The gold-standard sense annotations allow us to perform upper bound evaluation of the relative impact of a given semantic representation on parsing and PP attachment performance, to contrast with the performance in more realistic semantic disambiguation settings.
Experimental setting
4.3 Semantic representation
Integrating Semantics into Parsing
There are three main aspects that we have to consider in this process: (i) the semantic representation, (ii) semantic disambiguation, and (iii) morphology.
Integrating Semantics into Parsing
The more fine-grained our semantic representation, the higher the average polysemy and the greater the need to distinguish between these senses.
Introduction
We explore several models for semantic representation , based around WordNet (Fellbaum, 1998).
Introduction
In experimenting with different semantic representations, we require some strategy to disambiguate the semantic class of polysemous words in context (e.g., determining for each instance of crane whether it refers to an animal or a lifting device).
semantic representations is mentioned in 20 sentences in this paper.
Topics mentioned in this paper:
Koller, Alexander and Thater, Stefan
Abstract
A corpus-based evaluation with a large-scale grammar shows that our algorithm reduces over 80% of sentences to one or two readings, in negligible runtime, and thus makes it possible to work with semantic representations derived by deep large-scale grammars.
Conclusion
The algorithm presented here makes it possible, for the first time, to derive a single meaningful semantic representation from the syntactic analysis of a deep grammar on a large scale.
Conclusion
In the future, it will be interesting to explore how these semantic representations can be used in applications.
Conclusion
We could then perform such inferences on (cleaner) semantic representations, rather than strings (as they do).
Introduction
Over the past few years, there has been considerable progress in the ability of manually created large-scale grammars, such as the English Resource Grammar (ERG, Copestake and Flickinger (2000)) or the ParGram grammars (Butt et al., 2002), to parse wide-coverage text and assign it deep semantic representations.
Introduction
While applications should benefit from these very precise semantic representations, their usefulness is limited by the presence of semantic ambiguity: On the Rondane Treebank (Oepen et al., 2002), the ERG computes an average of several million semantic representations for each sentence, even when the syntactic analysis is fixed.
Introduction
We follow an underspecification approach to managing ambiguity: Rather than deriving all semantic representations from the syntactic analysis, we work with a single, compact underspecified semantic representation, from which the semantic representations can then be extracted by need.
Related work
The idea of deriving a single approximative semantic representation for ambiguous sentences goes back to Hobbs (1983); however, Hobbs only works his algorithm out for a restricted class of quantifiers, and his representations can be weaker than our weakest readings.
Related work
The work presented here is related to other approaches that reduce the set of readings of an underspecified semantic representation (USR).
Underspecification
Both of these formalisms can be used to model scope ambiguities compactly by regarding the semantic representations of a sentence as trees.
semantic representations is mentioned in 11 sentences in this paper.
Topics mentioned in this paper:
Titov, Ivan and Kozhevnikov, Mikhail
A Model of Semantics
Though the most likely alignment a_j for a fixed semantic representation m_j can be found efficiently using a Viterbi algorithm, computing the most probable pair (a_j, m_j) is still intractable.
A Model of Semantics
We use a modification of the beam search algorithm, where we keep a set of candidate meanings (partial semantic representations) and compute an alignment for each of them using a form of the Viterbi algorithm.
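A hedged sketch of this modified beam search (the helper functions expand_meaning and viterbi_align are hypothetical placeholders supplied by the caller, not names from the paper):

    def beam_search_meanings(text, initial_meaning, expand_meaning, viterbi_align,
                             beam_size=10, max_steps=20):
        """Keep a beam of candidate partial semantic representations; score each
        candidate by the best alignment a Viterbi-style routine finds for it."""
        beam = [initial_meaning]
        for _ in range(max_steps):
            scored = []
            for meaning in beam:
                for extended in expand_meaning(meaning):        # propose refinements
                    alignment, score = viterbi_align(text, extended)
                    scored.append((score, extended))
            if not scored:
                break
            scored.sort(key=lambda pair: pair[0], reverse=True)  # best-first
            beam = [meaning for _, meaning in scored[:beam_size]]
        return beam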
Abstract
We argue that groups of unannotated texts with overlapping and noncontradictory semantics represent a valuable source of information for learning semantic representations.
Abstract
A simple and efficient inference method recursively induces joint semantic representations for each group and discovers correspondence between lexical entries and latent semantic concepts.
Inference with NonContradictory Documents
Even though the dependencies are only conveyed via {m_j : j ≠ k}, the space of possible meanings m is very large even for relatively simple semantic representations, and, therefore, we need to resort to efficient approximations.
Inference with NonContradictory Documents
However, a major weakness of this algorithm is that decisions about components of the composite semantic representation (e.g., argument values) are made only on the basis of a single text, which first mentions the corresponding aspects, without consulting any future texts k’ > k, and these decisions cannot be revised later.
Introduction
Alternatively, if such groupings are not available, it may still be easier to give each semantic representation (or a state) to multiple annotators and ask each of them to provide a textual description, instead of annotating texts with semantic expressions.
Introduction
Unsupervised learning with shared latent semantic representations presents its own challenges, as exact inference requires marginalization over possible assignments of the latent semantic state, consequently, introducing nonlocal statistical dependencies between the decisions about the semantic structure of each text.
Related Work
Sentence and text alignment has also been considered in the related context of paraphrase extraction (see, e.g., (Dolan et al., 2004; Barzilay and Lee, 2003)) but this prior work did not focus on inducing or learning semantic representations.
Summary and Future Work
In this work we studied the use of weak supervision in the form of noncontradictory relations between documents in learning semantic representations.
Summary and Future Work
However, exact inference for groups of documents with overlapping semantic representation is generally prohibitively expensive, as the shared latent semantics introduces nonlocal dependences between semantic representations of individual documents.
semantic representations is mentioned in 12 sentences in this paper.
Topics mentioned in this paper:
Narisawa, Katsuma and Watanabe, Yotaro and Mizuno, Junta and Okazaki, Naoaki and Inui, Kentaro
Introduction
We describe a method of normalizing numerical expressions referring to the same amount in text into a unified semantic representation.
Related work
For instance, the context of 319 people in the sentence 319 people face a water shortage is “face” and “water shortage.” In order to extract and aggregate numerical expressions in various documents, we converted the numerical expressions into semantic representations (to be described in Section 4.1), and extracted their context (to be described in Section 4.2).
Related work
Numerical Expression | Semantic representation (Value | Unit | Mod.)
Related work
The first step for collecting numerical expressions is to recognize when a numerical expression is mentioned and then to normalize it into a semantic representation.
semantic representations is mentioned in 9 sentences in this paper.
Topics mentioned in this paper:
Fyshe, Alona and Talukdar, Partha P. and Murphy, Brian and Mitchell, Tom M.
Experimental Results
This gives us a semantic representation of each of the 60 words in a 218-dimensional behavioral space.
Experimental Results
It is possible that some JNNSE(Brain+Text) dimensions are being used exclusively to fit brain activation data, and not the semantics represented in both brain and corpus data.
Experimental Results
This result shows that neural semantic representations can create a latent representation that is faithful to unseen corpus statistics, providing further evidence that the two data sources share a strong common element.
Introduction
For example, multiple word senses collide in the same vector, and noise from mis-parsed sentences or spam documents can interfere with the final semantic representation.
Introduction
In this work we focus on the scientific question: Can the inclusion of brain data improve semantic representations learned from corpus data?
Joint NonNegative Sparse Embedding
One could also use a topic model style formulation to represent this semantic representation task.
Joint NonNegative Sparse Embedding
The same idea could be applied here: the latent semantic representation generates the observed brain activity and corpus statistics.
Joint NonNegative Sparse Embedding
For example, models with behavioral data (Silberer and Lapata, 2012) and models with visual information (Bruni et al., 2011; Silberer et al., 2013) have both been shown to improve semantic representations.
semantic representations is mentioned in 8 sentences in this paper.
Topics mentioned in this paper:
Titov, Ivan and Klementiev, Alexandre
Abstract
We argue that multilingual parallel data provides a valuable source of indirect supervision for induction of shallow semantic representations.
Conclusions
Although in this work we focused primarily on improving performance for each individual language, crosslingual semantic representation could be extracted by a simple postprocessing step.
Inference
This has been explored before for shallow semantic representations (Lang and Lapata, 2011a; Titov and Klementiev, 2011).
Introduction
The goal of this work is to show that parallel data is useful in unsupervised induction of shallow semantic representations.
Introduction
Though syntactic representations are often predictive of semantic roles (Levin, 1993), the interface between syntactic and semantic representations is far from trivial.
Related Work
However, most of this research has focused on induction of syntactic structures (Kuhn, 2004; Snyder et al., 2009) or morphologic analysis (Snyder and Barzilay, 2008) and we are not aware of any previous work on induction of semantic representations in the crosslingual setting.
Related Work
Learning of semantic representations in the context of monolingual weakly-parallel data was studied in Titov and Kozhevnikov (2010) but their setting was semi-supervised and they experimented only on a restricted domain.
Related Work
Semi-supervised and weakly-supervised techniques have also been explored for other types of semantic representations but these studies again have mostly focused on restricted domains (Kate and Mooney, 2007; Liang et al., 2009; Goldwasser et al., 2011; Liang et al., 2011).
semantic representations is mentioned in 8 sentences in this paper.
Topics mentioned in this paper:
Krishnamurthy, Jayant and Mitchell, Tom M.
Abstract
The trained parser produces a full syntactic parse of any sentence, while simultaneously producing logical forms for portions of the sentence that have a semantic representation within the parser’s predicate vocabulary.
Abstract
We demonstrate our approach by training a parser whose semantic representation contains 130 predicates from the NELL ontology.
Discussion
Our parser ASP produces a full syntactic parse of any sentence, while simultaneously producing logical forms for sentence spans that have a semantic representation within its predicate vocabulary.
Introduction
We suggest that a large populated knowledge base should play a key role in syntactic and semantic parsing: in training the parser, in resolving syntactic ambiguities when the trained parser is applied to new text, and in its output semantic representation.
Introduction
A semantic representation tied to a knowledge base allows for powerful inference operations — such as identifying the possible entity referents of a noun phrase — that cannot be performed with shallower representations (e.g., frame semantics (Baker et al., 1998) or a direct conversion of syntax to logic (Bos, 2005)).
Introduction
Our parser produces a full syntactic parse of every sentence, and furthermore produces logical forms for portions of the sentence that have a semantic representation within the parser’s predicate vocabulary.
Prior Work
This synergy gives our parser a richer semantic representation than previous work, while simultaneously enabling broad coverage.
semantic representations is mentioned in 7 sentences in this paper.
Topics mentioned in this paper:
Uematsu, Sumire and Matsuzaki, Takuya and Hanaoka, Hiroki and Miyao, Yusuke and Mima, Hideki
Background
1, each category is associated with a lambda term of semantic representations, and each combinatory rule is associated with rules for semantic composition.
Background
Since these rules are universal, we can obtain different semantic representations by switching the semantic representations of lexical categories.
Background
coordination and semantic representation in particular.
Corpus integration and conversion
12) must be used to construct the semantic representation, namely the PAS.
semantic representations is mentioned in 6 sentences in this paper.
Topics mentioned in this paper:
Blanco, Eduardo and Moldovan, Dan
Approach to Semantic Representation of Negation
Several options arise to thoroughly represent s. First, we find it useful to consider the semantic representation of the affirmative counterpart: AGENT(the cow, ate), THEME(grass, ate), and INSTRUMENT(with a fork, ate).
Approach to Semantic Representation of Negation
Table 2 depicts five different possible semantic representations.
Approach to Semantic Representation of Negation
It corresponds to the semantic representation of the affirmative counterpart after applying the pseudo-relation NOT over the focus of the negation.
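To make this concrete with the running example above, and assuming (as the focus-of-negation analysis suggests) that the focus of the negation in "The cow didn't eat grass with a fork" is the instrument phrase: the representation keeps AGENT(the cow, ate) and THEME(grass, ate) unchanged from the affirmative counterpart and wraps only the focused role, giving NOT(INSTRUMENT(with a fork, ate)), i.e., the cow did eat grass, just not with a fork.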
Conclusions
In this paper, we present a novel way to semantically represent negation using focus detection.
Negation in Natural Language
The main contributions are: (1) interpretation of negation using focus detection; (2) focus of negation annotation over all PropBank negated sentences; (3) feature set to detect the focus of negation; and (4) model to semantically represent negation and reveal its underlying positive meaning.
semantic representations is mentioned in 5 sentences in this paper.
Topics mentioned in this paper:
Bender, Emily M.
Abstract
Despite large typological differences between Wambaya and the languages on which the development of the resource was based, the Grammar Matrix is found to provide a significant jump-start in the creation of the grammar for Wambaya: With less than 5.5 person-weeks of development, the Wambaya grammar was able to assign correct semantic representations to 76% of the sentences in a naturally occurring text.
Background
The core type hierarchy defines the basic feature geometry, the ways that heads combine with arguments and adjuncts, linking types for relating syntactic to semantic arguments, and the constraints required to compositionally build up semantic representations in the format of Minimal Recursion Semantics (Copestake et al., 2005; Flickinger and Bender, 2003).
Background
To relate such discontinuous noun phrases to appropriate semantic representations where ‘having-
Wambaya grammar
The linguistic analyses encoded in the grammar serve to map the surface strings to semantic representations (in Minimal Recursion Semantics (MRS) format (Copestake et al., 2005)).
Wambaya grammar
This section has presented the Matrix-derived grammar of Wambaya, illustrating its semantic representations and analyses and measuring its performance against held-out data.
semantic representations is mentioned in 5 sentences in this paper.
Topics mentioned in this paper:
Mitchell, Jeff and Lapata, Mirella and Demberg, Vera and Keller, Frank
Discussion
For example, we could envisage a parser that uses semantic representations to guide its search, e.g., by pruning syntactic analyses that have a low semantic probability.
Introduction
2009); however, the semantic component of these models is limited to semantic role information, rather than attempting to build a full semantic representation for a sentence.
Models of Processing Difficulty
Importantly, composition models are not defined with a specific semantic space in mind, they could easily be adapted to LSA, or simple co-occurrence vectors, or more sophisticated semantic representations (e.g., Griffiths et al.
Models of Processing Difficulty
LDA is a probabilistic topic model offering an alternative to spatial semantic representations.
Results
Besides, replicating Pynte et al.’s (2008) finding, we were also interested in assessing whether the underlying semantic representation (simple semantic space or LDA) and composition function (additive versus multiplicative) modulate reading times differentially.
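Concretely, using the standard additive and multiplicative formulations, two word vectors u and v are composed as p = u + v under the additive model and component-wise as p_i = u_i * v_i under the multiplicative model; the comparison above asks whether reading times are better predicted when these composition functions operate over a simple co-occurrence-based semantic space or over LDA topic distributions.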
semantic representations is mentioned in 5 sentences in this paper.
Topics mentioned in this paper:
Espinosa, Dominic and White, Michael and Mehay, Dennis
Abstract
We call this approach hypertagging, as it operates at a level “above” the syntax, tagging semantic representations with syntactic lexical categories.
Background
This process involves converting the corpus to reflect more precise analyses, where feasible, and adding semantic representations to the lexical categories.
Conclusion
We have introduced a novel type of supertagger, which we have dubbed a hypertagger, that assigns CCG category labels to elementary predications in a structured semantic representation with high accuracy at several levels of tagging ambiguity in a fashion reminiscent of (Bangalore and Rambow, 2000).
Introduction
We have dubbed this approach hypertagging, as it operates at a level “above” the syntax, moving from semantic representations to syntactic categories.
Results and Discussion
As the effort to engineer a grammar suitable for realization from the CCGbank proceeds in parallel to our work on hypertagging, we expect the hypertagger-seeded realizer to continue to improve, since a more complete and precise extracted grammar should enable more complete realizations to be found, and richer semantic representations should
semantic representations is mentioned in 5 sentences in this paper.
Topics mentioned in this paper:
Bengoetxea, Kepa and Agirre, Eneko and Nivre, Joakim and Zhang, Yue and Gojenola, Koldo
Experimental Framework
Finally, we will describe the different types of semantic representation that were used.
Experimental Framework
We will experiment with the semantic representations used in Agirre et al.
Experimental Framework
We experiment with both full SSs and SFs as instances of fine-grained and coarse-grained semantic representation, respectively.
semantic representations is mentioned in 5 sentences in this paper.
Topics mentioned in this paper:
Silberer, Carina and Ferrari, Vittorio and Lapata, Mirella
Attribute-based Semantic Models
We evaluated the effectiveness of our attribute classifiers by integrating their predictions with traditional text-only models of semantic representation.
Attribute-based Semantic Models
(2004)) to learn a joint semantic representation from the textual and visual modalities.
Introduction
Visual input represents a major source of data from which humans can learn semantic representations of linguistic and nonlinguistic communicative actions (Regier, 1996).
Related Work
Grounding semantic representations with visual information is an instance of multimodal learning.
The Attribute Dataset
On average, each concept was annotated with 19 attributes; approximately 14.5 of these were not part of the semantic representation created by McRae et al.’s (2005) participants for that concept even though they figured in the representations of other concepts.
semantic representations is mentioned in 5 sentences in this paper.
Topics mentioned in this paper:
Wu, Xianchao and Matsuzaki, Takuya and Tsujii, Jun'ichi
Abstract
A head-driven phrase structure grammar (HPSG) parser is used to obtain the deep syntactic information, which includes a fine-grained description of the syntactic property and a semantic representation of a sentence.
Fine-grained rule extraction
The semantic representation of the new phrase is calculated at the same time.
Fine-grained rule extraction
Second, we can identify sub-trees in a parse tree/forest that correspond to basic units of the semantics, namely sub-trees covering a predicate and its arguments, by using the semantic representation given in the signs.
Introduction
deep syntactic information of an English sentence, which includes a fine-grained description of the syntactic property and a semantic representation of the sentence.
Related Work
The Logon project (Oepen et al., 2007) for Norwegian-English translation integrates in-depth grammatical analysis of Norwegian (using lexical functional grammar, similar to (Riezler and Maxwell, 2006)) with semantic representations in the minimal recursion semantics framework, and fully grammar-based generation for English using HPSG.
semantic representations is mentioned in 5 sentences in this paper.
Topics mentioned in this paper:
Koller, Alexander and Regneri, Michaela and Thater, Stefan
Expressive completeness and redundancy elimination
Koller and Thater (2006) define semantic equivalence in terms of a rewrite system that specifies under what conditions two quantifiers may exchange their positions without changing the meaning of the semantic representation.
Expressive completeness and redundancy elimination
Expressions of natural language itself are (extremely underspecified) descriptions of sets of semantic representations, and so Ebert’s argument applies to NL expressions as well.
Introduction
In the past few years, a “standard model” of scope underspecification has emerged: A range of formalisms from Underspecified DRT (Reyle, 1993) to dominance graphs (Althaus et al., 2003) have offered mechanisms to specify the “semantic material” of which the semantic representations are built up, plus dominance or outscoping relations between these building blocks.
Regular tree grammars
We can now use regular tree grammars in underspecification by representing the semantic representations as trees and taking an RTG G as an underspecified description of the trees in L(G).
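As a toy illustration of this use of RTGs (not an example from the paper): a grammar G with the rules S -> f(A, A), A -> a, A -> b compactly describes the tree language L(G) = {f(a,a), f(a,b), f(b,a), f(b,b)}; in the underspecification setting, each tree in L(G) stands for one fully resolved semantic representation (one reading), while G itself plays the role of the compact underspecified description.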
semantic representations is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Kiela, Douwe and Hill, Felix and Korhonen, Anna and Clark, Stephen
Abstract
Models that learn semantic representations from both linguistic and perceptual input outperform text-only models in many contexts and better reflect human concept acquisition.
Experimental Approach
This model learns high quality lexical semantic representations based on the distributional properties of words in text, and has been shown to outperform simple distributional models on applications such as semantic composition and analogical mapping (Mikolov et al., 2013b).
Experimental Approach
The USP norms have been used in many previous studies to evaluate semantic representations (Andrews et al., 2009; Feng and Lapata, 2010; Silberer and Lapata, 2012; Roller and Schulte im Walde, 2013).
Introduction
Multi-modal models in which perceptual input is filtered according to our algorithm learn higher-quality semantic representations than previous approaches, resulting in a significant performance improvement of up to 17% in captur-
semantic representations is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Packard, Woodley and Bender, Emily M. and Read, Jonathon and Oepen, Stephan and Dridan, Rebecca
System Description
This system operates over the normalized semantic representations provided by the LinGO English Resource Grammar (ERG; Flickinger, 2000). The ERG maps surface strings to meaning representations in the format of Minimal Recursion Semantics (MRS; Copestake et al., 2005).
System Description
Our crawling rules operate on semantic representations, but the annotations are with reference to the surface string.
System Description
In terms of our operations defined over semantic representations, this is rendered as follows: all arguments of the negated verb are selected by argument crawling, all intersective modifiers by label crawling, and functor crawling (Fig.
semantic representations is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Abend, Omri and Rappoport, Ari
Abstract
We present UCCA, a novel multilayered framework for semantic representation that aims to accommodate the semantic distinctions expressed through linguistic utterances.
Conclusion
This paper presented Universal Conceptual Cognitive Annotation (UCCA), a novel framework for semantic representation .
Introduction
An extensive comparison of UCCA to existing approaches to syntactic and semantic representation, focusing on the major resources available for English, is found in Section 5.
Related Work
Several annotated corpora offer a joint syntactic and semantic representation .
semantic representations is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Liang, Percy and Jordan, Michael and Klein, Dan
Abstract
In tackling this challenging learning problem, we introduce a new semantic representation which highlights a parallel between dependency syntax and efficient evaluation of logical forms.
Conclusion
Our system is based on a new semantic representation, DCS, which offers a simple and expressive alternative to lambda calculus.
Discussion
A major focus of this work is on our semantic representation, DCS, which offers a new perspective on compositional semantics.
Introduction
The main technical contribution of this work is a new semantic representation, dependency-based compositional semantics (DCS), which is both simple and expressive (Section 2).
semantic representations is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Narayan, Shashi and Gardent, Claire
Abstract
First, it is semantic based in that it takes as input a deep semantic representation rather than e.g., a sentence or a parse tree.
Introduction
While previous simplification approaches start from either the input sentence or its parse tree, our model takes as input a deep semantic representation, namely the Discourse Representation Structure (DRS, (Kamp, 1981)) assigned by Boxer (Curran et al., 2007) to the input complex sentence.
Simplification Framework
By handling deletion using a probabilistic model trained on semantic representations, we can avoid deleting obligatory arguments.
semantic representations is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Paperno, Denis and Pham, Nghia The and Baroni, Marco
The practical lexical function model
The general form of a semantic representation for a linguistic unit is an ordered tuple of a vector and n ∈ N matrices:
The practical lexical function model
The form of semantic representations we are using is shown in Table 1.
The practical lexical function model
The semantic representations we propose include a semantic vector for constituents of any semantic type, thus enabling semantic comparison for words of different parts of speech (the case of demolition vs. demolish).
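A hedged sketch of this kind of structured representation (the composition rule below is an assumption for illustration only; the paper defines its own composition operations):

    import numpy as np

    class Rep:
        """A constituent's semantics: one vector plus one matrix per argument
        slot the constituent still expects (nouns: no matrices; adjectives
        and verbs: one or more)."""
        def __init__(self, vec, mats=()):
            self.vec = np.asarray(vec)
            self.mats = list(mats)

    def apply(functor, arg):
        """Apply a functor to an argument: consume the functor's first matrix
        to combine the two vectors; remaining argument slots stay unsaturated."""
        new_vec = functor.vec + functor.mats[0] @ arg.vec   # assumed combination rule
        return Rep(new_vec, functor.mats[1:])

Because every Rep carries a vector whatever its semantic type, a noun such as demolition and a verb such as demolish can still be compared directly by the similarity of their vectors, which is the point made in the excerpt above.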
semantic representations is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Silberer, Carina and Lapata, Mirella
Autoencoders for Grounded Semantics
3.2 Semantic Representations
Introduction
In general, these models specify mechanisms for constructing semantic representations from text corpora based on the distributional hypothesis (Harris, 1970): words that appear in similar linguistic contexts are likely to have related meanings.
Introduction
Our model uses stacked autoencoders (Bengio et al., 2007) to induce semantic representations integrating visual and textual information.
semantic representations is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Tian, Ran and Miyao, Yusuke and Matsuzaki, Takuya
Conclusion and Discussion
Other directions of our future work include further exploitation of the new semantic representation.
Experiments
Since our system uses an off-the-shelf dependency parser, and semantic representations are obtained from simple rule-based conversion from dependency trees, there will be only one (right or wrong) interpretation in face of ambiguous sentences.
The Idea
Optimistically, we believe DCS can provide a framework of semantic representation with sufficiently wide coverage for real-world texts.
semantic representations is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Zhang, Jiajun and Liu, Shujie and Li, Mu and Zhou, Ming and Zong, Chengqing
Bilingually-constrained Recursive Auto-encoders
Fortunately, we know the fact that the two phrases should share the same semantic representation if they express the same meaning.
Bilingually-constrained Recursive Auto-encoders
The above equation also indicates that the source-side parameters θ_s can be optimized independently as long as the semantic representation p_t of the target phrase t is given to compute E_sem(s|t, θ) with Eq.
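One way to unpack the symbols in this sentence (the concrete functional form is an assumption for illustration, not the paper's equation): with p_s the representation the source-side recursive auto-encoder builds for phrase s and p_t the given representation of its translation t, the semantic term could be a squared distance such as

    E_sem(s|t, θ) = 1/2 * || p_s - p_t ||^2

so that, with p_t held fixed, minimising it over the source-side parameters θ_s alone pulls p_s towards its translation's representation, which is what makes the independent optimization possible.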
Discussions
For example, as each node in the recursive auto-encoder shares the same weight matrix, the BRAE model would become weak at learning the semantic representations for long sentences with tens of words.
semantic representations is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
McKinley, Nathan and Ray, Soumya
Related Work
Any combination which contains a semantic representation equivalent to the input at the conclusion of the algorithm is a valid output from a chart generation system.
Related Work
This is then used by a surface realization module which encodes the enriched semantic representation into natural language.
Sentence Tree Realization with UCT
For instance, a communicative goal of ‘red(d), dog(d)’ (in English, “say anything about a dog which is red.”) would match a sentence with the semantic representation ‘red(subj), dog(subj), cat(obj), chased(subj, obj)’, like “The red dog chased the cat”, for instance.
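A hedged sketch of this matching step (representing predications as (predicate, argument-tuple) pairs is an assumption for illustration): the communicative goal matches a sentence if some substitution of the goal's variables makes every goal predication appear in the sentence's semantic representation.

    from itertools import permutations

    def matches(goal, semantics):
        """Return True if some variable substitution maps every goal predication
        onto a predication in the sentence's semantic representation."""
        goal_vars = sorted({a for _, args in goal for a in args})
        sem_vars = {a for _, args in semantics for a in args}
        sem_set = set(semantics)
        for image in permutations(sem_vars, len(goal_vars)):
            subst = dict(zip(goal_vars, image))
            if all((pred, tuple(subst[a] for a in args)) in sem_set
                   for pred, args in goal):
                return True
        return False

    goal = [("red", ("d",)), ("dog", ("d",))]
    semantics = [("red", ("subj",)), ("dog", ("subj",)),
                 ("cat", ("obj",)), ("chased", ("subj", "obj"))]
    assert matches(goal, semantics)   # mapping d -> subj satisfies the goal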
semantic representations is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Pilehvar, Mohammad Taher and Jurgens, David and Navigli, Roberto
Conclusions
We demonstrate that our semantic representation achieves state-of-the-art performance in three experiments using semantic similarity at different lexical levels (i.e., sense, word, and text), surpassing the performance of previous similarity measures that are often specifically targeted for each level.
Introduction
Despite the potential advantages, few approaches to semantic similarity operate at the sense level due to the challenge in sense-tagging text (Navigli, 2009); for example, none of the top four systems in the recent SemEval-2012 task on textual similarity compared semantic representations that incorporated sense information (Agirre et al., 2012).
Related Work
(2009) used a similar semantic representation of short texts from random walks on WordNet, which was applied to paraphrase recognition and textual entailment.
semantic representations is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Lazaridou, Angeliki and Marelli, Marco and Zamparelli, Roberto and Baroni, Marco
Abstract
Semantic representations constructed in this way beat a strong baseline and can be of higher quality than representations directly constructed from corpus data.
Experimental setup
A natural extension of our research is to address morpheme composition and morphological induction jointly, trying to model the intuition that good candidate morphemes should have coherent semantic representations.
Related work
Our goal is to automatically construct, given distributional representations of stems and affixes, semantic representations for the derived words containing those stems and affixes.
semantic representations is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Dethlefs, Nina and Hastie, Helen and Cuayáhuitl, Heriberto and Lemon, Oliver
Cohesion across Utterances
3.1 Tree-based Semantic Representations
Cohesion across Utterances
In this way, each nonterminal symbol has a semantic representation and an associated parse category.
Conclusion and Future Directions
We have presented a novel technique for surface realisation that treats generation as a sequence labelling task by combining a CRF with tree-based semantic representations.
semantic representations is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Cai, Shu and Knight, Kevin
Conclusion and Future Work
In the future, we plan to investigate how to adapt smatch to other semantic representations.
Related Work
Related work on directly measuring the semantic representation includes the method in (Dridan and Oepen, 2011), which evaluates semantic parser output directly by comparing semantic substructures, though they require an alignment between sentence spans and semantic substructures.
Semantic Overlap
Following (Langkilde and Knight, 1998) and (Langkilde-Geary, 2002), we refer to this semantic representation as AMR (Abstract Meaning Representation).
semantic representations is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Gardent, Claire and Narayan, Shashi
Experiment and Results
The surface realisation algorithm extends the algorithm proposed in (Gardent and Perez-Beltrachini, 2010) and adapts it to work on the SR dependency input rather than on flat semantic representations.
Related Work
Typically, the input to surface realisation is a structured representation (i.e., a flat semantic representation, a first order logic formula or a dependency tree) rather than a string.
Related Work
Approaches based on reversible grammars (Carroll et al., 1999) have used the semantic formulae output by parsing to evaluate the coverage and performance of their realiser; similarly, (Gardent et al., 2010) developed a tool called GenSem which traverses the grammar to produce flat semantic representations and thereby provide a benchmark for performance and coverage evaluation.
semantic representations is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Fowler, Timothy A. D. and Penn, Gerald
Conclusion
First, the ability to extract semantic representations from CCG derivations is not dependent on the language class of a CCG.
Introduction
On the practical side, we have corpora with CCG derivations for each sentence (Hockenmaier and Steedman, 2007), a wide-coverage parser trained on that corpus (Clark and Curran, 2007) and a system for converting CCG derivations into semantic representations (Bos et al., 2004).
Introduction
Bos’s system for building semantic representations from CCG derivations is only possible due to the categorial nature of CCG.
semantic representations is mentioned in 3 sentences in this paper.
Topics mentioned in this paper: