Index of papers in Proc. ACL 2010 that mention
  • semantic representations
Koller, Alexander and Thater, Stefan
Abstract
A corpus-based evaluation with a large-scale grammar shows that our algorithm reduces over 80% of sentences to one or two readings, in negligible runtime, and thus makes it possible to work with semantic representations derived by deep large-scale grammars.
Conclusion
The algorithm presented here makes it possible, for the first time, to derive a single meaningful semantic representation from the syntactic analysis of a deep grammar on a large scale.
Conclusion
In the future, it will be interesting to explore how these semantic representations can be used in applications.
Conclusion
We could then perform such inferences on (cleaner) semantic representations, rather than strings (as they do).
Introduction
Over the past few years, there has been considerable progress in the ability of manually created large-scale grammars, such as the English Resource Grammar (ERG, Copestake and Flickinger (2000)) or the ParGram grammars (Butt et al., 2002), to parse wide-coverage text and assign it deep semantic representations.
Introduction
While applications should benefit from these very precise semantic representations, their usefulness is limited by the presence of semantic ambiguity: On the Rondane Treebank (Oepen et al., 2002), the ERG computes an average of several million semantic representations for each sentence, even when the syntactic analysis is fixed.
Introduction
We follow an underspecification approach to managing ambiguity: Rather than deriving all semantic representations from the syntactic analysis, we work with a single, compact underspecified semantic representation, from which the semantic representations can then be extracted by need.
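The "extracted by need" idea can be sketched with a lazy generator: readings of a toy scope ambiguity are only built when a consumer asks for them. The operator names and string encoding below are illustrative, not the ERG's actual underspecification formalism.

```python
from itertools import permutations

def readings(operators, core):
    """Lazily enumerate fully scoped readings of a toy underspecified
    representation: each reading is one scope ordering of the operators
    wrapped around the shared core formula. Nothing is built until the
    consumer asks for it."""
    for order in permutations(operators):
        formula = core
        for op in reversed(order):  # innermost operator applied first
            formula = f"{op}({formula})"
        yield formula

# "Every student reads a book": two scope-bearing operators, two readings.
gen = readings(["every_student", "a_book"], "read(x,y)")
first = next(gen)  # only one reading is expanded so far
```

The generator never materializes the full (potentially exponential) set of readings, which is the point of working with the compact representation.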
Related work
The idea of deriving a single approximative semantic representation for ambiguous sentences goes back to Hobbs (1983); however, Hobbs only works his algorithm out for a restricted class of quantifiers, and his representations can be weaker than our weakest readings.
Related work
The work presented here is related to other approaches that reduce the set of readings of an underspecified semantic representation (USR).
Underspecification
Both of these formalisms can be used to model scope ambiguities compactly by regarding the semantic representations of a sentence as trees.
"semantic representations" is mentioned in 11 sentences in this paper.
Titov, Ivan and Kozhevnikov, Mikhail
A Model of Semantics
Though the most likely alignment for a fixed semantic representation can be found efficiently using a Viterbi algorithm, computing the most probable meaning-alignment pair is still intractable.
A Model of Semantics
We use a modification of the beam search algorithm, where we keep a set of candidate meanings (partial semantic representations) and compute an alignment for each of them using a form of the Viterbi algorithm.
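A minimal sketch of that loop, assuming a toy emission scorer and monotone word-to-component alignments; both are stand-ins for the paper's actual alignment model, and all names here are hypothetical:

```python
import math

def viterbi_score(words, meaning, emit):
    """Log-probability of the best monotone alignment of words to the
    components of a candidate meaning, by dynamic programming (a toy
    stand-in for the paper's alignment model)."""
    n, m = len(words), len(meaning)
    NEG = float("-inf")
    dp = [[NEG] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            prev = max(dp[i - 1][j - 1], dp[i - 1][j])  # advance or stay
            if prev > NEG:
                dp[i][j] = prev + math.log(emit(words[i - 1], meaning[j - 1]))
    return max(dp[n][1:])

def beam_search(words, expand, emit, width=2, steps=2):
    """Grow a beam of candidate partial meanings, re-scoring every
    candidate with the Viterbi alignment after each expansion step."""
    beam = [[]]
    for _ in range(steps):
        grown = [c + [x] for c in beam for x in expand(c)]
        grown.sort(key=lambda c: viterbi_score(words, c, emit), reverse=True)
        beam = grown[:width]
    return beam[0]

# Toy example: a component matching its word exactly scores high.
emit = lambda w, c: 0.9 if w == c else 0.1
expand = lambda c: [x for x in ["a", "b", "c"] if x not in c]
best = beam_search(["a", "b"], expand, emit)
```

The beam keeps the search over meanings tractable while the inner Viterbi stays exact for each fixed candidate, mirroring the asymmetry the quoted sentence describes.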
Abstract
We argue that groups of unannotated texts with overlapping and noncontradictory semantics represent a valuable source of information for learning semantic representations.
Abstract
A simple and efficient inference method recursively induces joint semantic representations for each group and discovers correspondence between lexical entries and latent semantic concepts.
Inference with NonContradictory Documents
Even though the dependencies are only conveyed via {m_j : j ≠ k}, the space of possible meanings m is very large even for relatively simple semantic representations, and, therefore, we need to resort to efficient approximations.
Inference with NonContradictory Documents
However, a major weakness of this algorithm is that decisions about components of the composite semantic representation (e.g., argument values) are made only on the basis of a single text, which first mentions the corresponding aspects, without consulting any future texts k’ > k, and these decisions cannot be revised later.
Introduction
Alternatively, if such groupings are not available, it may still be easier to give each semantic representation (or a state) to multiple annotators and ask each of them to provide a textual description, instead of annotating texts with semantic expressions.
Introduction
Unsupervised learning with shared latent semantic representations presents its own challenges, as exact inference requires marginalization over possible assignments of the latent semantic state, consequently introducing nonlocal statistical dependencies between the decisions about the semantic structure of each text.
Related Work
Sentence and text alignment has also been considered in the related context of paraphrase extraction (see, e.g., (Dolan et al., 2004; Barzilay and Lee, 2003)) but this prior work did not focus on inducing or learning semantic representations.
Summary and Future Work
In this work we studied the use of weak supervision in the form of noncontradictory relations between documents in learning semantic representations .
Summary and Future Work
However, exact inference for groups of documents with overlapping semantic representation is generally prohibitively expensive, as the shared latent semantics introduces nonlocal dependencies between semantic representations of individual documents.
"semantic representations" is mentioned in 12 sentences in this paper.
Mitchell, Jeff and Lapata, Mirella and Demberg, Vera and Keller, Frank
Discussion
For example, we could envisage a parser that uses semantic representations to guide its search, e.g., by pruning syntactic analyses that have a low semantic probability.
Introduction
2009); however, the semantic component of these models is limited to semantic role information, rather than attempting to build a full semantic representation for a sentence.
Models of Processing Difficulty
Importantly, composition models are not defined with a specific semantic space in mind; they could easily be adapted to LSA, or simple co-occurrence vectors, or more sophisticated semantic representations (e.g., Griffiths et al.
Models of Processing Difficulty
LDA is a probabilistic topic model offering an alternative to spatial semantic representations.
Results
Besides, replicating Pynte et al.’s (2008) finding, we were also interested in assessing whether the underlying semantic representation (simple semantic space or LDA) and composition function (additive versus multiplicative) modulate reading times differentially.
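The additive-versus-multiplicative contrast is easy to make concrete. The vectors below are made-up co-occurrence counts standing in for a simple semantic space, not the paper's actual data:

```python
import numpy as np

# Hypothetical co-occurrence vectors for two words over four context
# dimensions (made-up counts, standing in for a simple semantic space).
horse = np.array([4.0, 1.0, 0.0, 2.0])
ran = np.array([3.0, 0.0, 1.0, 2.0])

additive = horse + ran        # sums features: union-like combination
multiplicative = horse * ran  # elementwise product: keeps shared features

def cosine(u, v):
    """Similarity of two vectors in the semantic space."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
```

Note how the multiplicative model zeroes out any dimension where either word has no count, so the composed vector emphasizes features the two words share.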
"semantic representations" is mentioned in 5 sentences in this paper.
Wu, Xianchao and Matsuzaki, Takuya and Tsujii, Jun'ichi
Abstract
A head-driven phrase structure grammar (HPSG) parser is used to obtain the deep syntactic information, which includes a fine-grained description of the syntactic property and a semantic representation of a sentence.
Fine-grained rule extraction
The semantic representation of the new phrase is calculated at the same time.
Fine-grained rule extraction
Second, we can identify sub-trees in a parse tree/forest that correspond to basic units of the semantics, namely sub-trees covering a predicate and its arguments, by using the semantic representation given in the signs.
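Identifying the subtree covering a predicate and its arguments amounts to a lowest-common-ancestor computation. A minimal sketch over a hypothetical parse tree encoded as a child-to-parent map (illustrative only, not the actual HPSG sign structure):

```python
def covering_subtree(parent, nodes):
    """Return the lowest tree node whose subtree covers all of `nodes`
    (e.g., a predicate and its arguments). `parent` maps each node to
    its parent; the root has no entry."""
    # Build the root-to-node path for every target node.
    paths = []
    for n in nodes:
        path = [n]
        while path[-1] in parent:
            path.append(parent[path[-1]])
        paths.append(list(reversed(path)))
    # The deepest node shared by all paths roots the covering subtree.
    lca = None
    for level in zip(*paths):
        if len(set(level)) == 1:
            lca = level[0]
        else:
            break
    return lca

# Toy tree for "John saw Mary": S -> John, VP; VP -> saw, Mary.
parent = {"John": "S", "VP": "S", "saw": "VP", "Mary": "VP"}
```

For the predicate "saw" with object "Mary" the covering node is the VP; adding the subject "John" forces the covering node up to S.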
Introduction
deep syntactic information of an English sentence, which includes a fine-grained description of the syntactic property and a semantic representation of the sentence.
Related Work
The Logon project (Oepen et al., 2007) for Norwegian-English translation integrates in-depth grammatical analysis of Norwegian (using lexical functional grammar, similar to (Riezler and Maxwell, 2006)) with semantic representations in the minimal recursion semantics framework, and fully grammar-based generation for English using HPSG.
"semantic representations" is mentioned in 5 sentences in this paper.
Fowler, Timothy A. D. and Penn, Gerald
Conclusion
First, the ability to extract semantic representations from CCG derivations is not dependent on the language class of a CCG.
Introduction
On the practical side, we have corpora with CCG derivations for each sentence (Hockenmaier and Steedman, 2007), a wide-coverage parser trained on that corpus (Clark and Curran, 2007) and a system for converting CCG derivations into semantic representations (Bos et al., 2004).
Introduction
Bos’s system for building semantic representations from CCG derivations is only possible due to the categorial nature of CCG.
"semantic representations" is mentioned in 3 sentences in this paper.