Background | The goal of semantic parsing is to map text to a complete and detailed meaning representation (Mooney, 2007). |
Background | The standard language for meaning representation is first-order logic or a sublanguage thereof, such as FunQL (Kate et al., 2005; Clarke et al., 2010), or lambda calculus (Zettlemoyer and Collins, 2005; Zettlemoyer and Collins, 2007). |
Background | Poon and Domingos (2009, 2010) induce a meaning representation by clustering synonymous lambda-calculus forms stemming from partitions of dependency trees. |
Conclusion | This paper introduces grounded unsupervised semantic parsing, which leverages an available database for indirect supervision and uses a grounded meaning representation to account for syntax-semantics mismatch in dependency-based semantic parsing. |
Grounded Unsupervised Semantic Parsing | To combat this problem, GUSP introduces a novel dependency-based meaning representation with an augmented state space to account for semantic relations that are nonlocal in the dependency tree. |
Grounded Unsupervised Semantic Parsing | However, GUSP uses a different meaning representation defined over individual nodes and edges, rather than partitions, which enables linear-time exact inference. |
Grounded Unsupervised Semantic Parsing | Their approach alleviates some complexity in the meaning representation for handling syntax-semantics mismatch, but it must search a much larger space containing exponentially many candidate trees. |
Introduction | Semantic parsing maps text to a formal meaning representation such as logical forms or structured queries. |
Introduction | To handle syntax-semantics mismatch, GUSP introduces a novel dependency-based meaning representation. |
Abstract | Semantic parsing is the problem of deriving a structured meaning representation from a natural language utterance. |
Conclusions | We have presented a semantic parser which uses techniques from machine translation to learn mappings from natural language to variable-free meaning representations. |
Introduction | Semantic parsing (SP) is the problem of transforming a natural language (NL) utterance into a machine-interpretable meaning representation (MR). |
Introduction | At least superficially, SP is simply a machine translation (MT) task: we transform an NL utterance in one language into a statement in another (unnatural) language, a meaning representation language (MRL). |
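To make the SP-as-translation framing concrete, the following is a toy illustration (not any cited system's actual method): an NL utterance is mapped to a variable-free, FunQL-style meaning representation using hand-written patterns. The predicate names (`answer`, `capital`, `stateid`) follow the FunQL convention; the patterns and code structure are purely hypothetical.

```python
# A toy sketch of semantic parsing: NL utterance -> variable-free,
# FunQL-style meaning representation. Patterns below are hypothetical
# examples, not a real system's grammar.
import re

PATTERNS = [
    (re.compile(r"what is the capital of (\w+)", re.I),
     lambda m: f"answer(capital(stateid('{m.group(1).lower()}')))"),
    (re.compile(r"how many states border (\w+)", re.I),
     lambda m: f"answer(count(state(borders(stateid('{m.group(1).lower()}')))))"),
]

def parse(utterance):
    """Return the meaning representation of the first matching pattern, or None."""
    for pattern, build in PATTERNS:
        m = pattern.search(utterance)
        if m:
            return build(m)
    return None

print(parse("What is the capital of Texas?"))
# answer(capital(stateid('texas')))
```

Real systems, of course, learn such mappings from data rather than enumerating patterns; the sketch only shows the input-output relation that "transforming NL into an MRL statement" refers to.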
MT—based semantic parsing | Linearization. We assume that the MRL is variable-free (that is, the meaning representation for each utterance is tree-shaped), noting that formalisms with variables, like the λ-calculus, can be mapped onto variable-free logical forms with combinatory logic (Curry et al., 1980). |
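The point of the variable-free assumption is that a tree-shaped meaning representation can be serialized into a flat token sequence, so that string-to-string MT machinery applies. A minimal sketch of such a linearization, assuming a hypothetical encoding of the MR as nested Python tuples:

```python
# A minimal sketch of linearizing a variable-free, tree-shaped meaning
# representation into a flat token sequence. The nested-tuple encoding
# is a hypothetical choice made for illustration.

def linearize(node):
    """Pre-order traversal: emit the node label, then its bracketed children."""
    if isinstance(node, str):          # leaf symbol
        return [node]
    label, *children = node
    tokens = [label, "("]
    for child in children:
        tokens.extend(linearize(child))
    tokens.append(")")
    return tokens

# answer(capital(stateid(texas))) as a nested tuple
mr = ("answer", ("capital", ("stateid", "texas")))
print(" ".join(linearize(mr)))
# answer ( capital ( stateid ( texas ) ) )
```

Because the bracketing tokens make the tree structure explicit, the original MR can be recovered deterministically from the token sequence, which is what lets the MT decoder treat MR production as ordinary string generation.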
Related Work | Other work which generalizes from variable-free meaning representations to λ-calculus expressions includes the natural language generation procedure described by Lu and Ng (2011). |
Conclusion | The experiment shows that it can correctly learn, from highly ambiguous training data, the meaning representations in terms of HMM parameters for our lexical entries. |
Introduction | Language is grounded by mapping words, phrases, and sentences to meaning representations referring to the world. |
Introduction | Dominey and Boucher (2005) paired narrated sentences with symbolic representations of their meanings, automatically extracted from video, to learn object names, spatial-relation terms, and event names as a mapping from the grammatical structure of a sentence to the semantic structure of the associated meaning representation. |
Introduction | Chen and Mooney (2008) learned the language of sportscasting by determining the mapping between game commentaries and the meaning representations output by a rule-based simulation of the game. |