Semantic Parsing as Machine Translation
Jacob Andreas, Andreas Vlachos and Stephen Clark

Article Structure

Abstract

Semantic parsing is the problem of deriving a structured meaning representation from a natural language utterance.

Introduction

Semantic parsing (SP) is the problem of transforming a natural language (NL) utterance into a machine-interpretable meaning representation (MR).

MT-based semantic parsing

The input is a corpus of NL utterances paired with MRs.

Experimental setup

Dataset We conduct experiments on the GeoQuery data set.

Results

We first compare the results for the two translation rule extraction models, phrase-based and hierarchical (“MT-phrase” and “MT-hier” respectively in Table 1).

Related Work

WASP, an early automatically-learned SP system, was strongly influenced by MT techniques.

Discussion

Our results validate the hypothesis that it is possible to adapt an ordinary MT system into a working semantic parser.

Conclusions

We have presented a semantic parser which uses techniques from machine translation to learn mappings from natural language to variable-free meaning representations.

Topics

semantic parser

Appears in 20 sentences as: semantic parser (9), semantic parsers (7), Semantic parsing (2), semantic parsing (3)
In Semantic Parsing as Machine Translation
  1. Semantic parsing is the problem of deriving a structured meaning representation from a natural language utterance.
    Page 1, “Abstract”
  2. Here we approach it as a straightforward machine translation task, and demonstrate that standard machine translation components can be adapted into a semantic parser.
    Page 1, “Abstract”
  3. These results support the use of machine translation methods as an informative baseline in semantic parsing evaluations, and suggest that research in semantic parsing could benefit from advances in machine translation.
    Page 1, “Abstract”
  4. Semantic parsing (SP) is the problem of transforming a natural language (NL) utterance into a machine-interpretable meaning representation (MR).
    Page 1, “Introduction”
  5. Indeed, successful semantic parsers often resemble MT systems in several important respects, including the use of word alignment models as a starting point for rule extraction (Wong and Mooney, 2006; Kwiatkowski et al., 2010) and the use of automata such as tree transducers (Jones et al., 2012) to encode the relationship between NL and MRL.
    Page 1, “Introduction”
  6. In this work we attempt to determine how accurate a semantic parser we can build by treating SP as a pure MT task, and describe pre- and postprocessing steps which allow structure to be preserved in the MT process.
    Page 1, “Introduction”
  7. Our contributions are as follows: We develop a semantic parser using off-the-shelf MT components, exploring phrase-based as well as hierarchical models.
    Page 1, “Introduction”
  8. Experiments with four languages on the popular GeoQuery corpus (Zelle, 1995) show that our parser is competitive with the state-of-the-art, in some cases achieving higher accuracy than recently introduced purpose-built semantic parsers.
    Page 1, “Introduction”
  9. Our approach also appears to require substantially less time to train than the two best-performing semantic parsers.
    Page 1, “Introduction”
  10. In order to learn a semantic parser using MT we linearize the MRs, learn alignments between the MRL and the NL, extract translation rules, and learn a language model for the MRL.
    Page 1, “MT-based semantic parsing”
  11. In order to learn a semantic parser using MT we begin by converting these MRs to a form more similar to NL.
    Page 2, “MT-based semantic parsing”
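
The pipeline described in excerpts 10 and 11 (linearize the MRs, align them with the NL, extract translation rules, train an MRL language model) begins by flattening each tree-shaped MR into a token sequence. A minimal sketch in Python, not the paper's code: the nested-tuple representation and the "@arity" decoration scheme are illustrative assumptions loosely based on the paper's "decorated MRL tokens":

```python
def linearize(mr):
    """Flatten a variable-free, tree-shaped MR (nested tuples) into a
    token list. Each functor is decorated with its arity so that the
    original tree can be reconstructed deterministically."""
    if isinstance(mr, str):          # leaf constant
        return [mr + "@0"]
    functor, *args = mr
    tokens = ["{}@{}".format(functor, len(args))]
    for arg in args:
        tokens.extend(linearize(arg))
    return tokens

# answer(population(city(cityid('seattle', 'wa'))))
mr = ("answer", ("population", ("city", ("cityid", "seattle", "wa"))))
print(" ".join(linearize(mr)))
# answer@1 population@1 city@1 cityid@2 seattle@0 wa@0
```

Because arities are recorded on every token, the inverse mapping from a decoded token sequence back to a tree is unambiguous, which is what allows structure to be preserved through the flat-string MT machinery.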

See all papers in Proc. ACL 2013 that mention semantic parser.


machine translation

Appears in 7 sentences as: machine translation (9)
In Semantic Parsing as Machine Translation
  1. Here we approach it as a straightforward machine translation task, and demonstrate that standard machine translation components can be adapted into a semantic parser.
    Page 1, “Abstract”
  2. These results support the use of machine translation methods as an informative baseline in semantic parsing evaluations, and suggest that research in semantic parsing could benefit from advances in machine translation.
    Page 1, “Abstract”
  3. At least superficially, SP is simply a machine translation (MT) task: we transform an NL utterance in one language into a statement of another (unnatural) meaning representation language (MRL).
    Page 1, “Introduction”
  4. tsVB also uses a piece of standard MT machinery, specifically tree transducers, which have been profitably employed for syntax-based machine translation (Maletti, 2010).
    Page 4, “Related Work”
  5. The present work is also the first we are aware of which uses phrase-based rather than tree-based machine translation techniques to learn a semantic parser.
    Page 4, “Related Work”
  6. For this reason, we argue for the use of a machine translation baseline as a point of comparison for new methods.
    Page 5, “Discussion”
  7. We have presented a semantic parser which uses techniques from machine translation to learn mappings from natural language to variable-free meaning representations.
    Page 5, “Conclusions”

MT system

Appears in 7 sentences as: MT system (4), MT systems (3)
In Semantic Parsing as Machine Translation
  1. Indeed, successful semantic parsers often resemble MT systems in several important respects, including the use of word alignment models as a starting point for rule extraction (Wong and Mooney, 2006; Kwiatkowski et al., 2010) and the use of automata such as tree transducers (Jones et al., 2012) to encode the relationship between NL and MRL.
    Page 1, “Introduction”
  2. Language modeling In addition to translation rules learned from a parallel corpus, MT systems also rely on an n-gram language model for the target language, estimated from a (typically larger) monolingual corpus.
    Page 2, “MT-based semantic parsing”
  3. In the results shown in Table 1 we observe that on English GeoQuery data, the hierarchical translation model achieves scores competitive with the state of the art, and in every language one of the MT systems achieves accuracy at least as good as a purpose-built semantic parser.
    Page 4, “Results”
  4. While differences in implementation and factors like programming language choice make a direct comparison of times necessarily imprecise, we note that the MT system takes less than three minutes to train on the GeoQuery corpus, while the publicly-available implementations of tsVB and UBL require roughly twenty minutes and five hours respectively on a 2.1 GHz CPU.
    Page 4, “Results”
  5. UBL, like an MT system (and unlike most of the other systems discussed in this section), extracts rules at multiple levels of granularity by means of this splitting and unification procedure.
    Page 4, “Related Work”
  6. multilevel rules composed from smaller rules, a process similar to the one used for creating phrase tables in a phrase-based MT system.
    Page 5, “Related Work”
  7. Our results validate the hypothesis that it is possible to adapt an ordinary MT system into a working semantic parser.
    Page 5, “Discussion”

phrase-based

Appears in 7 sentences as: phrase-based (7)
In Semantic Parsing as Machine Translation
  1. Our contributions are as follows: We develop a semantic parser using off-the-shelf MT components, exploring phrase-based as well as hierarchical models.
    Page 1, “Introduction”
  2. We consider a phrase-based translation model (Koehn et al., 2003) and a hierarchical translation model (Chiang, 2005).
    Page 2, “MT-based semantic parsing”
  3. Rules for the phrase-based model consist of pairs of aligned source and target sequences, while hierarchical rules are SCFG productions containing at most two instances of a single nonterminal symbol.
    Page 2, “MT-based semantic parsing”
  4. Implementation In all experiments, we use the IBM Model 4 implementation from the GIZA++ toolkit (Och and Ney, 2000) for alignment, and the phrase-based and hierarchical models implemented in the Moses toolkit (Koehn et al., 2007) for rule extraction.
    Page 3, “Experimental setup”
  5. We first compare the results for the two translation rule extraction models, phrase-based and hierarchical (“MT-phrase” and “MT-hier” respectively in Table 1).
    Page 3, “Results”
  6. The present work is also the first we are aware of which uses phrase-based rather than tree-based machine translation techniques to learn a semantic parser.
    Page 4, “Related Work”
  7. multilevel rules composed from smaller rules, a process similar to the one used for creating phrase tables in a phrase-based MT system.
    Page 5, “Related Work”
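
Excerpt 3 above distinguishes phrase pairs (aligned source and target sequences) from hierarchical SCFG rules. For concreteness, here is the standard consistency criterion behind phrase-pair extraction in phrase-based systems (Koehn et al., 2003), sketched in Python; this is the textbook algorithm, not code from the paper or from Moses:

```python
def extract_phrases(alignment, max_len=4):
    """Extract consistent phrase pairs from a word alignment.
    alignment: a set of (src_idx, tgt_idx) links.
    A pair of spans is consistent iff no alignment link crosses the
    box they define. Spans are inclusive (start, end) index pairs."""
    n_src = max(s for s, _ in alignment) + 1
    pairs = set()
    for s1 in range(n_src):
        for s2 in range(s1, min(s1 + max_len, n_src)):
            # target positions linked to anything in source span [s1, s2]
            tgts = [t for (s, t) in alignment if s1 <= s <= s2]
            if not tgts:
                continue
            t1, t2 = min(tgts), max(tgts)
            # consistency check: every link inside target span [t1, t2]
            # must point back inside [s1, s2]
            if all(s1 <= s <= s2 for (s, t) in alignment if t1 <= t <= t2):
                pairs.add(((s1, s2), (t1, t2)))
    return pairs

# "capital of texas" aligned to linearized MR tokens "capital stateid texas"
links = {(0, 0), (2, 2)}
pairs = extract_phrases(links)
print(((0, 2), (0, 2)) in pairs)   # the whole sentence pair is consistent
# True
```

The full Moses extractor additionally extends spans over unaligned target words; that refinement is omitted here for brevity.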

language model

Appears in 6 sentences as: language model (6), Language modeling (1)
In Semantic Parsing as Machine Translation
  1. In order to learn a semantic parser using MT we linearize the MRs, learn alignments between the MRL and the NL, extract translation rules, and learn a language model for the MRL.
    Page 1, “MT-based semantic parsing”
  2. Language modeling In addition to translation rules learned from a parallel corpus, MT systems also rely on an n-gram language model for the target language, estimated from a (typically larger) monolingual corpus.
    Page 2, “MT-based semantic parsing”
  3. In the case of SP, such a monolingual corpus is rarely available, and we instead use the MRs available in the training data to learn a language model of the MRL.
    Page 2, “MT-based semantic parsing”
  4. sequences of decorated MRL tokens) that maximize the weighted sum of the translation score (the probabilities of the translations according to the rule translation table) and the language model score, a process usually referred to as decoding.
    Page 3, “MT-based semantic parsing”
  5. The best symmetrization algorithm, translation and language model weights for each language are selected using cross-validation on the development set.
    Page 3, “Experimental setup”
  6. The first is the incorporation of a language model (or comparable long-distance structure-scoring model) to assign scores to predicted parses independent of the transformation model.
    Page 5, “Discussion”
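
Since no large monolingual MRL corpus exists, excerpt 3 notes that the language model is estimated from the linearized MRs in the training data. An illustrative sketch, not the paper's code: bigrams and add-one smoothing are simplifying assumptions that the paper does not specify:

```python
import math
from collections import Counter

def train_bigram_lm(sequences):
    """Estimate an add-one-smoothed bigram LM over linearized MRL token
    sequences, returning a log-probability scoring function."""
    unigrams, bigrams = Counter(), Counter()
    for seq in sequences:
        toks = ["<s>"] + seq + ["</s>"]
        unigrams.update(toks[:-1])               # context counts
        bigrams.update(zip(toks[:-1], toks[1:]))
    vocab_size = len({t for seq in sequences for t in seq} | {"</s>"})

    def logprob(seq):
        toks = ["<s>"] + seq + ["</s>"]
        return sum(math.log((bigrams[(a, b)] + 1)
                            / (unigrams[a] + vocab_size))
                   for a, b in zip(toks[:-1], toks[1:]))
    return logprob

lm = train_bigram_lm([["answer", "capital", "texas"],
                      ["answer", "population", "texas"]])
# A well-formed MRL sequence outscores a scrambled one:
print(lm(["answer", "capital", "texas"]) > lm(["texas", "answer", "capital"]))
# True
```

At decoding time the MT system combines this language model score with the translation score in a weighted sum (excerpt 4), with the weights selected by cross-validation (excerpt 5).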

meaning representation

Appears in 6 sentences as: meaning representation (4), meaning representations (2)
In Semantic Parsing as Machine Translation
  1. Semantic parsing is the problem of deriving a structured meaning representation from a natural language utterance.
    Page 1, “Abstract”
  2. Semantic parsing (SP) is the problem of transforming a natural language (NL) utterance into a machine-interpretable meaning representation (MR).
    Page 1, “Introduction”
  3. At least superficially, SP is simply a machine translation (MT) task: we transform an NL utterance in one language into a statement of another (unnatural) meaning representation language (MRL).
    Page 1, “Introduction”
  4. Linearization We assume that the MRL is variable-free (that is, the meaning representation for each utterance is tree-shaped), noting that formalisms with variables, like the λ-calculus, can be mapped onto variable-free logical forms with combinatory logics (Curry et al., 1980).
    Page 2, “MT-based semantic parsing”
  5. Other work which generalizes from variable-free meaning representations to λ-calculus expressions includes the natural language generation procedure described by Lu and Ng (2011).
    Page 4, “Related Work”
  6. We have presented a semantic parser which uses techniques from machine translation to learn mappings from natural language to variable-free meaning representations.
    Page 5, “Conclusions”

natural language

Appears in 5 sentences as: natural language (5)
In Semantic Parsing as Machine Translation
  1. Semantic parsing is the problem of deriving a structured meaning representation from a natural language utterance.
    Page 1, “Abstract”
  2. Semantic parsing (SP) is the problem of transforming a natural language (NL) utterance into a machine-interpretable meaning representation (MR).
    Page 1, “Introduction”
  3. cityid, which in some training examples is unary) to align with different natural language strings depending on context.
    Page 2, “MT—based semantic parsing”
  4. Other work which generalizes from variable-free meaning representations to λ-calculus expressions includes the natural language generation procedure described by Lu and Ng (2011).
    Page 4, “Related Work”
  5. We have presented a semantic parser which uses techniques from machine translation to learn mappings from natural language to variable-free meaning representations.
    Page 5, “Conclusions”
