Index of papers in Proc. ACL that mention
  • semantic parsing
Poon, Hoifung and Domingos, Pedro
Abstract
OntoUSP builds on the USP unsupervised semantic parser by jointly forming ISA and IS-PART hierarchies of lambda-form clusters.
Background 2.1 Ontology Learning
It has been successfully applied to unsupervised learning for various NLP tasks such as coreference resolution (Poon and Domingos, 2008) and semantic parsing (Poon and Domingos, 2009).
Background 2.1 Ontology Learning
2.3 Unsupervised Semantic Parsing
Background 2.1 Ontology Learning
Semantic parsing aims to obtain a complete canonical meaning representation for input sentences.
semantic parsing is mentioned in 27 sentences in this paper.
Riezler, Stefan and Simianer, Patrick and Haas, Carolin
Abstract
We show how to generate responses by grounding SMT in the task of executing a semantic parse of a translated query against a database.
Abstract
Experiments on the GEOQUERY database show an improvement of about 6 points in F1-score for response-based learning over learning from references only on returning the correct answer from a semantic parse of a translated query.
Grounding SMT in Semantic Parsing
In this paper, we present a proof-of-concept of our ideas of embedding SMT into simulated world environments as used in semantic parsing.
Grounding SMT in Semantic Parsing
Embedding SMT in a semantic parsing scenario means to define translation quality by the ability of a semantic parser to construct a meaning representation from the translated query, which returns the correct answer when executed against the database.
Grounding SMT in Semantic Parsing
The diagram in Figure 1 gives a sketch of response-based learning from semantic parsing in the geographical domain.
Introduction
In this paper, we propose a novel approach for learning and evaluation in statistical machine translation (SMT) that borrows ideas from response-based learning for grounded semantic parsing.
Introduction
Building on prior work in grounded semantic parsing, we generate translations of queries, and receive feedback by executing semantic parses of translated queries against the database.
Introduction
Successful response is defined as receiving the same answer from the semantic parses for the translation and the original query.
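The "successful response" criterion quoted above reduces to a simple equality check between executed parses. A minimal sketch (the `parse` and `execute` callables, and all toy queries and logical forms, are hypothetical stand-ins, not the authors' actual system):

```python
def response_feedback(translation, original_query, parse, execute):
    """Return True if the translated query yields the same database
    answer as the original query (the 'successful response' criterion)."""
    return execute(parse(translation)) == execute(parse(original_query))

# Toy stand-ins for a semantic parser and a database executor.
parses = {"how many states border texas": "count(borders(texas))",
          "number of states bordering texas": "count(borders(texas))"}
answers = {"count(borders(texas))": 4}

feedback = response_feedback("number of states bordering texas",
                             "how many states border texas",
                             parses.get, answers.get)
print(feedback)  # True: both parses execute to the same answer
```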
Related Work
For example, in semantic parsing, the learning goal is to produce and successfully execute a meaning representation.
Related Work
Recent attempts to learn semantic parsing from question-answer pairs without recurring to annotated logical forms have been presented by Kwiatkowski et al.
Related Work
The algorithms presented in these works are variants of structured prediction that take executability of semantic parses into account.
semantic parsing is mentioned in 26 sentences in this paper.
Liang, Percy and Jordan, Michael and Klein, Dan
Abstract
Compositional question answering begins by mapping questions to logical forms, but training a semantic parser to perform this mapping typically requires the costly annotation of the target logical forms.
Abstract
On two standard semantic parsing benchmarks (GEO and JOBS), our system obtains the highest published accuracies, despite requiring no annotated logical forms.
Experiments
(2010) (henceforth, SEMRESP), which also learns a semantic parser from question-answer pairs.
Experiments
Next, we compared our systems (DCS and DCS+) with the state-of-the-art semantic parsers on the full dataset for both GEO and JOBS (see Table 3).
Introduction
Answering these types of complex questions compositionally involves first mapping the questions into logical forms (semantic parsing).
Introduction
Supervised semantic parsers (Zelle and Mooney, 1996; Tang and Mooney, 2001; Ge and Mooney, 2005; Zettlemoyer and Collins, 2005; Kate and Mooney, 2007; Zettlemoyer and Collins, 2007; Wong and Mooney, 2007; Kwiatkowski et al., 2010) rely on manual annotation of logical forms, which is expensive.
Introduction
On the other hand, existing unsupervised semantic parsers (Poon and Domingos, 2009) do not handle deeper linguistic phenomena such as quantification, negation, and superlatives.
Semantic Parsing
Model We now present our discriminative semantic parsing model, which places a log-linear distribution over z ∈ Z_L(x) given an utterance x.
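A log-linear model of this kind assigns each candidate logical form z a probability proportional to exp(w · φ(x, z)). A minimal sketch (the feature function, weights, and candidate logical forms are illustrative, not the paper's actual DCS features):

```python
import math

def log_linear_dist(candidates, features, weights):
    """p(z | x) proportional to exp(w . phi(x, z)) over the candidates."""
    scores = [sum(weights.get(f, 0.0) * v for f, v in features(z).items())
              for z in candidates]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]  # subtract max for stability
    total = sum(exps)
    return {z: e / total for z, e in zip(candidates, exps)}

# Toy candidates for "capital of Texas" with made-up indicator features.
feats = {"capital(texas)": {"lex:capital": 1, "lex:texas": 1},
         "city(texas)":    {"lex:texas": 1}}
dist = log_linear_dist(list(feats), feats.get,
                       {"lex:capital": 2.0, "lex:texas": 1.0})
print(max(dist, key=dist.get))  # capital(texas) gets the higher score
```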
semantic parsing is mentioned in 13 sentences in this paper.
Lee, Kenton and Artzi, Yoav and Dodge, Jesse and Zettlemoyer, Luke
Abstract
We present an approach for learning context-dependent semantic parsers to identify and interpret time expressions.
Formal Overview
Both detection (Section 5) and resolution (Section 6) rely on the semantic parser to identify likely mentions and resolve them within context.
Introduction
In this paper, we present the first context-dependent semantic parsing approach for learning to identify and interpret time expressions, addressing all three challenges.
Introduction
Recently, methods for learning probabilistic semantic parsers have been shown to address such limitations (Angeli et al., 2012; Angeli and Uszkoreit, 2013).
Introduction
We propose to use a context-dependent semantic parser for both detection and resolution of time expressions.
Related Work
Semantic parsers map sentences to logical representations of their underlying meaning, e.g., Zelle
Related Work
introduced the idea of learning semantic parsers to resolve time expressions (Angeli et al., 2012) and showed that the approach can generalize to multiple languages (Angeli and Uszkoreit, 2013).
Related Work
Similarly, Bethard demonstrated that a hand-engineered semantic parser is also effective (Bethard, 2013b).
Representing Time
(2012), who introduced semantic parsing for this task.
semantic parsing is mentioned in 14 sentences in this paper.
Chen, David
Background
Once a semantic parser is trained, it can be used at test time to transform novel instructions into formal navigation plans which are then carried out by a virtual robot (MacMahon et al., 2006).
Experiments
The second task is evaluating the performance of the semantic parsers trained on the disambiguated data.
Experiments
For the second and third tasks, we train a semantic parser on the automatically disambiguated data, and test on sentences from the third, unseen map.
Experiments
Other than the modifications discussed, we use the same components as their system including using KRISP to train the semantic parsers and using the execution module from MacMahon et al.
Online Lexicon Learning Algorithm
To train a semantic parser using KRISP (Kate and Mooney, 2006), they had to supply a MRG, a context-free grammar, for their formal navigation plan language.
semantic parsing is mentioned in 17 sentences in this paper.
Andreas, Jacob and Vlachos, Andreas and Clark, Stephen
Abstract
Semantic parsing is the problem of deriving a structured meaning representation from a natural language utterance.
Abstract
Here we approach it as a straightforward machine translation task, and demonstrate that standard machine translation components can be adapted into a semantic parser.
Abstract
These results support the use of machine translation methods as an informative baseline in semantic parsing evaluations, and suggest that research in semantic parsing could benefit from advances in machine translation.
Introduction
Semantic parsing (SP) is the problem of transforming a natural language (NL) utterance into a machine-interpretable meaning representation (MR).
Introduction
Indeed, successful semantic parsers often resemble MT systems in several important respects, including the use of word alignment models as a starting point for rule extraction (Wong and Mooney, 2006; Kwiatkowski et al., 2010) and the use of automata such as tree transducers (Jones et al., 2012) to encode the relationship between NL and MRL.
Introduction
In this work we attempt to determine how accurate a semantic parser we can build by treating SP as a pure MT task, and describe pre- and postprocessing steps which allow structure to be preserved in the MT process.
MT—based semantic parsing
In order to learn a semantic parser using MT we linearize the MRs, learn alignments between the MRL and the NL, extract translation rules, and learn a language model for the MRL.
MT—based semantic parsing
In order to learn a semantic parser using MT we begin by converting these MRs to a form more similar to NL.
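Linearization, the first step in the pipeline quoted above, can be sketched as a preorder traversal that flattens a nested MR into a token sequence (a hedged illustration only; the paper's actual GeoQuery conversion rules differ):

```python
def linearize(mr):
    """Flatten a nested meaning representation such as
    ('answer', ('capital', ('state', 'texas'))) into a token list
    suitable for word alignment and rule extraction."""
    if isinstance(mr, tuple):
        head, *args = mr
        tokens = [head, "("]
        for a in args:
            tokens.extend(linearize(a))
        tokens.append(")")
        return tokens
    return [mr]

toks = linearize(("answer", ("capital", ("state", "texas"))))
print(" ".join(toks))  # answer ( capital ( state ( texas ) ) )
```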
semantic parsing is mentioned in 20 sentences in this paper.
Krishnamurthy, Jayant and Mitchell, Tom M.
Abstract
We present an approach to training a joint syntactic and semantic parser that combines syntactic training information from CCGbank with semantic training information from a knowledge base via distant supervision.
Introduction
We suggest that a large populated knowledge base should play a key role in syntactic and semantic parsing: in training the parser, in resolving syntactic ambiguities when the trained parser is applied to new text, and in its output semantic representation.
Introduction
This paper presents an approach to training a joint syntactic and semantic parser using a large background knowledge base.
Introduction
We demonstrate our approach by training a joint syntactic and semantic parser, which we call ASP.
Parameter Estimation
Given these resources, the algorithm described in this section produces parameters θ for a semantic parser.
Parser Design
The features are designed to share syntactic information about a word across its distinct semantic realizations in order to transfer syntactic information from CCGbank to semantic parsing.
Prior Work
This paper combines two lines of prior work: broad coverage syntactic parsing with CCG and semantic parsing.
Prior Work
Meanwhile, work on semantic parsing has focused on producing semantic parsers for answering simple natural language questions (Zelle and Mooney, 1996; Ge and Mooney, 2005; Wong and Mooney, 2006; Wong and Mooney, 2007; Lu et al., 2008; Kate and Mooney, 2006; Zettlemoyer and Collins, 2005; Kwiatkowski et al., 2011).
Prior Work
Finally, some work has looked at applying semantic parsing to answer queries against large knowledge bases, such as YAGO (Yahya et al., 2012) and Freebase (Cai and Yates, 2013b; Cai and Yates, 2013a; Kwiatkowski et al., 2013; Berant et al., 2013).
semantic parsing is mentioned in 19 sentences in this paper.
Berant, Jonathan and Liang, Percy
Abstract
A central challenge in semantic parsing is handling the myriad ways in which knowledge base predicates can be expressed.
Abstract
Traditionally, semantic parsers are trained primarily from text paired with knowledge base information.
Abstract
In this paper, we turn semantic parsing on its head.
Introduction
We consider the semantic parsing problem of mapping natural language utterances into logical forms to be executed on a knowledge base (KB) (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Wong and Mooney, 2007; Kwiatkowski et al., 2010).
Introduction
Scaling semantic parsers to large knowledge bases has attracted substantial attention recently (Cai and Yates, 2013; Berant et al., 2013; Kwiatkowski et al., 2013), since it drives applications such as question answering (QA) and information extraction (IE).
Introduction
Semantic parsers need to somehow associate natural language phrases with logical predicates, e.g., they must learn that the constructions “What
semantic parsing is mentioned in 22 sentences in this paper.
Cai, Qingqing and Yates, Alexander
Abstract
Supervised training procedures for semantic parsers produce high-quality semantic parsers, but they have difficulty scaling to large databases because of the sheer number of logical constants for which they must see labeled training data.
Abstract
We present a technique for developing semantic parsers for large databases based on a reduction to standard supervised training algorithms, schema matching, and pattern learning.
Abstract
Leveraging techniques from each of these areas, we develop a semantic parser for Freebase that is capable of parsing questions with an F1 that improves by 0.42 over a purely-supervised learning algorithm.
Introduction
Semantic parsing is the task of translating natural language utterances to a formal meaning representation language (Chen et al., 2010; Liang et al., 2009; Clarke et al., 2010; Liang et al., 2011; Artzi and Zettlemoyer, 2011).
Introduction
There has been recent interest in producing such semantic parsers for large, heterogeneous databases like Freebase (Krishnamurthy and Mitchell, 2012; Cai and Yates, 2013) and Yago2 (Yahya et al., 2012), which has driven the development of semi-supervised and distantly-supervised training methods for semantic parsing.
Introduction
This paper investigates a reduction of the problem of building a semantic parser to three standard problems in semantics and machine learning: supervised training of a semantic parser, schema matching, and pattern learning.
semantic parsing is mentioned in 48 sentences in this paper.
Ge, Ruifang and Mooney, Raymond
Abstract
We present a new approach to learning a semantic parser (a system that maps natural language sentences into logical form).
Experimental Evaluation
Second, a semantic parser was learned from the training set augmented with their syntactic parses.
Experimental Evaluation
Finally, the learned semantic parser was used to generate the MRs for the test sentences using their syntactic parses.
Experimental Evaluation
We measured the performance of semantic parsing using precision (percentage of returned MRs that were correct), recall (percentage of test examples with correct MRs returned), and F-measure (harmonic mean of precision and recall).
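The three measures quoted above follow directly from the counts of returned and correct MRs. A minimal sketch (the counts are invented for illustration):

```python
def prf(n_returned, n_correct_returned, n_examples):
    """Precision, recall, and F-measure as defined in the excerpt:
    precision = correct MRs returned / MRs returned,
    recall    = correct MRs returned / total test examples,
    F         = harmonic mean of precision and recall."""
    p = n_correct_returned / n_returned
    r = n_correct_returned / n_examples
    f = 2 * p * r / (p + r)
    return p, r, f

p, r, f = prf(n_returned=80, n_correct_returned=60, n_examples=100)
print(p, r, round(f, 4))  # 0.75 0.6 0.6667
```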
Introduction
Semantic parsing is the task of mapping a natural language (NL) sentence into a completely formal meaning representation (MR) or logical form.
Introduction
A number of systems for automatically learning semantic parsers have been proposed (Ge and Mooney, 2005; Zettlemoyer and Collins, 2005; Wong and Mooney, 2007; Lu et al., 2008).
Introduction
Previous methods for learning semantic parsers do not utilize an existing syntactic parser that provides disambiguated parse trees.1 However, accurate syntactic parsers are available for many
Learning a Disambiguation Model
Here, unique word alignments are not required, and alternative interpretations compete for the best semantic parse.
Semantic Parsing Framework
The process of generating the semantic parse for an NL sentence is as follows.
semantic parsing is mentioned in 17 sentences in this paper.
Bao, Junwei and Duan, Nan and Zhou, Ming and Zhao, Tiejun
Abstract
Compared to a KB-QA system using a state-of-the-art semantic parser , our method achieves better results.
Introduction
Most previous systems tackle this task in a cascaded manner: First, the input question is transformed into its meaning representation (MR) by an independent semantic parser (Zettlemoyer and Collins, 2005; Mooney, 2007; Artzi and Zettlemoyer, 2011; Liang et al., 2011; Cai and Yates,
Introduction
Unlike existing KB-QA systems which treat semantic parsing and answer retrieval as two cascaded tasks, this paper presents a unified framework that can integrate semantic parsing into the question answering procedure directly.
Introduction
The contributions of this work are twofold: (1) We propose a translation-based KB-QA method that integrates semantic parsing and QA in one unified framework.
semantic parsing is mentioned in 11 sentences in this paper.
Poon, Hoifung
Abstract
We present the first unsupervised approach for semantic parsing that rivals the accuracy of supervised approaches in translating natural-language questions to database queries.
Abstract
Our GUSP system produces a semantic parse by annotating the dependency-tree nodes and edges with latent states, and learns a probabilistic grammar using EM.
Background
2.1 Semantic Parsing
Background
The goal of semantic parsing is to map text to a complete and detailed meaning representation (Mooney, 2007).
Introduction
Semantic parsing maps text to a formal meaning representation such as logical forms or structured queries.
Introduction
Recently, there has been a burgeoning interest in developing machine-learning approaches for semantic parsing (Zettlemoyer and Collins, 2005; Zettlemoyer and Collins, 2007; Mooney, 2007; Kwiatkowski et al., 2011), but the predominant paradigm uses supervised learning, which requires example annotations that are costly to obtain.
Introduction
Poon & Domingos (2009, 2010) proposed the USP system for unsupervised semantic parsing, which learns a parser by recursively clustering and composing synonymous expressions.
semantic parsing is mentioned in 43 sentences in this paper.
Goldwasser, Dan and Roth, Dan
Abstract
Semantic parsing is a domain-dependent process by nature, as its output is defined over a set of domain symbols.
Conclusions
In this paper, we took a first step towards a new kind of generalization in semantic parsing: constructing a model that is able to generalize to a new domain defined over a different set of symbols.
Experimental Settings
The dataset was collected for the purpose of constructing semantic parsers from ambiguous supervision and consists of both “noisy” and gold labeled data.
Experimental Settings
Semantic Interpretation Tasks We consider two of the tasks described in (Chen and Mooney, 2008) (1) Semantic Parsing requires generating the correct logical form given an input sentence.
Introduction
However, current work on automated NL understanding (typically referenced as semantic parsing (Zettlemoyer and Collins, 2005; Wong and Mooney, 2007; Chen and Mooney, 2008; Kwiatkowski et al., 2010; Börschinger et al., 2011)) is restricted to a given output domain1 (or task) consisting of a closed set of meaning representation symbols, describing domains such as robotic soccer, database queries and flight ordering systems.
Introduction
In order to understand this difficulty, a closer look at semantic parsing is required.
Introduction
Information in Semantic Parsing
semantic parsing is mentioned in 9 sentences in this paper.
Kim, Joohyun and Mooney, Raymond
Abstract
Unlike conventional reranking used in syntactic and semantic parsing , gold-standard reference trees are not naturally available in a grounded setting.
Background
More specifically, one must learn a semantic parser that produces a plan pj using a formal meaning representation (MR) language introduced by Chen and Mooney (2011).
Experimental Evaluation
Table 1 shows oracle accuracy for both semantic parsing and plan execution for single sentence and complete paragraph instructions for various values of n. For oracle parse accuracy, for each sentence, we pick the parse that gives the highest F1 score.
Experimental Evaluation
Ge and Mooney (2006) employ a similar approach when reranking semantic parses.
Introduction
Reranking has been successfully employed to improve syntactic parsing (Collins, 2002b), semantic parsing (Lu et al., 2008; Ge and Mooney, 2006), semantic role labeling (Toutanova et al., 2005), and named entity recognition (Collins, 2002c).
Modified Reranking Algorithm
In a similar vein, when reranking semantic parses, Ge and Mooney (2006) chose as a reference parse the one which was most similar to the gold-standard semantic annotation.
Related Work
It has been shown to be effective for various natural language processing tasks including syntactic parsing (Collins, 2000; Collins, 2002b; Collins and Koo, 2005; Charniak and Johnson, 2005; Huang, 2008), semantic parsing (Lu et al., 2008; Ge and Mooney, 2006), part-of-speech tagging (Collins, 2002a), semantic role labeling (Toutanova et al., 2005), named entity recognition (Collins, 2002c).
Related Work
to work on learning semantic parsers from execution output such as the answers to database queries (Clarke et al., 2010; Liang et al., 2011).
semantic parsing is mentioned in 8 sentences in this paper.
Yao, Xuchen and Van Durme, Benjamin
Abstract
Answering natural language questions using the Freebase knowledge base has recently been explored as a platform for advancing the state of the art in open domain semantic parsing .
Background
Finally, the KB community has developed other means for QA without semantic parsing (Lopez et al., 2005; Frank et al., 2007; Unger et al., 2012; Yahya et al., 2012; Shekarpour et al., 2013).
Conclusion
We hope that this result establishes a new baseline against which semantic parsing researchers can measure their progress towards deeper language understanding and answering of human questions.
Experiments
One question of interest is whether our system, aided by the massive web data, can be fairly compared to the semantic parsing approaches (note that Berant et al.
Introduction
The AI community has tended to approach this problem with a focus on first understanding the intent of the question, via shallow or deep forms of semantic parsing (c.f.
Introduction
bounded by the accuracy of the original semantic parsing , and the well-formedness of resultant database queries.1
Introduction
Researchers in semantic parsing have recently explored QA over Freebase as a way of moving beyond closed domains such as GeoQuery (Tang and Mooney, 2001).
semantic parsing is mentioned in 8 sentences in this paper.
Cai, Shu and Knight, Kevin
Abstract
The evaluation of whole-sentence semantic structures plays an important role in semantic parsing and large-scale semantic structure annotation.
Introduction
The goal of semantic parsing is to generate all semantic relationships in a text.
Introduction
Evaluating such structures is necessary for semantic parsing tasks, as well as semantic annotation tasks which create linguistic resources for semantic parsing.
Introduction
Current whole-sentence semantic parsing is mainly evaluated in two ways: 1. task correctness (Tang and Mooney, 2001), which evaluates on an NLP task that uses the parsing results; 2. whole-sentence accuracy (Zettlemoyer and Collins, 2005), which counts the number of sentences parsed completely correctly.
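The second evaluation method quoted above, whole-sentence accuracy, is simple to state in code (a minimal sketch; the toy parses are invented for illustration):

```python
def whole_sentence_accuracy(predicted, gold):
    """Fraction of sentences whose predicted parse matches the gold
    parse exactly, i.e. counting only completely correct parses."""
    assert len(predicted) == len(gold)
    return sum(p == g for p, g in zip(predicted, gold)) / len(gold)

pred = ["capital(texas)", "city(austin)", "len(rio_grande)"]
gold = ["capital(texas)", "city(dallas)", "len(rio_grande)"]
print(whole_sentence_accuracy(pred, gold))  # 2 of 3 exactly correct
```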
Related Work
Related work on directly measuring the semantic representation includes the method in (Dridan and Oepen, 2011), which evaluates semantic parser output directly by comparing semantic substructures, though they require an alignment between sentence spans and semantic substructures.
Using Smatch
(Jones et al., 2012) use it to evaluate automatic semantic parsing in a narrow domain, while Ulf Hermjakob has developed a heuristic algorithm that exploits and supplements Ontonotes annotations (Pradhan et al., 2007) in order to automatically create AMRs for Ontonotes sentences, with a smatch score of 0.74 against human consensus AMRs.
semantic parsing is mentioned in 7 sentences in this paper.
Liu, Changsong and She, Lanbo and Fang, Rui and Chai, Joyce Y.
Evaluation and Discussion
We first applied the semantic parser and coreference classifier as described in Section 4.1 to process each dialogue, and then built a graph representation based on the automatic processing results at the end of the dialogue.
Probabilistic Labeling for Reference Grounding
Our system first processes the data using automatic semantic parsing and coreference resolution.
Probabilistic Labeling for Reference Grounding
For semantic parsing, we use a rule-based CCG parser (Bozsahin et al., 2005) to parse each utterance into a formal semantic representation.
Probabilistic Labeling for Reference Grounding
Based on the semantic parsing and pairwise coreference resolution results, our system further builds a graph representation to capture the collaborative discourse and formulate referential grounding as a probabilistic labeling problem, as described next.
Related Work
These works have provided valuable insights on how to manually and/or automatically build key components (e.g., semantic parsing, grounding functions between visual features and words, mapping procedures) for a situated referential grounding system.
semantic parsing is mentioned in 7 sentences in this paper.
Lo, Chi-kiu and Beloucif, Meriem and Saers, Markus and Wu, Dekai
Related Work
MEANT is easily portable to other languages, requiring only an automatic semantic parser and a large monolingual corpus in the output language for identifying the semantic structures and the lexical similarity between the semantic role fillers of the reference and translation.
Related Work
Apply an input language automatic shallow semantic parser to the foreign input and an output language automatic shallow semantic parser to the MT output.
Related Work
(Figure 2 shows examples of automatic shallow semantic parses on both foreign input and MT output.
XMEANT: a cross-lingual MEANT
To aggregate individual lexical translation probabilities into phrasal similarities between cross-lingual semantic role fillers, we compared two natural approaches to generalizing MEANT’s method of comparing semantic parses, as described below.
XMEANT: a cross-lingual MEANT
The first natural approach is to extend MEANT’s f-score based method of aggregating semantic parse accuracy, so as to also apply to aggregating …
semantic parsing is mentioned in 6 sentences in this paper.
Xie, Boyi and Passonneau, Rebecca J. and Wu, Leon and Creamer, Germán G.
Discussion
7.1 Semantic Parse Quality
Discussion
On a small, randomly selected sample of sentences from all three sectors, two of the authors working independently evaluated the semantic parses, with approximately 80% agreement.
Methods
The semantic parses of both methods are derived from SEMAFOR1 (Das and Smith, 2012; Das and Smith, 2011), which solves the semantic parsing problem by rule-based target identification, log-linear model based frame identification and frame element filling.
Methods
The top of Figure 2 shows the semantic parse for sentence a from section 2; we use it to illustrate tree construction for designated object Oracle.
Related Work
We explore a rich feature space that relies on frame semantic parsing .
semantic parsing is mentioned in 5 sentences in this paper.
Flanigan, Jeffrey and Thomson, Sam and Carbonell, Jaime and Dyer, Chris and Smith, Noah A.
Introduction
Semantic parsing is the problem of mapping natural language strings into meaning representations.
Introduction
1To date, a graph transducer-based semantic parser has not been published, although the Bolinas toolkit (http://www.isi.edu/publications/licensed-sw/bolinas/) contains much of the necessary infrastructure.
Introduction
comparison of these two approaches is beyond the scope of this paper, we emphasize that—as has been observed with dependency parsing—a diversity of approaches can shed light on complex problems such as semantic parsing .
Related Work
While all semantic parsers aim to transform natural language text to a formal representation of its meaning, there is wide variation in the meaning representations and parsing techniques used.
Related Work
In contrast, semantic dependency parsing—in which the vertices in the graph correspond to the words in the sentence—is meant to make semantic parsing feasible for broader textual domains.
semantic parsing is mentioned in 5 sentences in this paper.
Lo, Chi-kiu and Wu, Dekai
Abstract
We then replace the human semantic role annotators with automatic shallow semantic parsing to further automate the evaluation metric, and show that even the semiautomated evaluation metric achieves a 0.34 correlation coefficient with human adequacy judgment, which is still about 80% as closely correlated as HTER despite an even lower labor cost for the evaluation procedure.
Abstract
Finally, we show that replacing the human semantic role labelers with an automatic shallow semantic parser in our proposed metric yields an approximation that is about 80% as closely correlated with human judgment as HTER, at an even lower cost—and is still far better correlated than n-gram based evaluation metrics.
Abstract
It is now worth asking a deeper question: can we further reduce the labor cost of MEANT by using automatic shallow semantic parsing instead of humans for semantic role labeling?
semantic parsing is mentioned in 5 sentences in this paper.
Titov, Ivan and Kozhevnikov, Mikhail
A Model of Semantics
This is a weaker form of supervision than the one traditionally considered in supervised semantic parsing, where the alignment is also usually provided in training (Chen and Mooney, 2008; Zettlemoyer and Collins, 2005).
Empirical Evaluation
semantic parsing) accuracy is not possible on this dataset, as the data does not contain information which fields are discussed.
Inference with NonContradictory Documents
The alignment a defines how semantics is verbalized in the text w, and it can be represented by a meaning derivation tree in case of full semantic parsing (Poon and Domingos, 2009) or, e.g., by a hierarchical segmentation into utterances along with an utterance-field alignment in a more shallow variation of the problem.
Inference with NonContradictory Documents
In semantic parsing , we aim to find the most likely underlying semantics and alignment given the text:
Introduction
In recent years, there has been increasing interest in statistical approaches to semantic parsing.
semantic parsing is mentioned in 5 sentences in this paper.
Angeli, Gabor and Uszkoreit, Jakob
Abstract
We present a language independent semantic parser for learning the interpretation of temporal phrases given only a corpus of utterances and the times they reference.
Evaluation
• ParsingTime (Angeli et al., 2012), a semantic parser for temporal expressions, similar to this system (see Section 2).
Related Work
As in this previous work, our approach draws inspiration from work on semantic parsing.
Related Work
Supervised approaches to semantic parsing prominently include Zelle and Mooney (1996), Zettlemoyer and Collins (2005), Kate et al.
semantic parsing is mentioned in 4 sentences in this paper.
Croce, Danilo and Giannone, Cristina and Annesi, Paolo and Basili, Roberto
Conclusions
The obtained results are close to the state-of-the-art in FrameNet semantic parsing.
Introduction
The availability of large scale semantic lexicons, such as FrameNet (Baker et al., 1998), allowed the adoption of a wide family of learning paradigms in the automation of semantic parsing.
Introduction
The above problems are particularly critical for frame-based shallow semantic parsing where, as opposed to more syntactic-oriented semantic labeling schemes (as Propbank (Palmer et al., 2005)), a significant mismatch exists between the semantic descriptors and the underlying syntactic annotation level.
Related Work
In (Johansson and Nugues, 2008b) the impact of different grammatical representations on the task of frame-based shallow semantic parsing is studied and the poor lexical generalization problem is outlined.
semantic parsing is mentioned in 4 sentences in this paper.
Lei, Tao and Long, Fan and Barzilay, Regina and Rinard, Martin
Abstract
Our results show that our approach achieves 80.0% F-Score accuracy compared to an F-Score of 66.7% produced by a state-of-the-art semantic parser on a dataset of input format specifications from the ACM International Collegiate Programming Contest (which were written in English for humans with no intention of providing support for automated processing).1
Experimental Setup
The second baseline Aggressive is a state-of-the-art semantic parsing framework (Clarke et al., 2010).8 The framework repeatedly predicts hidden structures (specification trees in our case) using a structure learner, and trains the structure learner based on the execution feedback of its predictions.
Introduction
However, when trained using the noisy supervision, our method achieves substantially more accurate translations than a state-of-the-art semantic parser (Clarke et al., 2010) (specifically, 80.0% in F-Score compared to an F-Score of 66.7%).
semantic parsing is mentioned in 3 sentences in this paper.