Index of papers in Proc. ACL 2014 that mention
  • syntactic parsing
Krishnamurthy, Jayant and Mitchell, Tom M.
Abstract
The trained parser produces a full syntactic parse of any sentence, while simultaneously producing logical forms for portions of the sentence that have a semantic representation within the parser’s predicate vocabulary.
Introduction
Integrating syntactic parsing with semantics has long been a goal of natural language processing and is expected to improve both syntactic and semantic processing.
Introduction
For example, semantics could help predict the differing prepositional phrase attachments in “I caught the butterfly with the net” and “I caught the butterfly with the spots.” A joint analysis could also avoid propagating syntactic parsing errors into semantic processing, thereby improving performance.
Introduction
ideally improve the parser’s ability to solve difficult syntactic parsing problems, as in the examples above.
Prior Work
This paper combines two lines of prior work: broad coverage syntactic parsing with CCG and semantic parsing.
Prior Work
Broad coverage syntactic parsing with CCG has produced both resources and successful parsers.
Prior Work
The parser presented in this paper can be viewed as a combination of both a broad coverage syntactic parser and a semantic parser trained using distant supervision.
syntactic parsing is mentioned in 24 sentences in this paper.
Hermann, Karl Moritz and Das, Dipanjan and Weston, Jason and Ganchev, Kuzman
Abstract
We present a novel technique for semantic frame identification using distributed representations of predicates and their syntactic context; this technique leverages automatic syntactic parses and a generic set of word embeddings.
Discussion
combination of two syntactic parsers as input.
Frame Identification with Embeddings
Formally, let x represent the actual sentence with a marked predicate, along with the associated syntactic parse tree; let our initial representation of the predicate context be g(x). Suppose that the word embeddings we start with are of dimension n. Then g is a function from a parsed sentence x to R^{nk}, where k is the number of possible syntactic context types.
Overview
We could represent the syntactic context of runs as a vector with blocks for all the possible dependents warranted by a syntactic parser; for example, we could assume that positions 0 …
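The block-vector idea in the excerpt above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the embedding values, dependency labels, and dictionary layout are all hypothetical, chosen only to show how one block per syntactic context type is filled with a dependent's embedding (or zeros when that dependent is absent).

```python
# Toy word embeddings of dimension n = 3; values are illustrative only.
EMBED = {
    "he":   [0.1, 0.2, 0.3],
    "fast": [0.7, 0.8, 0.9],
}
N = 3

# Possible syntactic context types (k = 2 here); one vector block per label.
CONTEXT_TYPES = ["nsubj", "advmod"]

def context_vector(dependents):
    """Map a predicate's dependents {label: word} to a vector of length n*k:
    block i holds the embedding of the dependent with label i, or zeros
    when no dependent of that type is present."""
    vec = []
    for label in CONTEXT_TYPES:
        word = dependents.get(label)
        vec.extend(EMBED[word] if word else [0.0] * N)
    return vec

# "runs" with subject "he" and no adverbial modifier:
vec = context_vector({"nsubj": "he"})
```

The resulting vector has length n*k = 6: the first block carries the subject's embedding, the second block stays zero because no adverbial modifier was seen.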
syntactic parsing is mentioned in 4 sentences in this paper.
Li, Sujian and Wang, Liang and Cao, Ziqiang and Li, Wenjie
Add arc <e_C, e_j> to G_C with
The other two types of features, which are related to length and syntactic parsing, improve the performance only slightly.
Add arc <e_C, e_j> to G_C with
Since the RST tree is similar to the constituency-based syntactic tree except that the constituent nodes are different, syntactic parsing techniques have been borrowed for discourse parsing (Soricut and Marcu, 2003; Baldridge and Lascarides, 2005; Sagae, 2009; Hernault et al., 2010b; Feng and Hirst, 2012).
Introduction
Since such a hierarchical discourse tree is analogous to a constituency-based syntactic tree except that the constituents in the discourse trees are text spans, previous research has explored different constituency-based syntactic parsing techniques (e.g.
Introduction
First, it is difficult to design a set of production rules as in syntactic parsing, since there are no determinate generative rules for the interior text spans.
syntactic parsing is mentioned in 4 sentences in this paper.
Li, Junhui and Marton, Yuval and Resnik, Philip and Daumé III, Hal
Experiments
To obtain syntactic parse trees and semantic roles on the tuning and test datasets, we first parse the source sentences with the Berkeley Parser (Petrov and Klein, 2007), trained on the Chinese Treebank 7.0 (Xue et al., 2005).
Experiments
Since the syntactic parses of the tuning and test data contain 29 types of constituent labels and 35 types of POS tags, we have 29 types of XP+ features and 64 types of XP= features.
Related Work
The reordering rules were either manually designed (Collins et al., 2005; Wang et al., 2007; Xu et al., 2009; Lee et al., 2010) or automatically learned (Xia and McCord, 2004; Gen-zel, 2010; Visweswariah et al., 2010; Khalilov and Sima’an, 2011; Lerner and Petrov, 2013), using syntactic parses.
Related Work
(2012) obtained word order by using a reranking approach to reposition nodes in syntactic parse trees.
syntactic parsing is mentioned in 4 sentences in this paper.
Zhang, Meishan and Zhang, Yue and Che, Wanxiang and Liu, Ting
Character-Level Dependency Tree
A transition-based framework with global learning and beam search decoding (Zhang and Clark, 2011) has been applied to a number of natural language processing tasks, including word segmentation, POS-tagging and syntactic parsing (Zhang and Clark, 2010; Huang and Sagae, 2010; Bohnet and Nivre, 2012; Zhang et al., 2013).
Character-Level Dependency Tree
Both are crucial to well-established features for word segmentation, POS-tagging and syntactic parsing.
Character-Level Dependency Tree
(2013) was the first to perform Chinese syntactic parsing over characters.
Introduction
Second, word internal structures can also be useful for syntactic parsing .
syntactic parsing is mentioned in 4 sentences in this paper.
Bhat, Suma and Xue, Huichao and Yoon, Su-Youn
Conclusions
Seeking alternatives to measuring syntactic complexity of spoken responses via syntactic parsers, we study a shallow-analysis based approach for use in automatic scoring.
Related Work
Not surprisingly, Chen and Zechner (2011) studied measures of grammatical complexity via syntactic parsing and found that the Pearson’s correlation coefficient of 0.49 between syntactic complexity measures (derived from manual transcriptions) and proficiency scores was drastically reduced to near nonexistence when the measures were applied to ASR word hypotheses.
Shallow-analysis approach to measuring syntactic complexity
The measures of syntactic complexity in this approach are POS bigrams and are not obtained by a deep analysis (syntactic parsing) of the structure of the sentence.
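The shallow-analysis idea above can be sketched in a few lines: extract POS-bigram counts from a tagged sentence, with no parser in the loop. The tag sequence below is a hypothetical Penn Treebank-style tagging chosen for illustration; the papers' actual feature extraction may differ.

```python
from collections import Counter

def pos_bigram_features(tags):
    """Count adjacent POS-tag pairs -- a shallow proxy for syntactic
    complexity that needs only a POS tagger, not a syntactic parser."""
    return Counter(zip(tags, tags[1:]))

# Hypothetical tag sequence for "the cat sat on the mat":
feats = pos_bigram_features(["DT", "NN", "VBD", "IN", "DT", "NN"])
```

Because the features never leave the tag level, they degrade far more gracefully on noisy ASR output than parser-derived measures.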
syntactic parsing is mentioned in 3 sentences in this paper.
Gormley, Matthew R. and Mitchell, Margaret and Van Durme, Benjamin and Dredze, Mark
Approaches
In Section 3.1, we introduced pipeline-trained models for SRL, which used grammar induction to predict unlabeled syntactic parses.
Related Work
In this simple pipeline, the first stage syntactically parses the corpus, and the second stage predicts semantic predicate-argument structure for each sentence using the labels of the first stage as features.
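The two-stage pipeline described in the excerpt above can be sketched as follows. Everything here is a stub for illustration: the parser's fixed output, the token/label pairing, and the feature format are all hypothetical, but the structure shows how stage-1 labels feed directly into stage-2 features, which is exactly why stage-1 errors propagate.

```python
def parse(tokens):
    """Stage 1 (stub): one dependency label per token. A real pipeline
    would run a trained syntactic parser here; this fixed output is
    purely illustrative."""
    return ["nsubj", "root", "dobj"]

def srl_features(tokens, dep_labels):
    """Stage 2: semantic-role features built from the stage-1 labels,
    so any parser error flows straight into the semantic stage."""
    return [f"{tok}|{lab}" for tok, lab in zip(tokens, dep_labels)]

tokens = ["she", "reads", "books"]
feats = srl_features(tokens, parse(tokens))
```

A joint or distantly supervised model, as discussed in these papers, avoids this one-way dependence by letting the semantic evidence inform the syntactic analysis.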
Related Work
In our low-resource pipelines, we assume that the syntactic parser is given no labeled parses—however, it may optionally utilize the semantic parses as distant supervision.
syntactic parsing is mentioned in 3 sentences in this paper.
Ma, Ji and Zhang, Yue and Zhu, Jingbo
Abstract
Experiments on the SANCL 2012 shared task show that our approach achieves 93.15% average tagging accuracy, which is the best accuracy reported so far on this data set, higher than those given by ensembled syntactic parsers.
Conclusion
For future work, we would like to investigate the two-phase approach to more challenging tasks, such as web domain syntactic parsing .
Introduction
set, higher than those given by ensembled syntactic parsers.
syntactic parsing is mentioned in 3 sentences in this paper.