Abstract | We also present a new transition-based dependency parsing algorithm that gives a complexity of O(n) for projective parsing and expected linear time for non-projective parsing.
Conclusion | Coupled with our new hybrid parsing algorithm, ADAGRAD, rich non-local features, and bootstrapping, our parser gives higher parsing accuracy than most other transition-based dependency parsers in multiple languages, at a faster parsing speed.
Experiments | Our greedy approach outperforms both Nivre (2009) and Fernández-González and Gómez-Rodríguez (2012), who use different non-projective parsing algorithms.
Introduction | We first present a new transition-based parsing algorithm that gives a complexity of O(n) for projective parsing and expected linear time for non-projective parsing.
Related work | Our parsing algorithm is most similar to that of Choi and Palmer (2011), who integrated our LEFT-REDUCE transition into Nivre’s list-based algorithm.
Related work | There are other transition-based dependency parsing algorithms that take a similar approach: Nivre (2009) integrated a SWAP transition into Nivre’s arc-standard algorithm (Nivre, 2004), and Fernández-González and Gómez-Rodríguez (2012) integrated a buffer transition into Nivre’s arc-eager algorithm to handle non-projectivity.
Transition-based dependency parsing | We introduce a transition-based dependency parsing algorithm that is a hybrid between Nivre’s arc-eager and list-based algorithms (Nivre, 2003; Nivre, 2008). |
Transition-based dependency parsing | Nivre’s arc-eager is a projective parsing algorithm showing a complexity of O(n); Nivre’s list-based algorithm is a non-projective parsing algorithm showing a complexity of O(n^2).
Transition-based dependency parsing | The parsing complexity of a transition-based dependency parsing algorithm is determined by the number of transitions performed with respect to the number of tokens in a sentence, say n (Kübler et al., 2009).
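This transition-count view of complexity can be made concrete with a minimal sketch. The shift-reduce loop below is not the paper's algorithm; it uses an assumed trivial right-attachment policy in place of a trained oracle, purely to show that each token is shifted exactly once and popped at most once, so at most 2n transitions are performed:

```python
def right_branching_parse(tokens):
    """Toy shift-reduce parse attaching each token to its predecessor.

    Illustrates the transition-count argument: every token enters the
    stack exactly once (SHIFT) and leaves it at most once (REDUCE),
    so the number of transitions is at most 2n, i.e. O(n).
    """
    stack, arcs, transitions = [], [], 0
    for i, _ in enumerate(tokens):       # consume the buffer left to right
        if stack:
            arcs.append((stack[-1], i))  # attach: head = current stack top
        stack.append(i)                  # SHIFT
        transitions += 1
    while len(stack) > 1:                # REDUCE the remaining stack items
        stack.pop()
        transitions += 1
    return arcs, transitions
```

For a sentence of n tokens this performs n SHIFTs and n-1 REDUCEs, i.e. 2n-1 transitions, matching the O(n) bound discussed above.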
Abstract | Our parser builds a discourse tree by applying an optimal parsing algorithm to probabilities inferred from two Conditional Random Fields: one for intra-sentential parsing and the other for multi-sentential parsing. |
Conclusion | In this paper, we have presented a novel discourse parser that applies an optimal parsing algorithm to probabilities inferred from two CRF models: one for intra-sentential parsing and the other for multi-sentential parsing. |
Introduction | Second, existing parsers apply greedy and suboptimal parsing algorithms to build the DT for a document. |
Our Discourse Parsing Framework | Both of our parsers have the same two components: a parsing model assigns a probability to every possible DT, and a parsing algorithm identifies the most probable DT among the candidate DTs in that scenario. |
Our Discourse Parsing Framework | While the two models are rather different, the same parsing algorithm is shared by the two modules. |
Our Discourse Parsing Framework | Before describing our parsing models and the parsing algorithm , we introduce some terminology that we will use throughout the paper. |
Parsing Models and Parsing Algorithm | Once we obtain the probability of all possible DT constituents, the discourse sub-trees for the sentences are built by applying an optimal probabilistic parsing algorithm (Section 4.4) using one of the methods described in Section 5. |
Parsing Models and Parsing Algorithm | 4.4 Parsing Algorithm |
Parsing Models and Parsing Algorithm | Given the probability of all possible DT constituents in the intra-sentential and multi-sentential scenarios, the job of the parsing algorithm is to find the most probable DT for that scenario. |
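A generic way to find the most probable tree from per-constituent probabilities is CKY-style dynamic programming. The sketch below is a minimal illustration of that idea, not the authors' exact implementation; `prob` is an assumed map from a span `(i, j)` to the probability that units `i..j-1` form a constituent, and a tree's score is taken to be the product of its constituents' probabilities:

```python
def best_tree(n, prob):
    """CKY-style search for the most probable tree over n leaf units.

    prob[(i, j)] is the (assumed) probability that units i..j-1 form a
    constituent; missing spans default to probability 1.0.  Dynamic
    programming maximizes the product of constituent probabilities.
    """
    best = {}  # span -> (best score, best split point or None)
    for i in range(n):
        best[(i, i + 1)] = (prob.get((i, i + 1), 1.0), None)
    for width in range(2, n + 1):
        for i in range(n - width + 1):
            j = i + width
            score, split = max(
                (best[(i, k)][0] * best[(k, j)][0], k)
                for k in range(i + 1, j)
            )
            best[(i, j)] = (score * prob.get((i, j), 1.0), split)
    return best[(0, n)][0]
```

Recording the best split point for each span allows the tree itself to be recovered by backtracking from the full span `(0, n)`.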
Conclusion | The parsing algorithms described in this paper are implemented in the Bolinas toolkit.
Parsing | Just as CKY deals with substrings (i, j] of the input, the HRG parsing algorithm deals with edge-induced subgraphs I of the input. |
Parsing | This result, in turn, will allow us to bound the complexity of the parsing algorithm in terms of the treewidth of G. |
Parsing | We present the parsing algorithm as a deductive system (Shieber et al., 1995). |
Synchronous Parsing | In the synchronous parsing algorithm, passive items have the form [A, I, X, I′, X′] and active items have the form [A → R : R′, n, I, φ, I′, φ′].
Synchronous Parsing | The complexity of the parsing algorithm is again in O((3dn)^(k+1)), where k is now the maximum treewidth of the dependency graph as defined in this section.
Abstract | We introduce a shift-reduce parsing algorithm for phrase-based string-to-dependency translation. |
Introduction | While parsing algorithms can be used to parse partial translations in phrase-based decoding, the search space is significantly enlarged since there are exponentially many parse trees for exponentially many translations. |
Introduction | In this paper, we propose a shift-reduce parsing algorithm for phrase-based string-to-dependency translation. |
Introduction | 4. exploiting syntactic information: as the shift-reduce parsing algorithm generates target language dependency trees in decoding, dependency language models (Shen et al., 2008; Shen et al., 2010) can be used to encourage linguistically-motivated reordering. |
B_k ← BESTS(X, B_{k-1}, W) | The parsing algorithm is greedy, exploring only a tiny fraction of the search space.
Easy-first dependency parsing | The easy-first dependency parsing algorithm (Goldberg and Elhadad, 2010) builds a dependency tree by applying two types of actions, LEFT(i) and RIGHT(i), to a list of subtree structures p1, …
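The easy-first action loop described above can be sketched as follows. This is a minimal illustration, not the Goldberg and Elhadad implementation: `score(action, i, pending)` is a hypothetical stand-in for their trained scoring model, and the list holds subtree roots by token index:

```python
def easy_first_parse(n, score):
    """Toy easy-first loop over a list of pending subtree roots.

    At each step the highest-scoring action is applied: LEFT(i)
    attaches pending[i] under pending[i+1], and RIGHT(i) attaches
    pending[i+1] under pending[i].  `score` is an assumed stand-in
    for the learned model of Goldberg and Elhadad (2010).
    """
    pending = list(range(n))  # each token starts as its own subtree root
    arcs = []                 # collected (head, dependent) pairs
    while len(pending) > 1:
        action, i = max(
            ((a, i) for i in range(len(pending) - 1)
             for a in ("LEFT", "RIGHT")),
            key=lambda ai: score(ai[0], ai[1], pending),
        )
        if action == "LEFT":                      # pending[i+1] is the head
            arcs.append((pending[i + 1], pending[i]))
            pending.pop(i)
        else:                                     # pending[i] is the head
            arcs.append((pending[i], pending[i + 1]))
            pending.pop(i + 1)
    return arcs
```

Each iteration removes one element from the pending list, so a sentence of n tokens is parsed in exactly n-1 actions; the "easy-first" behavior comes entirely from the scoring function choosing which attachment to commit to next.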
Introduction | The easy-first dependency parsing algorithm (Goldberg and Elhadad, 2010) is attractive due to its good accuracy, fast speed and simplicity. |