Bottom-up tree-building | Therefore, our model incorporates the strengths of both HILDA and Joty et al.’s model, i.e., the efficiency of a greedy parsing algorithm, and the ability to incorporate sequential information with CRFs. |
Introduction | However, as an optimal discourse parser, Joty et al.’s model is highly inefficient in practice, with respect to both their DCRF-based local classifiers and their CKY-like bottom-up parsing algorithm. |
Introduction | CKY parsing is a bottom-up parsing algorithm which searches all possible parsing paths by dynamic programming. |
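The span-based dynamic program behind CKY-style parsing can be sketched as follows; the unit count `n` and the merge-scoring function `score(i, k, j)` are illustrative assumptions, not any particular paper’s model:

```python
# Minimal sketch of CKY-style bottom-up parsing by dynamic programming.
# `score(i, k, j)` is a hypothetical function scoring the merge of the
# adjacent spans [i, k) and [k, j); units are indexed 0..n-1.

def cky_best_tree(n, score):
    """Return the best score for a binary tree over units 0..n-1,
    plus backpointers recording the winning split of each span."""
    best = {}   # best[(i, j)]: best score for span [i, j)
    back = {}   # back[(i, j)]: split point k achieving that score
    for i in range(n):
        best[(i, i + 1)] = 0.0          # single units need no merge
    for length in range(2, n + 1):      # widen spans bottom-up
        for i in range(n - length + 1):
            j = i + length
            # try every split point k, keeping the best one
            best[(i, j)], back[(i, j)] = max(
                (best[(i, k)] + best[(k, j)] + score(i, k, j), k)
                for k in range(i + 1, j)
            )
    return best[(0, n)], back
```

Backpointers in `back` let the best binary tree be recovered top-down; the triple loop over spans and split points gives the characteristic O(n^3) runtime that makes CKY-style search expensive on long documents.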
Introduction | As a result of successfully avoiding the expensive non-greedy parsing algorithms, our discourse parser is very efficient in practice. |
Related work | Their model assigns a probability to each possible constituent, and a CKY-like parsing algorithm finds the globally optimal discourse tree, given the computed probabilities. |
Related work | The inefficiency lies in both their DCRF-based joint model, on which inference is usually slow, and their CKY-like parsing algorithm, where the problem is even more prominent. |
Results and Discussion | gSVMFH in the second row is our implementation of HILDA’s greedy parsing algorithm using Feng and Hirst (2012)’s enhanced feature set. |
Results and Discussion | First, as shown in Table 2, the average number of sentences in a document is 26.11, which is already too large for optimal parsing models, e.g., the CKY-like parsing algorithm in jCRF, let alone the fact that the largest document contains several hundred EDUs and sentences. |
Experimental Framework | Table 1: LAS results with several parsing algorithms in the Penn2Malt conversion (†: p < 0.05, ‡: p < 0.005). |
Experimental Framework | Table 2: LAS results with several parsing algorithms in the LTH conversion (†: p < 0.05, ‡: p < 0.005). |
Results | We can also conclude that automatically acquired clusters are especially effective with the MST parser in both treebank conversions, which suggests that the type of semantic information has a direct relation to the parsing algorithm. |
Abstract | Although finding the “minimal” latent tree is NP-hard in general, for the case of projective trees it can be found using bilexical parsing algorithms. |
Abstract | While optimizing this criterion is in general NP-hard (Desper and Gascuel, 2005), for projective trees we find that a bilexical parsing algorithm can be used to find an exact solution efficiently (Eisner and Satta, 1999). |
Abstract | However, if we restrict u to be in L1, as we do in the above, then maximizing over L1 can be solved using the bilexical parsing algorithm from Eisner and Satta (1999). |
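As a concrete illustration of this style of dynamic programming over projective trees, here is a minimal sketch of Eisner’s (1996) first-order projective dependency parser, a close relative of the Eisner–Satta (1999) algorithm cited above (not the paper’s own objective); the arc-score matrix `score` is a hypothetical input:

```python
# Sketch of Eisner's (1996) O(n^3) dynamic program for first-order
# projective dependency parsing, with token 0 as an artificial ROOT.
NEG = float("-inf")

def eisner(score):
    """Return the best total arc score over projective trees rooted at 0.

    `score[h][m]` is a hypothetical score for the arc head h -> modifier m.
    C[i][j][d][c]: best score for span i..j; d: 0 = head on right (j),
    1 = head on left (i); c: 1 = complete span, 0 = incomplete.
    """
    n = len(score)
    C = [[[[NEG, NEG], [NEG, NEG]] for _ in range(n)] for _ in range(n)]
    for i in range(n):
        for d in (0, 1):
            C[i][i][d][1] = 0.0                   # single-word spans
    for length in range(1, n):
        for i in range(n - length):
            j = i + length
            # incomplete spans: add the arc between i and j
            left = max(C[i][k][1][1] + C[k + 1][j][0][1] for k in range(i, j))
            C[i][j][0][0] = left + score[j][i]    # arc j -> i
            C[i][j][1][0] = left + score[i][j]    # arc i -> j
            # complete spans: absorb an incomplete half
            C[i][j][0][1] = max(C[i][k][0][1] + C[k][j][0][0]
                                for k in range(i, j))
            C[i][j][1][1] = max(C[i][k][1][0] + C[k][j][1][1]
                                for k in range(i + 1, j + 1))
    return C[0][n - 1][1][1]
```

The split of each span into “complete” and “incomplete” halves ensures every arc is scored exactly once, yielding the same O(n^3) complexity as the bilexical algorithms discussed above.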