Compressive Summarization | 8 efficiently with dynamic programming (using the Viterbi algorithm for trees); the total cost is linear in Ln.
Compressive Summarization | 8; this can be done in O(Ln) time with dynamic programming, as discussed in §3.2.
Compressive Summarization | This can be computed exactly in time O(B ∑k Lnk) through dynamic programming.
Introduction | For example, such solvers are unable to take advantage of efficient dynamic programming routines for sentence compression (McDonald, 2006). |
MultiTask Learning | 9 can be used for the maximization above: for tasks #1–#2, we solve a relaxation by running AD3 without rounding, and for task #3 we use dynamic programming; see Table 1.
Document-level Parsing Approaches | We pick the subtree which has the higher probability in the two dynamic programming tables. |
Document-level Parsing Approaches | If the sentence has the same number of sub-trees in both DTp and DTn, we pick the one with higher probability in the dynamic programming tables. |
Parsing Models and Parsing Algorithm | Following Joty et al. (2012), we implement a probabilistic CKY-like bottom-up algorithm for computing the most likely parse using dynamic programming.
Parsing Models and Parsing Algorithm | Specifically, with n discourse units, we use the upper-triangular portion of the n×n dynamic programming table D. Given Ux(0) and Ux(1) are the start and end EDU IDs of unit Ux:
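The CKY-style bottom-up computation over the upper-triangular table described above can be sketched as follows. This is an illustrative sketch, not the authors' implementation: `prob(i, k, j)` is a hypothetical scoring function giving the probability that the spans of units [i, k) and [k, j) merge into one subtree.

```python
# Illustrative CKY-style bottom-up parse over n discourse units.
# prob(i, k, j) is a hypothetical span-merging score (assumption, not
# the paper's trained model).

def cky_parse(n, prob):
    # D[i][j]: probability of the best tree over units i..j-1 (j exclusive);
    # back[i][j] remembers the best split point for recovering the tree.
    D = [[0.0] * (n + 1) for _ in range(n + 1)]
    back = [[None] * (n + 1) for _ in range(n + 1)]
    for i in range(n):
        D[i][i + 1] = 1.0  # single units are trivially "parsed"
    for width in range(2, n + 1):          # widen spans bottom-up
        for i in range(0, n - width + 1):
            j = i + width
            for k in range(i + 1, j):      # try every split point
                score = D[i][k] * D[k][j] * prob(i, k, j)
                if score > D[i][j]:
                    D[i][j], back[i][j] = score, k
    return D, back
```

Only the upper triangle of `D` (j > i) is ever touched, matching the table layout described in the snippet.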
Conclusion and perspectives | We employed dynamic programming on hierarchies of indicators to compute the feature space providing the best pairwise classifications efficiently. |
Hierarchizing feature spaces | Now selecting the best space for one of these measures can be achieved by using dynamic programming techniques. |
Hierarchizing feature spaces | We use a dynamic programming technique to compute the best hierarchy by cutting this tree and only keeping classifiers situated at the leaves.
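The tree-cutting selection described above can be sketched as a simple recursive dynamic program. This is a sketch under assumptions: `score(node)` and `children(node)` are hypothetical stand-ins for the paper's classifier-quality measure and hierarchy structure.

```python
# Sketch of a tree-cutting DP: at each node, either keep the classifier
# at that node (treat it as a leaf of the cut) or recurse into its
# children, keeping whichever choice scores better.
# `score` and `children` are hypothetical inputs (assumptions).

def best_cut(node, score, children):
    kids = children(node)
    if not kids:
        return score(node), [node]      # true leaf: no choice to make
    child_score, child_leaves = 0.0, []
    for c in kids:
        s, ls = best_cut(c, score, children)
        child_score += s
        child_leaves += ls
    if score(node) >= child_score:      # cutting here beats recursing
        return score(node), [node]
    return child_score, child_leaves
```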
Introduction | hierarchies with dynamic programming.
Parallel Segment Retrieval | We shall describe our model to solve the first problem in §3.1 and our dynamic programming approach to make the inference tractable in §3.2.
Parallel Segment Retrieval | We will show that dynamic programming can be used to make this problem tractable, using Model 1.
Parallel Segment Retrieval | an iterative approach to compute the Viterbi word alignments for IBM Model 1 using dynamic programming.
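Under IBM Model 1's independence assumptions, the Viterbi alignment decomposes word by word: each target word simply aligns to the source word maximizing its translation probability. A minimal sketch, in which the translation table `t` is a hypothetical input rather than the paper's trained model:

```python
# Viterbi word alignment under IBM Model 1 (a sketch, not the paper's code).
# Because Model 1 treats each target word's alignment as independent given
# the sentence pair, the Viterbi alignment decomposes: each target word f_j
# aligns to the source word e_i maximizing t(f_j | e_i).
# `t` is a hypothetical translation table {(f, e): prob} (assumption).

def viterbi_align(src, tgt, t):
    alignment = []
    for j, f in enumerate(tgt):
        best_i = max(range(len(src)), key=lambda i: t.get((f, src[i]), 0.0))
        alignment.append((best_i, j))
    return alignment
```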
Introduction | Consequently, the task of sentence alignment can be solved readily with basic techniques such as dynamic programming.
Methodology 2.1 The Problem | Note that it is relatively straightforward to identify many-to-many alignments in monotonic alignment using techniques such as dynamic programming, provided there is no scrambled pairing or the scrambled pairings are local, limited to a short distance.
Methodology 2.1 The Problem | Both use dynamic programming to search for the best alignment. |
Methodology 2.1 The Problem | (2007), a generative model is proposed, accompanied by two specific alignment strategies, i.e., dynamic programming and divisive clustering. |
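The dynamic-programming search for the best monotonic sentence alignment mentioned in these passages can be sketched in the Gale-Church style. This is an illustrative sketch: `cost(i, j, di, dj)` is a hypothetical cost of aligning a block of `di` source sentences ending at `i` with `dj` target sentences ending at `j`, and the allowed block shapes are a typical (assumed) set.

```python
# Monotonic sentence-alignment DP sketch (Gale-Church style).
# Allowed block shapes here are 1-0, 0-1, 1-1, 1-2, 2-1 (assumption).
# `cost` is a hypothetical alignment-cost function (assumption).

def align(n, m, cost):
    INF = float("inf")
    # D[i][j]: minimal cost of aligning the first i source sentences
    # with the first j target sentences.
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    moves = [(1, 0), (0, 1), (1, 1), (1, 2), (2, 1)]
    for i in range(n + 1):
        for j in range(m + 1):
            if D[i][j] == INF:
                continue
            for di, dj in moves:         # extend by one alignment block
                ni, nj = i + di, j + dj
                if ni <= n and nj <= m:
                    c = D[i][j] + cost(i, j, di, dj)
                    if c < D[ni][nj]:
                        D[ni][nj] = c
    return D[n][m]
```

Because every move advances monotonically, the table is filled in one pass in O(n·m) time times the number of block shapes.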
Evaluation | The first line is a baseline HMM using exact posterior computation and inference with the standard dynamic programming algorithms. |
HMM alignment | For the standard HMM, there is a dynamic programming algorithm to compute the posterior probability over word alignments Pr(a|e, f). These are the sufficient statistics gathered in the E-step of EM.
HMM alignment | The structure of the fertility model violates the Markov assumptions used in this dynamic programming method. |
HMM alignment | Rather than maximizing each row totally independently, we keep track of the best configurations for each number of words generated in each row, and then pick the best combination that sums to J: another straightforward exercise in dynamic programming . |
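The standard dynamic program behind the HMM posterior computations discussed above is the forward algorithm, which sums over all alignments in O(I²J) time rather than enumerating them. A minimal sketch, where the `trans` and `emit` tables are hypothetical inputs (assumptions), not the paper's trained model:

```python
# Forward-algorithm sketch for an HMM word-alignment model: computes the
# total probability of the target sentence by summing over all alignments.
# trans[i][i2]: hypothetical transition prob between source positions;
# emit[i][j]:   hypothetical prob of target word j given source position i.

def forward(trans, emit, J):
    I = len(trans)                                 # number of HMM states
    alpha = [emit[i][0] / I for i in range(I)]     # uniform initial dist.
    for j in range(1, J):
        # alpha[i2] accumulates probability mass over all predecessors i
        alpha = [emit[i2][j] * sum(alpha[i] * trans[i][i2] for i in range(I))
                 for i2 in range(I)]
    return sum(alpha)
```

Pairing this with the analogous backward pass yields the alignment posteriors used as E-step sufficient statistics; it is exactly the Markov structure this recursion exploits that the fertility model breaks.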
Bilingual Infinite Tree Model | Beam sampling limits the number of possible state transitions for each node to a finite number using slice sampling (Neal, 2003), and then efficiently samples whole hidden state transitions using dynamic programming . |
Bilingual Infinite Tree Model | We can parallelize procedures in sampling u and z because the slice sampling for u and the dynamic programming for z are independent for each sentence.
Bilingual Infinite Tree Model | , T) using dynamic programming as follows: In the joint model, p(zt | xρ(t), zρ(t)) ∝
Introduction | Inference is efficiently carried out by beam sampling (Van Gael et al., 2008), which combines slice sampling and dynamic programming.
Introduction | Coupled with dynamic programming , transition-based dependency parsing with beam search can be done very efficiently and gives significant improvement to parsing accuracy. |
Related work | Huang and Sagae (2010) later applied dynamic programming to this approach and showed improved efficiency. |
Selectional branching | which can be done very efficiently when it is coupled with dynamic programming (Zhang and Clark, 2008; Huang and Sagae, 2010; Zhang and Nivre, 2011; Huang et al., 2012; Bohnet and Nivre, 2012).