Index of papers in Proc. ACL that mention
  • dynamic programming
Raghavan, Preethi and Fosler-Lussier, Eric and Elhadad, Noémie and Lai, Albert M.
Abstract
We address the problem of aligning multiple medical event sequences, corresponding to different clinical narratives, comparing the following approaches: (1) A novel weighted finite state transducer representation of medical event sequences that enables composition and search for decoding, and (2) Dynamic programming with iterative pairwise alignment of multiple sequences using global and local alignment algorithms.
Abstract
We present results using both approaches and observe that the finite state transducer approach performs significantly better than the dynamic programming one by 6.8% for the problem of multiple-sequence alignment.
Introduction
In contrast, we adapt dynamic programming algorithms (Needleman and Wunsch, 1970; Smith and Waterman, 1981) used to produce global and local alignments for aligning sequences of medical events across narratives.
Introduction
dynamic programming or other ILP-based methods proposed in the literature.
Problem Description
We propose a novel WFST-based representation that enables accurate decoding for MSA when compared to popularly used dynamic programming algorithms (Needleman and Wunsch, 1970; Smith and Waterman, 1981) or other state-of-the-art methods (Do et al., 2012).
Problem Description
These scores are used in both the WFST-based representation and decoding, as well as for dynamic programming.
Problem Description
We also use popular dynamic programming algorithms (Needleman and Wunsch, 1970; Smith and Waterman, 1981) for sequence alignment of medical events across narratives and compare them to the WFST-based representation and decoding.
Related Work
Dynamic programming algorithms have been widely used to produce pairwise and global genetic alignments, where edit-distance-based metrics are used to compute the cost of insertions, deletions and substitutions.
Related Work
We use dynamic programming to compute the best alignment, given the temporal and coreference information between medical events across these sequences.
Related Work
We demonstrate that the WFST-based approach outperforms popularly used dynamic programming algorithms for multiple sequence alignment.
dynamic programming is mentioned in 23 sentences in this paper.
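The global and local alignment baselines cited throughout this entry are the classic Needleman-Wunsch and Smith-Waterman dynamic programs. A minimal sketch of the global variant, assuming a placeholder similarity function and gap penalty rather than the paper's temporal/coreference-derived event scores:

```python
# Minimal Needleman-Wunsch global alignment sketch. The similarity
# function and gap penalty are placeholder assumptions, not the paper's
# learned medical-event scores.

GAP = -1.0

def sim(a, b):
    return 1.0 if a == b else -1.0   # placeholder event similarity

def needleman_wunsch(seq1, seq2):
    n, m = len(seq1), len(seq2)
    # dp[i][j] = best score aligning seq1[:i] with seq2[:j]
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * GAP
    for j in range(1, m + 1):
        dp[0][j] = j * GAP
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = max(dp[i - 1][j - 1] + sim(seq1[i - 1], seq2[j - 1]),
                           dp[i - 1][j] + GAP,    # gap in seq2
                           dp[i][j - 1] + GAP)    # gap in seq1
    return dp[n][m]

print(needleman_wunsch(["MRI", "biopsy", "surgery"],
                       ["MRI", "surgery"]))       # -> 1.0
```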
Rush, Alexander M. and Collins, Michael
A Simple Lagrangian Relaxation Algorithm
(2) find the highest scoring derivation using dynamic programming
A Simple Lagrangian Relaxation Algorithm
Steps 1 and 2 can be performed efficiently; in particular, we avoid the classical dynamic programming intersection, instead relying on dynamic programming over the original, simple hypergraph.
A Simple Lagrangian Relaxation Algorithm
• Using dynamic programming, find values for the y_v and y_e variables that form a valid derivation, and that maximize
Abstract
The approach uses Lagrangian relaxation to decompose the decoding problem into tractable sub-problems, thereby avoiding exhaustive dynamic programming.
Introduction
Exact dynamic programming algorithms for the problem are well known (Bar-Hillel et al., 1964), but are too expensive to be used in practice. Previous work on decoding for syntax-based SMT has therefore focused primarily on approximate search methods.
Introduction
Dynamic programming over the weighted hypergraph.
Introduction
We do this by gradually introducing constraints to step 1 (dynamic programming over the hypergraph), while still maintaining efficiency.
The Full Algorithm
The second step involves simple dynamic programming over the hypergraph (V, E) (it is simple to integrate the Lagrange-multiplier terms into this algorithm).
The Full Algorithm
The main steps of the algorithm are: 1) construction of the graph (S, T); 2) at each iteration, dynamic programming over the hypergraph (V, E); 3) at each iteration, all-pairs shortest path algorithms over the graph (S, T).
The Full Algorithm
steps (hypergraph dynamic programming, and all-pairs shortest path) are widely known algorithms that are simple to implement.
dynamic programming is mentioned in 14 sentences in this paper.
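The hypergraph dynamic programming step that recurs in these excerpts is a max-derivation (Viterbi) pass over an acyclic hypergraph. A minimal sketch, assuming a toy hypergraph for illustration rather than the paper's (V, E):

```python
# Viterbi over an acyclic hypergraph: best[v] = max over incoming
# hyperedges of the edge weight plus the best scores of the tail nodes.
# The hypergraph below is a toy assumption, not the paper's (V, E).

def viterbi_hypergraph(nodes, edges):
    # `nodes` must be topologically ordered (tails before heads);
    # `edges` maps each head node to a list of (weight, tail_nodes).
    best = {}
    for v in nodes:
        incoming = edges.get(v, [])
        if not incoming:              # leaf / axiom node
            best[v] = 0.0
        else:
            best[v] = max(w + sum(best[u] for u in tails)
                          for w, tails in incoming)
    return best

nodes = ["a", "b", "c", "goal"]
edges = {"c": [(1.5, ["a", "b"])],
         "goal": [(0.5, ["c"]), (1.0, ["a"])]}
print(viterbi_hypergraph(nodes, edges)["goal"])   # -> 2.0
```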
Zhang, Hao and Gildea, Daniel
Decoding to Maximize BLEU
Our algorithm is another dynamic programming decoding pass on the trigram forest, and is similar to the parsing algorithm for maximizing expected labelled recall presented by Goodman (1996).
Decoding to Maximize BLEU
The expression can be factorized and computed using dynamic programming on the forest.
Experiments
We can also use the dynamic programming hook trick.
Language Model Integrated Decoding for SCFG
From a dynamic programming point of view, the DP states are X[i, j], where X ranges over all possible nonterminals and i and j range from 0 to the input string length. Each state stores the best translations obtainable.
Language Model Integrated Decoding for SCFG
Thus, to preserve the dynamic programming property, we need to refine the states by adding the boundary words into the parameterization.
Multi-pass LM-Integrated Decoding
(2005) factorizes the dynamic programming steps and lowers the asymptotic complexity of the n-gram integrated decoding, but has not been implemented in large-scale systems where massive pruning is present.
Multi-pass LM-Integrated Decoding
(2007) also take a two-pass decoding approach, with the first pass leaving the language model boundary words out of the dynamic programming state, such that only one hypothesis is retained for each span and grammar symbol.
dynamic programming is mentioned in 7 sentences in this paper.
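The boundary-word refinement described in the last two excerpts can be made concrete: with a bigram LM, it suffices for each DP item to carry its leftmost and rightmost target words, since concatenating two items introduces exactly one new bigram at the seam. A toy sketch (the bigram table and the (score, leftmost, rightmost) item layout are invented for illustration):

```python
import math

# Why boundary words suffice as DP state for a bigram LM: when two
# partial translations are concatenated, the only new LM dependency is
# the single bigram at the seam. Toy values, not a trained LM.

BIGRAM = {("the", "cat"): math.log(0.5), ("cat", "sat"): math.log(0.4)}

def lm(prev, word):
    return BIGRAM.get((prev, word), math.log(1e-4))

def combine(item1, item2):
    """Concatenate two hypotheses whose DP states keep boundary words."""
    s1, l1, r1 = item1
    s2, l2, r2 = item2
    return (s1 + s2 + lm(r1, l2), l1, r2)

hyp1 = (0.0, "the", "cat")   # covers "the cat"
hyp2 = (0.0, "sat", "sat")   # covers "sat"
print(combine(hyp1, hyp2))   # adds log p(sat | cat) at the seam
```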
Thadani, Kapil
Abstract
While dynamic programming is viable for bigram-based sentence compression, finding optimal compressed trees within graphs is NP-hard.
Experiments
• DP: The bigram-based dynamic program of McDonald (2006) described in §2.3.
Introduction
However, while the former problem can be solved efficiently using the dynamic programming approach of McDonald (2006), there are no efficient algorithms to recover maximum weighted non-projective subtrees in a general directed graph.
Multi-Structure Sentence Compression
Following this, §2.3 discusses a dynamic program to find maximum weight bigram subsequences from the input sentence, while §2.4 covers LP relaxation-based approaches for approximating solutions to the problem of finding a maximum-weight subtree in a graph of potential output dependencies.
Multi-Structure Sentence Compression
McDonald (2006) provides a Viterbi-like dynamic programming algorithm to recover the highest-scoring sequence of order-preserving bigrams from a lattice, either in unconstrained form or with a specific length constraint.
Multi-Structure Sentence Compression
The latter requires a dynamic programming table T which represents the best score for a compression of length r ending at token i.
dynamic programming is mentioned in 7 sentences in this paper.
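The table described in the last excerpt drives a simple recurrence: the best length-r compression ending at token i extends some best length-(r-1) compression ending at an earlier token j. A minimal sketch, assuming a placeholder bigram scorer rather than McDonald's trained weights (the T[r][i] indexing is ours):

```python
# McDonald (2006)-style compression DP sketch with a hard length
# constraint: T[r][i] = best score of a compression of length r that
# ends at token i. The bigram scorer is a placeholder assumption.

NEG_INF = float("-inf")

def bigram_score(tokens, j, i):
    # Placeholder: reward keeping adjacent tokens, mildly penalize gaps.
    return 1.0 if i == j + 1 else 0.5

def best_compression_score(tokens, target_len):
    n = len(tokens)
    T = [[NEG_INF] * n for _ in range(target_len + 1)]
    for i in range(n):
        T[1][i] = 0.0                       # a compression may start at i
    for r in range(2, target_len + 1):
        for i in range(n):
            for j in range(i):              # previously kept token
                cand = T[r - 1][j] + bigram_score(tokens, j, i)
                if cand > T[r][i]:
                    T[r][i] = cand
    return max(T[target_len])

print(best_compression_score("a short example sentence".split(), 3))  # -> 2.0
```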
Snyder, Benjamin and Naseem, Tahira and Barzilay, Regina
Abstract
We perform inference under this model using Markov Chain Monte Carlo and dynamic programming.
Introduction
Additionally, a computational advantage of this formalism is that the marginalized probability over all possible alignments for any two trees can be efficiently computed with a dynamic program in linear time.
Introduction
We sample pairs of trees and then compute marginalized probabilities over all possible alignments using dynamic programming.
Model
Fortunately, for any given pair of trees T1 and T2 this marginalization can be computed using a dynamic program in time O(|T1||T2|).
Model
A dynamic program builds this table from the bottom up: For each node pair n1, n2, we sum the probabilities of all local alignment configurations, each multiplied by the appro-
Model
This algorithm is an adaptation of the dynamic program presented in (Jiang et al., 1995) for finding minimum cost alignment trees (Fig.
dynamic programming is mentioned in 7 sentences in this paper.
Huang, Liang
Conclusion
We also devised a dynamic programming algorithm for forest oracles, an interesting problem by itself.
Forest Reranking
We will present a dynamic programming algorithm for this problem in Sec.
Forest Reranking
Analogous to the language model cost in forest rescoring, the unit feature cost here is a non-monotonic score in the dynamic programming backbone, and the derivations may thus be extracted out-of-order.
Introduction
Alternatively, discriminative parsing is tractable with exact and efficient search based on dynamic programming (DP) if all features are restricted to be local, that is, only looking at a local window within the factored search space (Taskar et al., 2004; McDonald et al., 2005).
Supporting Forest Algorithms
We instead propose a dynamic programming algorithm which optimizes the number of matched brackets for a given number of test brackets.
dynamic programming is mentioned in 5 sentences in this paper.
Shen, Libin and Xu, Jinxi and Weischedel, Ralph
Introduction
We restrict the target side to so-called well-formed dependency structures, in order to cover a large set of non-constituent transfer rules (Marcu et al., 2006), and enable efficient decoding through dynamic programming.
Introduction
Dynamic Programming
Introduction
tation and make it hard to employ shared structures for efficient dynamic programming.
dynamic programming is mentioned in 5 sentences in this paper.
Mochihashi, Daichi and Yamada, Takeshi and Ueda, Naonori
Abstract
In this paper, we propose a new Bayesian model for fully unsupervised word segmentation and an efficient blocked Gibbs sampler combined with dynamic programming for inference.
Conclusion
With a combination of dynamic programming and an accurate spelling model from a Bayesian perspective, our model significantly outperforms the previously reported results, and the inference is very efficient.
Inference
Instead, we propose a sentence-wise Gibbs sampler of word segmentation using efficient dynamic programming, as shown in Figure 3.
Introduction
Section 4 describes an efficient blocked Gibbs sampler that leverages dynamic programming for inference.
Nested Pitman-Yor Language Model
To segment a string into “words”, we used efficient dynamic programming combined with MCMC, as described in the next section.
dynamic programming is mentioned in 5 sentences in this paper.
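The sampler sketched in these excerpts draws a whole segmentation at once: a forward pass accumulates the total probability of all segmentations of each prefix, and a backward pass samples word boundaries from those sums. A minimal sketch, assuming a toy unigram word model in place of the nested Pitman-Yor language model:

```python
import random

# Forward-filtering / backward-sampling sketch for drawing one word
# segmentation of a string, as in a blocked Gibbs sampler. `word_prob`
# is a toy placeholder, not the paper's Bayesian spelling model.

MAX_WORD_LEN = 4

def word_prob(w):
    return 0.1 * (0.5 ** len(w))   # placeholder base measure

def sample_segmentation(s, rng=random.Random(0)):
    n = len(s)
    # Forward pass: alpha[t] = total probability of all segmentations
    # of the prefix s[:t].
    alpha = [0.0] * (n + 1)
    alpha[0] = 1.0
    for t in range(1, n + 1):
        for k in range(1, min(MAX_WORD_LEN, t) + 1):
            alpha[t] += word_prob(s[t - k:t]) * alpha[t - k]
    # Backward pass: sample the last word, then recurse on the prefix.
    words, t = [], n
    while t > 0:
        ks = list(range(1, min(MAX_WORD_LEN, t) + 1))
        weights = [word_prob(s[t - k:t]) * alpha[t - k] for k in ks]
        k = rng.choices(ks, weights=weights)[0]
        words.append(s[t - k:t])
        t -= k
    return list(reversed(words))

print(sample_segmentation("abcabc"))
```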
Jiang, Wenbin and Liu, Qun
Experiments
First, we implement a chart-based dynamic programming parser for the 2nd-ordered MST model, and develop a training procedure based on the perceptron algorithm with averaged parameters (Collins, 2002).
Introduction
Jiang and Liu (2009) resort to a dynamic programming procedure to search for a completed projected tree.
Related Works
Jiang and Liu (2009) refer to an alignment matrix and a dynamic programming search algorithm to obtain better projected dependency trees.
Word-Pair Classification Model
Following the edge-based factorization method (Eisner, 1996), we factorize the score of a dependency tree s(x, y) into its dependency edges, and design a dynamic programming algorithm to search for the candidate parse with maximum score.
Word-Pair Classification Model
In this work, however, we still adopt the more general, bottom-up dynamic programming algorithm (Algorithm 1) in order to facilitate possible expansions.
dynamic programming is mentioned in 5 sentences in this paper.
Yamaguchi, Hiroshi and Tanaka-Ishii, Kumiko
Abstract
The problem is formulated in terms of obtaining the minimum description length of a text, and the proposed solution finds the segments and their languages through dynamic programming.
Conclusion
An actual procedure for obtaining an optimal result through dynamic programming was proposed.
In the experiments reported here, n is set to 5 throughout.
We can straightforwardly implement this recursive computation through dynamic programming, by managing a table of size |X| × |L|. To fill a cell of this table, formula (4) suggests referring to t × |L| cells and calculating the description length of the rest of the text for O(|X| - t) cells for each language.
Problem Formulation
Nevertheless, since we use a uniform amount of training data for every language, and since varying γ would prevent us from improving the efficiency of dynamic programming, as explained in §4, in this article we set γ to a constant obtained empirically.
Segmentation by Dynamic Programming
By applying the above methods, we propose a solution to formula (1) through dynamic programming.
dynamic programming is mentioned in 5 sentences in this paper.
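The recursion in these excerpts can be sketched as a table over (text position, language of the final segment), with a switch cost when adjacent segments change language. A minimal illustration, assuming a toy code-length function and switch cost in place of the paper's 5-gram language models and formula (4):

```python
# Segmentation DP sketch: best[t][l] = minimum description length of the
# prefix x[:t] when its final segment is written in language l. The
# code-length function and switch cost are toy assumptions.

INF = float("inf")
SWITCH_COST = 8.0   # assumed cost (bits) of announcing a language switch

def desc_len(segment, lang_chars):
    # Placeholder: a language "compresses" its own characters well.
    return sum(2.0 if ch in lang_chars else 6.0 for ch in segment)

def segment_mdl(x, languages):
    n = len(x)
    best = [[INF] * len(languages) for _ in range(n + 1)]
    best[0] = [0.0] * len(languages)
    for t in range(1, n + 1):
        for l, lang in enumerate(languages):
            for s in range(t):                    # final segment x[s:t]
                for l2 in range(len(languages)):  # language before it
                    cost = (best[s][l2] + desc_len(x[s:t], lang)
                            + (SWITCH_COST if l2 != l else 0.0))
                    best[t][l] = min(best[t][l], cost)
    return min(best[n])

print(segment_mdl("aaabbb", [set("a"), set("b")]))   # -> 20.0
```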
Almeida, Miguel and Martins, Andre
Compressive Summarization
8 efficiently with dynamic programming (using the Viterbi algorithm for trees); the total cost is linear in Ln.
Compressive Summarization
8; this can be done in O(Ln) time with dynamic programming, as discussed in §3.2.
Compressive Summarization
This can be computed exactly in time O(B Σ_k L_{n_k}), through dynamic programming.
Introduction
For example, such solvers are unable to take advantage of efficient dynamic programming routines for sentence compression (McDonald, 2006).
MultiTask Learning
9 can be used for the maximization above: for tasks #1-#2, we solve a relaxation by running AD3 without rounding, and for task #3 we use dynamic programming; see Table 1.
dynamic programming is mentioned in 5 sentences in this paper.
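The "Viterbi algorithm for trees" mentioned in the first excerpt can be illustrated with a simple keep-or-drop compression over a dependency tree: dropping a node drops its subtree, and one bottom-up pass, linear in the number of words, maximizes the total score of the kept nodes. A sketch with placeholder node scores, not the summarizer's learned weights:

```python
# "Viterbi for trees" sketch: each node of a dependency tree is kept or
# dropped, a dropped node drops its whole subtree, and one bottom-up
# pass maximizes the total score of kept nodes. Scores are placeholders.

def best_compression(children, score, root):
    def value(v):
        # Best score of the subtree at v given that v itself is kept:
        # each child subtree is included only when it helps.
        return score[v] + sum(max(0.0, value(c))
                              for c in children.get(v, []))
    return max(0.0, value(root))   # the root may also be dropped

children = {0: [1, 2], 2: [3]}               # toy dependency tree
score = {0: 1.0, 1: -0.5, 2: 0.7, 3: -0.2}
print(best_compression(children, score, 0))  # -> 1.7 (keep nodes 0 and 2)
```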
Quirk, Chris
Evaluation
The first line is a baseline HMM using exact posterior computation and inference with the standard dynamic programming algorithms.
HMM alignment
For the standard HMM, there is a dynamic programming algorithm to compute the posterior probability over word alignments Pr(a|e, f). These are the sufficient statistics gathered in the E step of EM.
HMM alignment
The structure of the fertility model violates the Markov assumptions used in this dynamic programming method.
HMM alignment
Rather than maximizing each row totally independently, we keep track of the best configurations for each number of words generated in each row, and then pick the best combination that sums to J: another straightforward exercise in dynamic programming.
dynamic programming is mentioned in 4 sentences in this paper.
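The posterior computation in the second excerpt is the standard forward-backward recursion. A minimal sketch for a generic HMM, with toy transition and emission tables rather than trained alignment parameters:

```python
# Forward-backward sketch for HMM posteriors, the E-step sufficient
# statistics mentioned above. Toy tables, not trained parameters.

def forward_backward(trans, emit, obs):
    n = len(trans)                 # number of hidden states
    J = len(obs)
    alpha = [[0.0] * n for _ in range(J)]
    beta = [[0.0] * n for _ in range(J)]
    for i in range(n):             # uniform initial distribution
        alpha[0][i] = (1.0 / n) * emit[i][obs[0]]
    for j in range(1, J):
        for i in range(n):
            alpha[j][i] = emit[i][obs[j]] * sum(
                alpha[j - 1][k] * trans[k][i] for k in range(n))
    for i in range(n):
        beta[J - 1][i] = 1.0
    for j in range(J - 2, -1, -1):
        for i in range(n):
            beta[j][i] = sum(trans[i][k] * emit[k][obs[j + 1]] * beta[j + 1][k]
                             for k in range(n))
    Z = sum(alpha[J - 1])          # total probability of the observations
    return [[alpha[j][i] * beta[j][i] / Z for i in range(n)]
            for j in range(J)]

trans = [[0.7, 0.3], [0.4, 0.6]]
emit = [[0.9, 0.1], [0.2, 0.8]]    # emit[state][observation]
print(forward_backward(trans, emit, [0, 1, 0]))
```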
Quan, Xiaojun and Kit, Chunyu and Song, Yan
Introduction
Consequently the task of sentence alignment becomes handily solvable by means of such basic techniques as dynamic programming.
Methodology 2.1 The Problem
Note that it is relatively straightforward to identify the type of many-to-many alignment in monotonic alignment using techniques such as dynamic programming if there is no scrambled pairing or the scrambled pairings are local, limited to a short distance.
Methodology 2.1 The Problem
Both use dynamic programming to search for the best alignment.
Methodology 2.1 The Problem
(2007), a generative model is proposed, accompanied by two specific alignment strategies, i.e., dynamic programming and divisive clustering.
dynamic programming is mentioned in 4 sentences in this paper.
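Monotonic sentence alignment of the kind described above is typically solved with a Gale-Church-style dynamic program over 1-1, 1-2 and 2-1 beads. A minimal sketch, assuming a placeholder length-based bead score rather than the alignment models compared in the paper:

```python
# Gale-Church-style monotonic sentence alignment sketch with 1-1, 1-2
# and 2-1 beads. `bead_score` is a placeholder length-based score.

NEG_INF = float("-inf")

def bead_score(src_sents, tgt_sents):
    ls = sum(len(s) for s in src_sents)
    lt = sum(len(t) for t in tgt_sents)
    # Prefer beads with matching total length; mildly penalize merges.
    return -abs(ls - lt) - 0.5 * (len(src_sents) + len(tgt_sents) - 2)

def align(src, tgt):
    n, m = len(src), len(tgt)
    dp = [[NEG_INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if dp[i][j] == NEG_INF:
                continue
            for di, dj in ((1, 1), (1, 2), (2, 1)):   # bead types
                if i + di <= n and j + dj <= m:
                    s = dp[i][j] + bead_score(src[i:i + di], tgt[j:j + dj])
                    dp[i + di][j + dj] = max(dp[i + di][j + dj], s)
    return dp[n][m]

print(align(["aaa", "bb"], ["aaa", "b", "b"]))   # -> -0.5
```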
Ling, Wang and Xiang, Guang and Dyer, Chris and Black, Alan and Trancoso, Isabel
Parallel Segment Retrieval
We shall describe our model to solve the first problem in 3.1 and our dynamic programming approach to make the inference tractable in 3.2.
Parallel Segment Retrieval
We will show that dynamic programming can be used to make this problem tractable, using Model 1.
Parallel Segment Retrieval
an iterative approach to compute the Viterbi word alignments for IBM Model 1 using dynamic programming.
dynamic programming is mentioned in 4 sentences in this paper.
Lassalle, Emmanuel and Denis, Pascal
Conclusion and perspectives
We employed dynamic programming on hierarchies of indicators to compute the feature space providing the best pairwise classifications efficiently.
Hierarchizing feature spaces
Now selecting the best space for one of these measures can be achieved by using dynamic programming techniques.
Hierarchizing feature spaces
We use a dynamic programming technique to compute the best hierarchy by cutting this tree and only keeping classifiers situated at the leaf.
Introduction
hierarchies with dynamic programming.
dynamic programming is mentioned in 4 sentences in this paper.
Joty, Shafiq and Carenini, Giuseppe and Ng, Raymond and Mehdad, Yashar
Document-level Parsing Approaches
We pick the subtree which has the higher probability in the two dynamic programming tables.
Document-level Parsing Approaches
If the sentence has the same number of sub-trees in both DTp and DTn, we pick the one with higher probability in the dynamic programming tables.
Parsing Models and Parsing Algorithm
Following (Joty et al., 2012), we implement a probabilistic CKY-like bottom-up algorithm for computing the most likely parse using dynamic programming.
Parsing Models and Parsing Algorithm
Specifically, with n discourse units, we use the upper-triangular portion of the n×n dynamic programming table D. Given U_x(0) and U_x(1) are the start and end EDU IDs of unit U_x:
dynamic programming is mentioned in 4 sentences in this paper.
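The CKY-like bottom-up computation over the upper-triangular table D can be sketched as follows, with a placeholder merge probability standing in for the parsing model's scores:

```python
# CKY-like bottom-up sketch over n discourse units: D[i][j] holds the
# best probability of a discourse subtree spanning units i..j (only the
# upper triangle i <= j is used). `merge_prob` is a toy placeholder.

def merge_prob(i, k, j):
    return 0.9 if j - i == 1 else 0.5   # toy values, not model scores

def cky(n):
    D = [[0.0] * n for _ in range(n)]
    back = [[None] * n for _ in range(n)]
    for i in range(n):
        D[i][i] = 1.0                    # each EDU is a trivial subtree
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):        # split into [i..k] and [k+1..j]
                prob = D[i][k] * D[k + 1][j] * merge_prob(i, k, j)
                if prob > D[i][j]:
                    D[i][j], back[i][j] = prob, k
    return D[0][n - 1], back

score, back = cky(4)
print(score)   # -> 0.405 for this toy scorer
```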
Tamura, Akihiro and Watanabe, Taro and Sumita, Eiichiro and Takamura, Hiroya and Okumura, Manabu
Bilingual Infinite Tree Model
Beam sampling limits the number of possible state transitions for each node to a finite number using slice sampling (Neal, 2003), and then efficiently samples whole hidden state transitions using dynamic programming.
Bilingual Infinite Tree Model
We can parallelize procedures in sampling u and z because the slice sampling for u and the dynamic programming for z are independent for each sentence.
Bilingual Infinite Tree Model
,T) using dynamic programming as follows: In the joint model, p(z_t | x_o(t), z_o(t)) ∝
Introduction
Inference is efficiently carried out by beam sampling (Gael et al., 2008), which combines slice sampling and dynamic programming.
dynamic programming is mentioned in 4 sentences in this paper.
Crescenzi, Pierluigi and Gildea, Daniel and Marino, Andrea and Rossi, Gianluca and Satta, Giorgio
Introduction
General algorithms for parsing LCFRSs build a dynamic programming chart of recognized nonterminals bottom-up, in a manner analogous to the CKY algorithm for CFGs (Hopcroft and Ullman, 1979), but with time and space complexity that are dependent on the rank and fanout of the grammar rules.
Introduction
Whenever it is possible, binarization of LCFRS rules, or reduction of rank to two, is therefore important for parsing, as it reduces the time complexity needed for dynamic programming.
LCFRSs and parsing complexity
Existing parsing algorithms for LCFRSs exploit dynamic programming.
LCFRSs and parsing complexity
As an example, in the case of parsing based on CFGs, nonterminals as well as partial parses all have fanout one, resulting in the standard time complexity of O(|w|^3) of dynamic programming methods.
dynamic programming is mentioned in 4 sentences in this paper.
Nishikawa, Hitoshi and Hasegawa, Takaaki and Matsuo, Yoshihiro and Kikui, Genichiro
Conclusion
The preferred sequence is determined by using dynamic programming and beam search.
Introduction
Our algorithm efficiently searches for the best sequence of sentences by using dynamic programming and beam search.
Optimizing Sentence Sequence
To alleviate this, we find an approximate solution by adopting the dynamic programming technique of the Held and Karp Algorithm (Held and Karp, 1962) and beam search.
Optimizing Sentence Sequence
In the search procedure, our dynamic programming based algorithm retains just the hypothesis with maximum score among the hypotheses that have the same sentences and the same last sentence.
dynamic programming is mentioned in 4 sentences in this paper.
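The recombination described in the last excerpt is the Held-Karp idea: hypotheses that cover the same set of sentences and end in the same sentence are merged, keeping only the highest-scoring one. A minimal sketch with a toy pairwise coherence score (the paper's beam pruning is omitted for clarity):

```python
# Held-Karp-style DP for sentence ordering: keep only the best-scoring
# hypothesis for each (set of sentences used, last sentence) state.
# `coherence` is a toy pairwise score, not the paper's model.

def coherence(prev, nxt):
    return 1.0 if prev + 1 == nxt else 0.0   # reward the natural order

def best_order_score(n):
    # dp[(frozenset_of_used, last)] = best score of any ordering of that
    # set ending in `last`; identical states are recombined with max().
    dp = {(frozenset([i]), i): 0.0 for i in range(n)}
    for size in range(1, n):
        for (used, last), score in list(dp.items()):
            if len(used) != size:
                continue
            for nxt in range(n):
                if nxt in used:
                    continue
                key = (used | {nxt}, nxt)
                cand = score + coherence(last, nxt)
                if cand > dp.get(key, float("-inf")):
                    dp[key] = cand
    full = frozenset(range(n))
    return max(v for (used, _), v in dp.items() if used == full)

print(best_order_score(4))   # -> 3.0, from the order 0, 1, 2, 3
```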
Flanigan, Jeffrey and Thomson, Sam and Carbonell, Jaime and Dyer, Chris and Smith, Noah A.
Concept Identification
Our approach finds the highest-scoring b and c using a dynamic programming algorithm: the zeroth-order case of inference under a semi-Markov model (Janssen and Limnios, 1999).
Introduction
The approach can be understood as an alternative to parsing approaches using graph transducers such as (synchronous) hyperedge replacement grammars (Chiang et al., 2013; Jones et al., 2012; Drewes et al., 1997), in much the same way that spanning tree algorithms are an alternative to using shift-reduce and dynamic programming algorithms for dependency parsing. While a detailed
Related Work
chart-based dynamic programming for inference.
dynamic programming is mentioned in 3 sentences in this paper.
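The zeroth-order semi-Markov DP in the first excerpt segments the sentence into spans that are scored independently. A minimal sketch, assuming a placeholder span scorer in place of the paper's concept-fragment scores:

```python
# Zeroth-order semi-Markov DP sketch: segment the token sequence into
# independently scored spans, as in the concept identification step.
# `span_score` is a placeholder, not the paper's fragment scores.

MAX_SPAN = 3

def span_score(tokens, j, i):
    return 1.0   # placeholder: every span is equally plausible

def best_segmentation(tokens):
    n = len(tokens)
    best = [float("-inf")] * (n + 1)
    back = [0] * (n + 1)
    best[0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(0, i - MAX_SPAN), i):   # span tokens[j:i]
            cand = best[j] + span_score(tokens, j, i)
            if cand > best[i]:
                best[i], back[i] = cand, j
    spans, i = [], n
    while i > 0:                                   # follow backpointers
        spans.append((back[i], i))
        i = back[i]
    return best[n], list(reversed(spans))

print(best_segmentation("the boy wants to go".split()))
```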
Finkel, Jenny Rose and Kleeman, Alex and Manning, Christopher D.
Abstract
While prior feature-based dynamic programming parsers have restricted training and evaluation to artificially short sentences, we present the first general, feature-rich discriminative parser, based on a conditional random field model, which has been successfully scaled to the full WSJ parsing data.
Conclusions
We have presented a new, feature-rich, dynamic programming based discriminative parser which is simpler, more effective, and faster to train and test than previous work, giving us new state-of-the-art performance when training and testing on sentences of length ≤ 15 and the first results for such a parser trained and tested on sentences of length ≤ 40.
Introduction
This paper extends the third thread of work, where joint inference via dynamic programming algorithms is used to train models and to attempt to find the globally best parse.
dynamic programming is mentioned in 3 sentences in this paper.
Wang, Xiaolin and Utiyama, Masao and Finch, Andrew and Sumita, Eiichiro
Methods
P(F, f_j | 𝓕, M) is the marginal probability of all the possible F ∈ 𝓕 that contain f_j as a word, which can be calculated efficiently through dynamic programming (the process is similar to the forward-backward algorithm in training a hidden Markov model (HMM) (Rabiner, 1989)):
Methods
Then, the previous dynamic programming method can be extended to the bilingual expectation
Methods
where K is the number of characters in F, and the k-th character is the start of the word f_j, since I and J are unknown during the computation of dynamic programming.
dynamic programming is mentioned in 3 sentences in this paper.
Choi, Jinho D. and McCallum, Andrew
Introduction
Coupled with dynamic programming, transition-based dependency parsing with beam search can be done very efficiently and gives significant improvement to parsing accuracy.
Related work
Huang and Sagae (2010) later applied dynamic programming to this approach and showed improved efficiency.
Selectional branching
which can be done very efficiently when it is coupled with dynamic programming (Zhang and Clark, 2008; Huang and Sagae, 2010; Zhang and Nivre, 2011; Huang et al., 2012; Bohnet and Nivre, 2012).
dynamic programming is mentioned in 3 sentences in this paper.
Tang, Hao and Keshet, Joseph and Livescu, Karen
Discussion
In the figure, phone bigram TF-IDF is labeled p2; phonetic alignment with dynamic programming is labeled DP.
Experiments
The feature selection experiments in Figure 2 show that the TF-IDF features alone are quite weak, while the dynamic programming alignment features alone are quite good.
Feature functions
Given (p, w), we use dynamic programming to align the surface form p with all of the baseforms of w. Following (Riley et al., 1999), we encode a phoneme/phone with a 4-tuple: consonant manner, consonant place, vowel manner, and vowel place.
dynamic programming is mentioned in 3 sentences in this paper.
Jiampojamarn, Sittichai and Kondrak, Grzegorz
EM Alignment
The 1-1 alignment problem can be formulated as a dynamic programming problem to find the maximum score of alignment, given a probability table of aligning letter and phoneme as a mapping function.
EM Alignment
The dynamic programming recursion to find the most likely alignment is the following:
Phonetic alignment
It combines a dynamic programming alignment algorithm with an appropriate scoring scheme for computing phonetic similarity on the basis of multivalued features.
dynamic programming is mentioned in 3 sentences in this paper.
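The 1-1 alignment recursion described above is an edit-distance-style DP in log space over a letter-phoneme mapping table. A minimal sketch, assuming toy probabilities rather than EM-trained values ('_' marks a letter aligned to no phoneme, e.g. a silent letter):

```python
import math

# 1-1 letter-to-phoneme alignment sketch: maximize the product of
# mapping probabilities over an order-preserving alignment. The
# probability table is a toy placeholder, not EM-trained values.

P = {("k", "_"): 0.6, ("n", "N"): 0.9, ("e", "IY"): 0.7, ("e", "_"): 0.7}
FALLBACK = 0.01

def p(letter, phoneme):
    return P.get((letter, phoneme), FALLBACK)

def best_alignment_logprob(letters, phonemes):
    n, m = len(letters), len(phonemes)
    dp = [[float("-inf")] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if dp[i][j] == float("-inf"):
                continue
            if i < n and j < m:   # align letter i with phoneme j
                dp[i + 1][j + 1] = max(dp[i + 1][j + 1],
                                       dp[i][j] + math.log(p(letters[i], phonemes[j])))
            if i < n:             # letter i maps to no phoneme
                dp[i + 1][j] = max(dp[i + 1][j],
                                   dp[i][j] + math.log(p(letters[i], "_")))
    return dp[n][m]

print(best_alignment_logprob(list("knee"), ["N", "IY"]))
```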
Li, Zhifei and Eisner, Jason and Khudanpur, Sanjeev
Variational Approximate Decoding
The trick is to parameterize q as a factorized distribution such that the estimation of q* and decoding using q* are both tractable through efficient dynamic programs.
Variational Approximate Decoding
(2008) effectively do take n = ∞, by maintaining the whole translation string in the dynamic programming state.
Variational Approximate Decoding
Figure 4: Dynamic programming estimation of q*.
dynamic programming is mentioned in 3 sentences in this paper.
Das, Dipanjan and Smith, Noah A.
QG for Paraphrase Modeling
We next describe a dynamic programming solution for calculating p(τ_t | G_p(τ_s)).
QG for Paraphrase Modeling
3.3 Dynamic Programming
QG for Paraphrase Modeling
Thus every word generated under G0 aligns to null, and we can simplify the dynamic programming algorithm that scores a tree τ_s under G0:
dynamic programming is mentioned in 3 sentences in this paper.