Abstract | We address the problem of aligning multiple medical event sequences, corresponding to different clinical narratives, comparing the following approaches: (1) a novel weighted finite-state transducer (WFST) representation of medical event sequences that enables composition and search for decoding, and (2) dynamic programming with iterative pairwise alignment of multiple sequences using global and local alignment algorithms. |
Abstract | We present results using both approaches and observe that the finite-state transducer approach performs significantly better than the dynamic programming one, by 6.8%, for the problem of multiple sequence alignment. |
Introduction | For comparison, we adapt dynamic programming algorithms (Needleman and Wunsch, 1970; Smith and Waterman, 1981) used to produce global and local alignments for aligning sequences of medical events across narratives.
Introduction | dynamic programming or other ILP-based methods proposed in the literature.
Problem Description | We propose a novel WFST-based representation that enables accurate decoding for MSA when compared to widely used dynamic programming algorithms (Needleman and Wunsch, 1970; Smith and Waterman, 1981) or other state-of-the-art methods (Do et al., 2012).
Problem Description | These scores are used in both the WFST-based representation and decoding, as well as for dynamic programming.
Problem Description | We also use popular dynamic programming algorithms (Needleman and Wunsch, 1970; Smith and Waterman, 1981) for sequence alignment of medical events across narratives and compare them to the WFST-based representation and decoding.
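Problem Description | The WFST pipeline above (represent, compose, search) can be made concrete with a toy sketch: each event sequence becomes a linear acceptor, composition matches it against a scoring transducer, and shortest-path search performs the decoding. The event labels, weights, and normalizer transducer below are illustrative assumptions, not the paper's actual model.

```python
import heapq

def compose(fst1, fst2):
    """Epsilon-free WFST composition: match output labels of fst1
    against input labels of fst2; weights add (tropical semiring)."""
    arcs = []
    for (s1, d1, i1, o1, w1) in fst1["arcs"]:
        for (s2, d2, i2, o2, w2) in fst2["arcs"]:
            if o1 == i2:
                arcs.append(((s1, s2), (d1, d2), i1, o2, w1 + w2))
    return {"start": (fst1["start"], fst2["start"]),
            "finals": {(f1, f2) for f1 in fst1["finals"] for f2 in fst2["finals"]},
            "arcs": arcs}

def shortest_path(fst):
    """Dijkstra search over the composed machine; returns (cost, outputs)."""
    pq = [(0.0, 0, fst["start"], [])]   # (cost, tiebreak, state, output labels)
    seen, tick = set(), 0
    while pq:
        cost, _, q, out = heapq.heappop(pq)
        if q in seen:
            continue
        seen.add(q)
        if q in fst["finals"]:
            return cost, out
        for (s, d, i, o, w) in fst["arcs"]:
            if s == q and d not in seen:
                tick += 1
                heapq.heappush(pq, (cost + w, tick, d, out + [o]))
    return None

def linear_acceptor(symbols):
    """A straight-line acceptor for one event sequence."""
    return {"start": 0, "finals": {len(symbols)},
            "arcs": [(i, i + 1, s, s, 0.0) for i, s in enumerate(symbols)]}

# Hypothetical event codes and normalizing transducer (lower weight = preferred).
A = linear_acceptor(["cp", "ecg"])
T = {"start": 0, "finals": {0},
     "arcs": [(0, 0, "cp", "chest_pain", 0.1),
              (0, 0, "cp", "cp", 0.5),
              (0, 0, "ecg", "ecg", 0.2)]}
print(shortest_path(compose(A, T)))
```

The same compose-then-search pattern generalizes: composing two sequence acceptors through an edit transducer (with epsilon arcs for insertions and deletions, omitted here for brevity) yields an alignment lattice whose shortest path is the best alignment.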
Related Work | Dynamic programming algorithms have been widely used to produce pairwise and global genetic alignments, where edit-distance-based metrics are used to compute the cost of insertions, deletions and substitutions.
Related Work | We use dynamic programming to compute the best alignment, given the temporal and coreference information between medical events across these sequences. |
Related Work | We demonstrate that the WFST-based approach outperforms widely used dynamic programming algorithms for multiple sequence alignment.
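Related Work | A minimal sketch of the global-alignment recurrence (Needleman-Wunsch) that such dynamic programs build on; the match/mismatch/gap costs here are illustrative placeholders for the temporal and coreference-based scores described above.

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score via the classic DP recurrence.

    Illustrative sketch: a real system would replace the flat
    match/mismatch/gap costs with domain-specific scores
    (e.g. temporal/coreference costs between medical events).
    """
    n, m = len(a), len(b)
    # score[i][j] = best score aligning a[:i] with b[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            score[i][j] = max(score[i - 1][j - 1] + sub,  # match/substitute
                              score[i - 1][j] + gap,      # gap in b
                              score[i][j - 1] + gap)      # gap in a
    return score[n][m]

# Align two (hypothetical) event sequences from different narratives.
print(needleman_wunsch("fever cough xray".split(), "fever xray".split()))  # → 1
```

Smith-Waterman differs only in clamping each cell at zero and taking the maximum over the whole table, yielding the best local alignment instead.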
Abstract | While dynamic programming is viable for bigram-based sentence compression, finding optimal compressed trees within graphs is NP-hard. |
Experiments | DP: The bigram-based dynamic program of McDonald (2006) described in §2.3
Introduction | However, while the former problem can be solved efficiently using the dynamic programming approach of McDonald (2006), there are no efficient algorithms to recover maximum weighted non-projective subtrees in a general directed graph. |
Multi-Structure Sentence Compression | Following this, §2.3 discusses a dynamic program to find maximum weight bigram subsequences from the input sentence, while §2.4 covers LP relaxation-based approaches for approximating solutions to the problem of finding a maximum-weight subtree in a graph of potential output dependencies. |
Multi-Structure Sentence Compression | McDonald (2006) provides a Viterbi-like dynamic programming algorithm to recover the highest-scoring sequence of order-preserving bigrams from a lattice, either in unconstrained form or with a specific length constraint. |
Multi-Structure Sentence Compression | The latter requires a dynamic programming table T[i][r] which represents the best score for a compression of length r ending at token i.
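Multi-Structure Sentence Compression | The length-constrained recurrence can be sketched as follows; `bigram_score` is a hypothetical stand-in for the learned bigram scores, not McDonald's actual feature model.

```python
def best_compression(tokens, bigram_score, length):
    """Viterbi-style DP over order-preserving bigrams.

    T[i][r] = best score of a compression of length r ending at token i,
    built by extending shorter compressions ending at earlier tokens.
    """
    n = len(tokens)
    NEG = float("-inf")
    T = [[NEG] * (length + 1) for _ in range(n)]
    back = [[None] * (length + 1) for _ in range(n)]
    for i in range(n):
        T[i][1] = bigram_score(None, tokens[i])   # None marks sentence start
    for r in range(2, length + 1):
        for i in range(n):
            for j in range(i):                    # order-preserving: j < i
                cand = T[j][r - 1] + bigram_score(tokens[j], tokens[i])
                if cand > T[i][r]:
                    T[i][r] = cand
                    back[i][r] = j
    # Best compression of exactly `length` tokens, then follow backpointers.
    i = max(range(n), key=lambda k: T[k][length])
    out, r = [], length
    while i is not None:
        out.append(tokens[i])
        i, r = back[i][r], r - 1
    return out[::-1]

# Toy scorer that simply rewards a fixed set of kept words.
keep = {"dogs", "bark"}
score = lambda prev, cur: 1.0 if cur in keep else 0.0
print(best_compression("the dogs often bark loudly".split(), score, 2))
```

The unconstrained variant drops the r dimension, keeping only the best score ending at each token.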
Concept Identification | Our approach finds the highest-scoring b and c using a dynamic programming algorithm: the zeroth-order case of inference under a semi-Markov model (Janssen and Limnios, 1999). |
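Concept Identification | A zeroth-order semi-Markov decoder of this kind can be sketched as below; the label set and `span_score` function are illustrative assumptions rather than the paper's trained model.

```python
def best_segmentation(tokens, span_score, max_len=4):
    """Zeroth-order semi-Markov DP: jointly pick a segmentation and a
    label for each segment, with no dependence between adjacent labels.

    best[i] = score of the best labeled segmentation of tokens[:i].
    """
    LABELS = ["CONCEPT", "O"]          # hypothetical label inventory
    n = len(tokens)
    best = [float("-inf")] * (n + 1)
    best[0] = 0.0
    back = [None] * (n + 1)
    for i in range(1, n + 1):
        for j in range(max(0, i - max_len), i):
            for lab in LABELS:
                s = best[j] + span_score(tuple(tokens[j:i]), lab)
                if s > best[i]:
                    best[i] = s
                    back[i] = (j, lab)
    # Recover the (span, label) pairs from the backpointers.
    segs, i = [], n
    while i > 0:
        j, lab = back[i]
        segs.append((tuple(tokens[j:i]), lab))
        i = j
    return segs[::-1]

# Toy scorer: a small lexicon of multiword concepts, mild reward for O singletons.
lexicon = {("heart", "attack"): "CONCEPT"}
def span_score(span, lab):
    if lexicon.get(span) == lab:
        return 2.0
    return 0.5 if lab == "O" and len(span) == 1 else -1.0

print(best_segmentation("patient had heart attack".split(), span_score))
```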
Introduction | The approach can be understood as an alternative to parsing approaches using graph transducers such as (synchronous) hyperedge replacement grammars (Chiang et al., 2013; Jones et al., 2012; Drewes et al., 1997), in much the same way that spanning tree algorithms are an alternative to using shift-reduce and dynamic programming algorithms for dependency parsing. While a detailed
Related Work | chart-based dynamic programming for inference. |
Methods | P(𝓕, f_j | F, M) is the marginal probability of all the possible segmentations in 𝓕 that contain f_j as a word, which can be calculated efficiently through dynamic programming (the process is similar to the forward-backward algorithm in training a hidden Markov model (HMM) (Rabiner, 1989)):
Methods | Then, the previous dynamic programming method can be extended to the bilingual expectation |
Methods | where K is the number of characters in F, and the k-th character is the start of the word f_j, since I and J are unknown during the computation of dynamic programming.
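Methods | The forward-backward-style computation can be sketched for a simplified monolingual unigram case; `word_prob` is an assumed stand-in for the model's word probabilities, and the bilingual model M is beyond this toy example.

```python
def word_marginal(chars, word_prob, max_len=4):
    """Forward-backward over a segmentation lattice: the marginal
    probability that each candidate word appears in a segmentation.

    `word_prob` maps candidate strings to (unnormalized) probabilities.
    """
    n = len(chars)
    fwd = [0.0] * (n + 1); fwd[0] = 1.0   # weight of segmentations of chars[:k]
    bwd = [0.0] * (n + 1); bwd[n] = 1.0   # weight of segmentations of chars[k:]
    for k in range(1, n + 1):
        for j in range(max(0, k - max_len), k):
            fwd[k] += fwd[j] * word_prob.get(chars[j:k], 0.0)
    for k in range(n - 1, -1, -1):
        for j in range(k + 1, min(n, k + max_len) + 1):
            bwd[k] += word_prob.get(chars[k:j], 0.0) * bwd[j]
    Z = fwd[n]  # total weight of all segmentations
    marg = {}
    for j in range(n):
        for k in range(j + 1, min(n, j + max_len) + 1):
            w = chars[j:k]
            if w in word_prob:
                marg[w] = marg.get(w, 0.0) + fwd[j] * word_prob[w] * bwd[k] / Z
    return marg

# Two segmentations of "ab": [ab] (0.5) and [a, b] (0.3 * 0.2 = 0.06).
print(word_marginal("ab", {"ab": 0.5, "a": 0.3, "b": 0.2}))
```

Here fwd and bwd play the roles of the forward and backward quantities in HMM training: the marginal for a word spanning positions j..k is the product of the weight of all paths into j, the word's own probability, and the weight of all paths out of k, normalized by Z.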