Decoding to Maximize BLEU | Our algorithm is another dynamic programming decoding pass on the trigram forest, and is similar to the parsing algorithm for maximizing expected labelled recall presented by Goodman (1996). |
Decoding to Maximize BLEU | The expression can be factorized and computed using dynamic programming on the forest. |
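Decoding to Maximize BLEU | The factorized computation above can be sketched as a bottom-up pass over the hypergraph. The data layout below (hyperedges as `head -> [(tails, weight)]`) and the toy forest are illustrative assumptions, not the paper's implementation; the same traversal pattern underlies summing any factorized expression over all derivations.

```python
from collections import defaultdict

# Minimal sketch (hypothetical data structures): a forest is a map from a
# head node to its incoming hyperedges (tails, weight). We accumulate inside
# scores bottom-up, summing over derivations at each node.
def inside_scores(edges, topo_order):
    """edges: head -> list of (tails, weight); topo_order: leaves first."""
    inside = defaultdict(float)
    for node in topo_order:
        if node not in edges:          # leaf node: score 1 by convention
            inside[node] = 1.0
            continue
        for tails, weight in edges[node]:
            product = weight
            for tail in tails:
                product *= inside[tail]
            inside[node] += product    # sum over incoming hyperedges
    return inside

# Toy forest fragment: S -> A B with weight 0.5, and a unary edge B -> A.
edges = {"S": [(("A", "B"), 0.5)], "B": [(("A",), 0.4)]}
print(inside_scores(edges, ["A", "B", "S"])["S"])  # 0.5 * 1.0 * 0.4 = 0.2
```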
Experiments | We can also use the dynamic programming hook trick. |
Language Model Integrated Decoding for SCFG | From a dynamic programming point of view, the DP states are X [i, j], where X ranges over all possible nonterminals and i and j range over 0 to the input string length. Each state stores the best translations obtainable.
Language Model Integrated Decoding for SCFG | Thus, to preserve the dynamic programming property, we need to refine the states by adding the boundary words into the parameterization. |
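Language Model Integrated Decoding for SCFG | The refinement can be sketched for the bigram case: the DP state carries its leftmost and rightmost boundary words, and the language model scores only the junction when two adjacent items are merged. The item tuple layout and the toy bigram cost below are illustrative assumptions (the actual decoder handles general n-grams).

```python
# Hedged sketch of the refined DP state for bigram-LM integration.
# item = (symbol, i, j, left_boundary_word, right_boundary_word, cost)
def combine(item1, item2, bigram_cost):
    """Merge two adjacent items into one spanning item."""
    sym1, i, k1, l1, r1, c1 = item1
    sym2, k2, j, l2, r2, c2 = item2
    assert k1 == k2, "items must cover adjacent spans"
    # The LM scores only the new junction; keeping l1 and r2 in the state
    # is exactly the parameterization that preserves the DP property.
    cost = c1 + c2 + bigram_cost(r1, l2)
    return ("X", i, j, l1, r2, cost)

bigram = lambda a, b: 0.5  # hypothetical constant-cost LM for illustration
left = ("X", 0, 2, "the", "house", 2.0)
right = ("X", 2, 4, "is", "red", 1.0)
print(combine(left, right, bigram))  # ('X', 0, 4, 'the', 'red', 3.5)
```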
Multi-pass LM-Integrated Decoding | (2005) factorizes the dynamic programming steps and lowers the asymptotic complexity of the n-gram integrated decoding, but has not been implemented in large-scale systems where massive pruning is present.
Multi-pass LM-Integrated Decoding | (2007) also take a two-pass decoding approach, with the first pass leaving the language model boundary words out of the dynamic programming state, such that only one hypothesis is retained for each span and grammar symbol. |
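Multi-pass LM-Integrated Decoding | The effect of the coarse first pass can be sketched as follows: excluding the boundary words from the state key collapses all LM variants of a hypothesis, leaving one survivor per span and grammar symbol. The tuple layout and example hypotheses are illustrative assumptions.

```python
# Hedged sketch of the coarse first pass: boundary words are dropped from
# the DP key, so only the cheapest hypothesis per (symbol, span) survives.
def coarse_pass(hypotheses):
    """hypotheses: iterable of (sym, i, j, boundary_words, cost)."""
    best = {}
    for hyp in hypotheses:
        sym, i, j, _boundary, cost = hyp
        key = (sym, i, j)              # boundary words excluded from the key
        if key not in best or cost < best[key][4]:
            best[key] = hyp
    return best

hyps = [("X", 0, 2, ("the", "cat"), 2.0),
        ("X", 0, 2, ("a", "cat"), 1.5),   # same span/symbol, different LM state
        ("S", 0, 2, ("the", "cat"), 3.0)]
print(len(coarse_pass(hyps)))  # 2: the two ("X", 0, 2) variants collapse to one
```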
Conclusion | We also devised a dynamic programming algorithm for forest oracles, an interesting problem by itself. |
Forest Reranking | We will present a dynamic programming algorithm for this problem in Sec. |
Forest Reranking | Analogous to the language model cost in forest rescoring, the unit feature cost here is a non-monotonic score in the dynamic programming backbone, and the derivations may thus be extracted out-of-order.
Introduction | Alternatively, discriminative parsing is tractable with exact and efficient search based on dynamic programming (DP) if all features are restricted to be local, that is, only looking at a local window within the factored search space (Taskar et al., 2004; McDonald et al., 2005). |
Supporting Forest Algorithms | We instead propose a dynamic programming algorithm which optimizes the number of matched brackets for a given number of test brackets. |
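Supporting Forest Algorithms | The core of such an algorithm can be sketched as a tree-structured knapsack: each node keeps a table mapping a number of test brackets to the maximum number of matches achievable with exactly that many, and sibling tables are merged by convolution. The table representation and toy example below are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch: t[k] = max matched brackets using exactly k test brackets
# in a subtree; merging two children is a small knapsack convolution.
def merge(t1, t2):
    out = {}
    for k1, m1 in t1.items():
        for k2, m2 in t2.items():
            out[k1 + k2] = max(out.get(k1 + k2, -1), m1 + m2)
    return out

def node_table(children_tables, matched):
    """matched is 1 if this node's bracket appears in the gold tree, else 0."""
    table = {0: 0}
    for child in children_tables:
        table = merge(table, child)
    # this node contributes one test bracket, matched or not
    return {k + 1: m + matched for k, m in table.items()}

leaf_a = node_table([], matched=1)   # {1: 1}
leaf_b = node_table([], matched=0)   # {1: 0}
root = node_table([leaf_a, leaf_b], matched=1)
print(root)  # {3: 2}: 3 test brackets, 2 of them matched
```

The oracle then picks the entry maximizing F1 against the gold bracket count.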
Introduction | We restrict the target side to the so-called well-formed dependency structures, in order to cover a large set of non-constituent transfer rules (Marcu et al., 2006), and enable efficient decoding through dynamic programming.
Introduction | Dynamic Programming |
Introduction | tation and make it hard to employ shared structures for efficient dynamic programming.
Abstract | While prior feature-based dynamic programming parsers have restricted training and evaluation to artificially short sentences, we present the first general, feature-rich discriminative parser, based on a conditional random field model, which has been successfully scaled to the full WSJ parsing data. |
Conclusions | We have presented a new, feature-rich, dynamic programming based discriminative parser which is simpler, more effective, and faster to train and test than previous work, giving us new state-of-the-art performance when training and testing on sentences of length ≤ 15 and the first results for such a parser trained and tested on sentences of length ≤ 40.
Introduction | This paper extends the third thread of work, where joint inference via dynamic programming algorithms is used to train models and to attempt to find the globally best parse. |