Index of papers in Proc. ACL that mention
  • time complexity
Crescenzi, Pierluigi and Gildea, Daniel and Marino, Andrea and Rossi, Gianluca and Satta, Giorgio
Introduction
Whenever possible, binarization of LCFRS rules, or reduction of rank to two, is therefore important for parsing, as it reduces the time complexity needed for dynamic programming.
Introduction
Gildea (2010) presents a related method for binarizing rules while keeping the time complexity of parsing as small as possible.
Introduction
Binarization turns out to be possible with no penalty in time complexity, but, again, the factorization algorithm is exponential in the resulting time complexity.
LCFRSs and parsing complexity
It can also be shown that, if a partial parse having fanout f is obtained by means of the combination of two partial parses with fanout f1 and f2, respectively, the resulting time complexity will be O(|w|^(f+f1+f2)) (Seki et al., 1991; Gildea, 2010).
LCFRSs and parsing complexity
As an example, in the case of parsing based on CFGs, nonterminals as well as partial parses all have fanout one, resulting in the standard time complexity of O(|w|^3) of dynamic programming methods.
LCFRSs and parsing complexity
When parsing with TAGs, we have to manipulate objects with fanout two (in the worst case), resulting in a time complexity of O(|w|^6).
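The O(|w|^3) bound for CFG parsing comes from the three nested loops of CKY-style dynamic programming: span length, start position, and split point. A minimal recognizer sketch (the function name and grammar encoding are our own illustration, not from the paper):

```python
from collections import defaultdict

def cky_recognize(words, unary, binary, start="S"):
    """CKY recognition for a CFG in Chomsky normal form.

    unary:  dict word -> set of nonterminals (rules A -> word)
    binary: dict (B, C) -> set of nonterminals (rules A -> B C)
    The three nested loops over span length, start position, and
    split point give the O(|w|^3) time bound (times a grammar constant).
    """
    n = len(words)
    chart = defaultdict(set)                 # chart[i, j]: nonterminals spanning words[i:j]
    for i, w in enumerate(words):
        chart[i, i + 1] = set(unary.get(w, ()))
    for length in range(2, n + 1):           # O(n) span lengths
        for i in range(n - length + 1):      # O(n) start positions
            j = i + length
            for k in range(i + 1, j):        # O(n) split points
                for B in chart[i, k]:
                    for C in chart[k, j]:
                        chart[i, j] |= binary.get((B, C), set())
    return start in chart[0, n]
```

For example, with `unary = {"she": {"NP"}, "eats": {"V"}, "fish": {"NP"}}` and `binary = {("V", "NP"): {"VP"}, ("NP", "VP"): {"S"}}`, the call `cky_recognize(["she", "eats", "fish"], unary, binary)` accepts the sentence. An LCFRS generalization would store tuples of spans per item, which is where the fanout terms in the exponent come from.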
time complexity is mentioned in 13 sentences in this paper.
Feng, Vanessa Wei and Hirst, Graeme
Abstract
However, their model has a high order of time complexity, and thus cannot be applied in practice.
Abstract
In this work, we develop a much faster model whose time complexity is linear in the number of sentences.
Bottom-up tree-building
The time complexity will be reduced to O(M^2 n^2), if we
Introduction
Using a greedy bottom-up strategy, we develop a discourse parser with a time complexity linear in the total number of sentences in the document.
Linear time complexity
Here we analyze the time complexity of each component in our discourse parser, to quantitatively demonstrate the time efficiency of our model.
Linear time complexity
The overall time complexity to perform intra-sentential parsing is … The reason is the following.
Linear time complexity
The time complexity for performing forward-backward inference on the single chain is O((m_k − i) × M^2) = O(m_k − i), where the constant M is the size of the output vocabulary.
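The O((m_k − i) × M^2) cost is the standard bound for forward-backward over a linear chain: each position updates M states from M predecessors. A minimal forward-pass sketch in log space (the data layout and function name are our assumptions, not the paper's implementation):

```python
import math

def forward_log(obs_scores, trans_scores):
    """Forward pass over a linear chain in log space.

    obs_scores:   list of length m; obs_scores[t][y]   = log score of label y at position t
    trans_scores: M x M matrix;     trans_scores[a][b] = log score of transition a -> b
    Each of the m positions updates M states, each from M predecessors,
    hence the O(m * M^2) time bound; the backward pass is symmetric.
    """
    M = len(trans_scores)
    alpha = list(obs_scores[0])
    for t in range(1, len(obs_scores)):
        alpha = [
            obs_scores[t][y]
            + math.log(sum(math.exp(alpha[a] + trans_scores[a][y]) for a in range(M)))
            for y in range(M)
        ]
    return math.log(sum(math.exp(a) for a in alpha))  # log partition function
```

Because M is a fixed output vocabulary, the paper can treat the M^2 factor as a constant and report the chain cost as O(m_k − i).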
Related work
Due to the O(n^3) time complexity, where n is the number
time complexity is mentioned in 15 sentences in this paper.
Kaji, Nobuhiro and Fujiwara, Yasuhiro and Yoshinaga, Naoki and Kitsuregawa, Masaru
Abstract
Viterbi decoding is, however, prohibitively slow when the label set is large, because its time complexity is quadratic in the number of labels.
Introduction
The time complexity of the Viterbi algorithm is O(NL^2), since there are NL nodes in the lattice and it takes O(L) time to evaluate the score of each node.
Introduction
Staggered decoding has O(N) best-case time complexity and O(NL^2) worst-case time complexity.
Introduction
Therefore, it has O(Σ_{m=1}^{M} N·4^(m−1)) time complexity if it terminates after the M-th search step.
time complexity is mentioned in 8 sentences in this paper.
Nivre, Joakim
Abstract
Adding the swapping operation changes the time complexity for deterministic parsing from linear to quadratic in the worst case, but empirical estimates based on treebank data show that the expected running time is in fact linear for the range of data attested in the corpora.
Background Notions 2.1 Dependency Graphs and Trees
Assuming that the calls o(c) and t(c) can both be performed in constant time, the worst-case time complexity of a deterministic parser based on a transition system S is given by an upper bound on the length of transition sequences in S.
Conclusion
As a result, the time complexity of deterministic parsing is O(n^2) in the worst case, which is rare, but O(n) in the best case, which is common, and experimental results on data from five languages support the conclusion that expected running time is linear in the length of the sentence.
Introduction
There is recent work on algorithms that can cope with important subsets of all non-projective trees in polynomial time (Kuhlmann and Satta, 2009; Gomez-Rodriguez et al., 2009), but the time complexity is at best O(n^6), which can be problematic in practical applications.
Introduction
In Section 3, we first define a minimal transition system and explain how it can be used to perform projective dependency parsing in linear time; we then extend the system with a single transition for swapping the order of words in the input and demonstrate that the extended system can be used to parse unrestricted dependency trees with a time complexity that is quadratic in the worst case but still linear in the best case.
Transitions for Dependency Parsing
As noted in section 2, the worst-case time complexity of a deterministic transition-based parser is given by an upper bound on the length of transition sequences.
Transitions for Dependency Parsing
Let us now consider the time complexity of the extended system S_u = (C, T_u, c_s, C_t) and let us begin by observing that 2n is still a lower bound on the number of transitions required to reach a terminal configuration.
time complexity is mentioned in 7 sentences in this paper.
Sagot, Benoît and Satta, Giorgio
Conclusion
Given the fact that fanout-1 bundles can be attached to any adjacent bundle in our factorization, we can show that our algorithm also optimizes time complexity for known tabular parsing algorithms for LCFRSs with fanout 2.
Conclusion
As for general LCFRS, it has been shown by Gildea (2010) that rank optimization and time complexity optimization are not equivalent.
Conclusion
Furthermore, all known algorithms for rank or time complexity optimization have an exponential time complexity (Gomez-Rodriguez et al., 2009).
Time complexity analysis
In this section we sketch the proof of this result, which will prove the quadratic time complexity of our algorithm.
Time complexity analysis
Our induction hypothesis is that for m < n, the time complexity is in O(m^2).
time complexity is mentioned in 5 sentences in this paper.
Zhou, Guangyou and Liu, Fang and Liu, Yang and He, Shizhu and Zhao, Jun
Our Approach
2.4 Time Complexity Analysis
Our Approach
In this subsection, we discuss the time complexity of our proposed method.
Our Approach
Similarly, the time complexity of optimizing V using Algorithm 3 is O(M_p K^2 + M_p N K).
time complexity is mentioned in 5 sentences in this paper.
Rothe, Sascha and Schütze, Hinrich
Comparison to SimRank
8) have time complexity O(n^3) or, if we want to take the higher efficiency of computation for sparse graphs into account, O(dn^2), where n is the number of nodes and d the
Comparison to SimRank
If d < k, then the time complexity of CoSimRank is O(k^2 n).
Comparison to SimRank
Thus, we have reduced SimRank's cubic time complexity to a quadratic time complexity for CoSimRank, or, assuming that the average degree d does not depend on n, SimRank's quadratic time complexity to linear time complexity for the case of computing few similarities.
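The claimed reduction rests on computing the similarity of one node pair from iterated random-walk vector products, instead of iterating over all n^2 pairs as SimRank does. A simplified sketch of the idea (it walks along out-links with uniform weights; the paper's exact formulation, e.g. the walk direction and damping, may differ):

```python
def cosimrank(adj, u, v, c=0.8, iters=10):
    """CoSimRank-style similarity of nodes u and v (illustrative sketch).

    adj: adjacency list of a directed graph with n nodes.
    p^(0) is a one-hot vector at the query node and p^(k) is one step of
    a uniform random walk from p^(k-1); the similarity is
    sim(u, v) = sum_k c^k * <p_u^(k), p_v^(k)>.
    One pair costs O(iters * |edges|), with no n^2 enumeration of pairs.
    """
    n = len(adj)

    def step(p):
        q = [0.0] * n
        for node, mass in enumerate(p):
            if mass and adj[node]:
                share = mass / len(adj[node])
                for nb in adj[node]:
                    q[nb] += share
        return q

    pu = [1.0 if i == u else 0.0 for i in range(n)]
    pv = [1.0 if i == v else 0.0 for i in range(n)]
    sim, ck = 0.0, 1.0
    for _ in range(iters):
        sim += ck * sum(a * b for a, b in zip(pu, pv))
        ck *= c
        pu, pv = step(pu), step(pv)
    return sim
```

If each walk vector stays k-sparse (or the average degree d is bounded), each step costs O(kd) rather than O(n), which is where the O(k^2 n) and linear-time cases above come from.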
Introduction
Unfortunately, SimRank has time complexity O(n^3) (where n is the number of nodes in the graph) and therefore does not scale to the large graphs that are typical of NLP.
time complexity is mentioned in 5 sentences in this paper.
Li, Sujian and Wang, Liang and Cao, Ziqiang and Li, Wenjie
Add arc <e_C, e_j> to G_C with
Based on dependency structure, we are able to directly analyze the relations between the EDUs without worrying about the additional interior text spans, and apply the existing state-of-the-art dependency parsing techniques, which have a relatively low time complexity.
Discourse Dependency Parsing
It is well known that projective dependency parsing can be handled with the Eisner (1996) algorithm, which is based on bottom-up dynamic programming techniques with a time complexity of O(n^3).
Discourse Dependency Parsing
(2005b), we adopt an efficient implementation of the Chu-Liu/Edmonds algorithm that is proposed by Tarjan (1997) with O(n^2) time complexity.
Introduction
Third, to reduce the time complexity of the state-of-the-art constituency-based parsing techniques, the approximate parsing approaches are prone to getting trapped in a local maximum.
time complexity is mentioned in 4 sentences in this paper.
Tan, Ming and Zhou, Wenli and Zheng, Lei and Wang, Shaojun
Abstract
The composite language model has been trained by performing a convergent N-best list approximate EM algorithm that has linear time complexity and a follow-up EM algorithm to improve word prediction power on corpora with up to a billion tokens and stored on a supercomputer.
Introduction
They derived a generalized inside-outside algorithm to train the composite language model from a general EM (Dempster et al., 1977) by following Jelinek's ingenious definition of the inside and outside probabilities for SLM (Jelinek, 2004), with time complexity of 6th order in the sentence length.
Introduction
Instead of using the 6th order generalized inside-outside algorithm proposed in (Wang et al., 2006), we train this composite model by a convergent N-best list approximate EM algorithm that has linear time complexity and a follow-up EM algorithm to improve word prediction power.
time complexity is mentioned in 3 sentences in this paper.
Huang, Zhiheng and Chang, Yi and Long, Bo and Crespo, Jean-Francois and Dong, Anlei and Keerthi, Sathiya and Wu, Su-Lin
Background
Therefore, it has Σ_{i=0}^{m} T·4^i time complexity if it terminates at the m-th iteration.
Background
In the best case, in which m = 0, the time complexity is T. In the worst case, in which m = ⌈log2 L⌉ − 1 (⌈·⌉ is the ceiling function, which maps a real number to the smallest following integer), the time complexity is
Introduction
Unfortunately, due to its O(TL^2) time complexity, where T is the input token size and L is the label size, the Viterbi decoding can become prohibitively slow when the label size is large (say, larger than 200).
time complexity is mentioned in 3 sentences in this paper.
Cohn, Trevor and Haffari, Gholamreza
Experiments
The time complexity of our inference algorithm is O(n^6), which can be prohibitive for large-scale machine translation tasks.
Experiments
The average time complexity for the latter is roughly O(l^4), as plotted in green: t = 2 × 10^−7 · l^4.
Experiments
However, the time complexity is still high, so we set the maximum sentence length to 30 to keep our experiments practicable.
time complexity is mentioned in 3 sentences in this paper.
Pighin, Daniele and Cornolti, Marco and Alfonseca, Enrique and Filippova, Katja
Conclusions
We discuss the theoretical time complexity of looking up extraction patterns in a large corpus of
Memory-based pattern extraction
Figure 5: Time complexity of lookup operations for inputs of different sizes.
Memory-based pattern extraction
Time complexity of lookups.
time complexity is mentioned in 3 sentences in this paper.