Index of papers in Proc. ACL 2014 that mention
  • time complexity
Feng, Vanessa Wei and Hirst, Graeme
Abstract
However, their model has a high order of time complexity, and thus cannot be applied in practice.
Abstract
In this work, we develop a much faster model whose time complexity is linear in the number of sentences.
Bottom-up tree-building
The time complexity will be reduced to O(M²n²), if we …
Introduction
greedy bottom-up strategy, we develop a discourse parser with a time complexity linear in the total number of sentences in the document.
Linear time complexity
Here we analyze the time complexity of each component in our discourse parser, to quantitatively demonstrate the time efficiency of our model.
Linear time complexity
overall time complexity to perform intra-sentential parsing is […]. The reason is the following.
Linear time complexity
The time complexity for performing forward-backward inference on the single chain is O((m_k − 1) × M²) = O(m_k − 1), where the constant M is the size of the output vocabulary.
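As an aside, here is a minimal sketch, in Python, of the forward half of forward-backward inference on a linear chain; it is not the authors' implementation, and the function name and toy inputs are illustrative assumptions. With L positions (standing in for the chain length m_k − 1 above) and an output vocabulary of size M, the nested loops over label pairs make the O(L × M²) cost explicit, which is linear in the chain length once M is treated as a fixed constant.

import math

def forward_log_inference(emission, transition):
    """emission: L x M log-scores per position; transition: M x M log-scores."""
    L, M = len(emission), len(emission[0])
    alpha = [list(emission[0])]                   # log-alpha at position 0
    for t in range(1, L):                         # O(L) positions ...
        alpha.append([
            math.log(sum(math.exp(alpha[t - 1][i] + transition[i][j])
                         for i in range(M)))      # ... times O(M^2) label pairs
            + emission[t][j]
            for j in range(M)
        ])
    # log partition function: marginalise over the labels of the last position
    return math.log(sum(math.exp(a) for a in alpha[-1]))

# toy usage: chain of length 4, output vocabulary of 3 labels
print(forward_log_inference([[0.0] * 3 for _ in range(4)],
                            [[0.0] * 3 for _ in range(3)]))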
Related work
Due to the O(n³) time complexity, where n is the number …
time complexity is mentioned in 15 sentences in this paper.
Rothe, Sascha and Schütze, Hinrich
Comparison to SimRank
8) have time complexity O(n³) or — if we want to take the higher efficiency of computation for sparse graphs into account — O(dn²), where n is the number of nodes and d the …
Comparison to SimRank
If d < k, then the time complexity of CoSimRank is O(k²n).
Comparison to SimRank
Thus, we have reduced SimRank’s cubic time complexity to a quadratic time complexity for CoSimRank or — assuming that the average degree d does not depend on n — SimRank’s quadratic time complexity to linear time complexity for the case of computing few similarities.
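A minimal sketch, not the paper's implementation, of how a CoSimRank-style similarity for a single node pair can be computed from iterated personalised PageRank vectors; the function names, the damping factor c, and the toy graph are illustrative assumptions. One propagation step over a sparse graph with n nodes and average degree d costs O(dn), so K iterations for one pair cost O(Kdn), which is linear in n when K and d are treated as constants, in contrast to SimRank's O(n³); bounding d by the iteration count k recovers the O(k²n) figure quoted above.

def propagate(adj, vec):
    """One random-walk step: spread each node's mass equally over its neighbours."""
    out = [0.0] * len(vec)
    for u, neighbours in enumerate(adj):
        if neighbours and vec[u]:
            share = vec[u] / len(neighbours)
            for v in neighbours:          # O(d) work per node, O(dn) per step
                out[v] += share
    return out

def cosimrank_pair(adj, i, j, c=0.8, iterations=5):
    """Similarity of nodes i and j as a damped sum of dot products of PPR vectors."""
    p_i = [1.0 if u == i else 0.0 for u in range(len(adj))]
    p_j = [1.0 if u == j else 0.0 for u in range(len(adj))]
    score, weight = 0.0, 1.0
    for _ in range(iterations):           # K iterations -> O(K * d * n) overall
        p_i, p_j = propagate(adj, p_i), propagate(adj, p_j)
        weight *= c
        score += weight * sum(a * b for a, b in zip(p_i, p_j))  # O(n) dot product
    return score

# toy usage: a 4-node undirected graph given as an adjacency list
graph = [[1, 2], [0, 2], [0, 1, 3], [2]]
print(cosimrank_pair(graph, 0, 1))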
Introduction
Unfortunately, SimRank has time complexity O(n³) (where n is the number of nodes in the graph) and therefore does not scale to the large graphs that are typical of NLP.
time complexity is mentioned in 5 sentences in this paper.
Li, Sujian and Wang, Liang and Cao, Ziqiang and Li, Wenjie
Add arc <e_C, e_j> to G_C with
Based on dependency structure, we are able to directly analyze the relations between the EDUs without worrying about the additional interior text spans, and to apply existing state-of-the-art dependency parsing techniques, which have a relatively low time complexity.
Discourse Dependency Parsing
It is well known that projective dependency parsing can be handled with the Eisner algorithm (1996), which is based on bottom-up dynamic programming techniques with a time complexity of O(n³).
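A minimal sketch, not taken from the paper, of Eisner's dynamic program for projective dependency parsing, returning only the best parse score (backpointers omitted); the chart layout and the assumed arc-score matrix score[h][m] (head h, modifier m, with index 0 as the artificial root) are illustrative. The three nested loops over span length, span start, and split point are what give the O(n³) time complexity mentioned above.

NEG_INF = float("-inf")

def eisner_best_score(score):
    n = len(score)  # number of tokens, including the artificial root at index 0
    # C[s][t][d][c]: best score of span s..t; d=1 if headed at s, d=0 if headed
    # at t; c=1 for complete spans, c=0 for incomplete spans.
    C = [[[[NEG_INF, NEG_INF] for _ in range(2)] for _ in range(n)]
         for _ in range(n)]
    for i in range(n):
        for d in range(2):
            for c in range(2):
                C[i][i][d][c] = 0.0
    for length in range(1, n):            # O(n) span lengths
        for s in range(n - length):       # O(n) span start positions
            t = s + length
            for r in range(s, t):         # O(n) split points => O(n^3) in total
                arcs = C[s][r][1][1] + C[r + 1][t][0][1]
                C[s][t][0][0] = max(C[s][t][0][0], arcs + score[t][s])
                C[s][t][1][0] = max(C[s][t][1][0], arcs + score[s][t])
            for r in range(s, t):         # complete spans headed at t
                C[s][t][0][1] = max(C[s][t][0][1], C[s][r][0][1] + C[r][t][0][0])
            for r in range(s + 1, t + 1): # complete spans headed at s
                C[s][t][1][1] = max(C[s][t][1][1], C[s][r][1][0] + C[r][t][1][1])
    return C[0][n - 1][1][1]              # best projective tree rooted at index 0

# toy usage: root + 3 tokens with uniform arc scores; every tree has 3 arcs
print(eisner_best_score([[1.0] * 4 for _ in range(4)]))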
Discourse Dependency Parsing
(2005b), we adopt an efficient implementation of the Chu-Liu/Edmonds algorithm proposed by Tarjan (1977), with O(n²) time complexity.
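For illustration only, a sketch of the greedy arc-selection and cycle-detection core of the Chu-Liu/Edmonds algorithm; it is not the parser's actual implementation, and the score matrix, function names, and toy example are assumptions. Choosing the best incoming arc for every node already inspects all O(n²) entries of a dense score matrix; the full algorithm adds cycle contraction and expansion (omitted here), which Tarjan's bookkeeping keeps within O(n²) overall on dense graphs.

def best_incoming_arcs(scores, root=0):
    """Greedy step: every non-root node picks its highest-scoring head."""
    n = len(scores)                       # scores[h][m]: arc head h -> modifier m
    return {m: max((h for h in range(n) if h != m), key=lambda h: scores[h][m])
            for m in range(n) if m != root}

def find_cycle(head, root, n):
    """Return one cycle among the chosen arcs, or None if they form a tree."""
    colour = [0] * n                      # 0 = unseen, 1 = on current path, 2 = settled
    for start in range(n):
        if start == root or colour[start]:
            continue
        path, v = [], start
        while v != root and colour[v] == 0:
            colour[v] = 1
            path.append(v)
            v = head[v]
        if v != root and colour[v] == 1:  # walked back into the current path
            return path[path.index(v):]
        for u in path:
            colour[u] = 2
    return None

# toy usage: 4 nodes with node 0 as root; the greedy choice contains a cycle
# between nodes 2 and 3, which the full algorithm would contract and re-solve.
toy = [[0, 5, 1, 1], [0, 0, 2, 2], [0, 1, 0, 9], [0, 1, 9, 0]]
arcs = best_incoming_arcs(toy)
print(arcs, find_cycle(arcs, 0, len(toy)))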
Introduction
Third, the approximate parsing approaches used to reduce the time complexity of the state-of-the-art constituency-based parsing techniques are prone to getting trapped in local maxima.
time complexity is mentioned in 4 sentences in this paper.
Pighin, Daniele and Cornolti, Marco and Alfonseca, Enrique and Filippova, Katja
Conclusions
We discuss the theoretical time complexity of looking up extraction patterns in a large corpus of
Memory-based pattern extraction
Figure 5: Time complexity of lookup operations for inputs of different sizes.
Memory-based pattern extraction
Time complexity of lookups.
time complexity is mentioned in 3 sentences in this paper.