Index of papers in Proc. ACL 2008 that mention
  • state of the art
Ganchev, Kuzman and Graça, João V. and Taskar, Ben
Introduction
Section 5 explores how the new alignments lead to consistent and significant improvement in a state of the art phrase-based machine translation system by using posterior decoding rather than Viterbi decoding.
Phrase-based machine translation
In particular we fix a state of the art machine translation system and measure its performance when we vary the supplied word alignments.
Word alignment results
These values are competitive with other state of the art systems (Liang et al., 2006).
state of the art is mentioned in 3 sentences in this paper.
Nivre, Joakim and McDonald, Ryan
Abstract
By letting one model generate features for the other, we consistently improve accuracy for both models, resulting in a significant improvement of the state of the art when evaluated on data sets from the CoNLL-X shared task.
Conclusion
Our experimental results show that both models consistently improve their accuracy when given access to features generated by the other model, which leads to a significant advancement of the state of the art in data-driven dependency parsing.
Experiments
Finally, given that the two base models had the previously best performance for these data sets, the guided models achieve a substantial improvement of the state of the art .
state of the art is mentioned in 3 sentences in this paper.
Penn, Gerald and Zhu, Xiaodan
Abstract
We assess the current state of the art in speech summarization, by comparing a typical summarizer on two different domains: lecture data and the SWITCHBOARD corpus.
Problem definition and related literature
The purpose of this paper is not so much to introduce a new way of summarizing speech, as to critically reappraise how well the current state of the art really works.
Problem definition and related literature
These four results provide us with valuable insight into the current state of the art in speech summarization: it is not summarization; the aspiration to measure the relative merits of knowledge sources has masked the prominence of some very simple baselines; and the Zechner & Waibel pipe-ASR-output-into-text-summarizer model is still very competitive. What seems to matter more than having access to the raw spoken data is simply knowing that it is spoken data, so that the most relevant, still textually available features can be used.
state of the art is mentioned in 3 sentences in this paper.
Wang, Qin Iris and Schuurmans, Dale and Lin, Dekang
Introduction
Supervised learning algorithms still represent the state of the art approach for inferring dependency parsers from data (McDonald et al., 2005a; McDonald and Pereira, 2006; Wang et al., 2007).
Introduction
Consequently, most previous work that has attempted semi-supervised or unsupervised approaches to parsing has not produced results beyond the state of the art supervised results (Klein and Manning, 2002; Klein and Manning, 2004).
Introduction
In recent years, SVMs have demonstrated state of the art results in many supervised learning tasks.
state of the art is mentioned in 3 sentences in this paper.