Conclusion | We have presented a simple model that outperforms the prior state of the art on FrameNet-style frame-semantic parsing, and performs on par with one of the previous-best single-parser systems on PropBank SRL. |
Discussion | For FrameNet, the WSABIE EMBEDDING model we propose strongly outperforms the baselines on all metrics, and sets a new state of the art. |
Discussion | In comparison to prior work on FrameNet, even our baseline models outperform the previous state of the art. |
Experiments | This would be a standard NLP approach for the frame identification problem, but is surprisingly competitive with the state of the art. |
Experiments | (2014) describe the state of the art |
Experiments | While comparing with prior state of the art on the same corpus, we noted that Das et al. |
Introduction | First, we show that for frame identification on the FrameNet corpus (Baker et al., 1998; Fillmore et al., 2003), we outperform the prior state of the art (Das et al., 2014). |
Introduction | Second, we present results on PropBank-style semantic role labeling (Palmer et al., 2005; Meyers et al., 2004; Marquez et al., 2008) that approach strong baselines, and are on par with the prior state of the art (Punyakanok et al., 2008). |
Overview | (2010) improved performance, and later set the current state of the art on this task (Das et al., 2014). |
Abstract | We evaluate these models on two cross-lingual document classification tasks, outperforming the prior state of the art. |
Conclusion | Coupled with very simple composition functions, vectors learned with this method outperform the state of the art on the task of cross-lingual document classification. |
Experiments | Our models outperform the prior state of the art, with the BI models performing slightly better than the ADD models. |
Experiments | We compare our embeddings with the SENNA embeddings, which achieve state of the art performance on a number of tasks (Collobert et al., 2011). |
Introduction | First, we show that for cross-lingual document classification on the Reuters RCV1/RCV2 corpora (Lewis et al., 2004), we outperform the prior state of the art (Klementiev et al., 2012). |
Related Work | They have received a lot of attention in recent years (Collobert and Weston, 2008; Mnih and Hinton, 2009; Mikolov et al., 2010, inter alia) and have achieved state of the art performance in language modelling. |
Abstract | We present a survey of the state of the art in automatic keyphrase extraction, examining the major sources of errors made by existing systems and discussing the challenges ahead. |
Conclusion and Future Directions | We have presented a survey of the state of the art in automatic keyphrase extraction. |
Corpora | Automatic Keyphrase Extraction: A Survey of the State of the Art |
Evaluation | 4.2 The State of the Art |
Introduction | Our goal in this paper is to survey the state of the art in keyphrase extraction, examining the major sources of errors made by existing systems and discussing the challenges ahead. |
Evaluation materials | State of the art performance on this set has been reported by Hassan and Mihalcea (2011) using a technique that exploits the Wikipedia linking structure and word sense disambiguation techniques. |
Evaluation materials | The current state of the art is reached by Halawi et al. |
Evaluation materials | The current state of the art was reached by the window-based count model of Baroni and Lenci (2010). |
Results | Indeed, the predictive models achieve an impressive overall performance, beating the current state of the art in several cases, and approaching it in many more. |
Abstract | When compared against current state of the art methods, our model yields significantly simpler output that is both grammatical and meaning preserving. |
Experiments | To evaluate performance, we compare our approach with three other state of the art systems using the test set provided by Zhu et al. |
Introduction | When compared against current state of the art methods (Zhu et al., 2010; Woodsend and Lapata, 2011; Wubben et al., 2012), our model yields significantly simpler output that is both grammatical and meaning preserving. |
Evaluation | Table 5 summarizes the performance of our models on the chosen tasks, and compares it to the state of the art reported in previous work, as well as to various strong baselines. |
Evaluation | For anvan1, plf is just below the state of the art, which is based on disambiguating the verb vector in context (Kartsaklis and Sadrzadeh, 2013), and lf outperforms the baseline, which consists in using the verb vector only as a proxy to sentence similarity.5 On anvan2, plf outperforms the best model |
Evaluation | 5 We report state of the art from Kartsaklis and Sadrzadeh |