Abstract | Conditional Random Fields (CRFs) are a widely used approach to supervised sequence labelling, notably due to their ability to handle large description spaces and to integrate structural dependencies between labels. |
Abstract | In this paper, we address the issue of training very large CRFs, containing up to hundreds of output labels and several billion features. |
Abstract | Our experiments demonstrate that very large CRFs can be trained efficiently and that very large models improve accuracy while delivering compact parameter sets. |
Introduction | Conditional Random Fields (CRFs) (Lafferty et al., 2001; Sutton and McCallum, 2006) constitute a widely used and effective approach for supervised structure learning tasks involving the mapping between complex objects such as strings and trees. |
Introduction | An important property of CRFs is their ability to handle large and redundant feature sets and to integrate structural dependency between output labels. |
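The ability to absorb large, redundant feature sets can be illustrated with a token-level feature template for sequence labelling. The sketch below is a hypothetical example, not a template from the paper; the function name and feature keys are illustrative assumptions:

```python
def word_features(tokens, i):
    """Overlapping, redundant features for token i.

    CRFs tolerate such large, correlated description spaces,
    so features need not be independent or disjoint.
    (Hypothetical template for illustration.)
    """
    w = tokens[i]
    return {
        "word=" + w.lower(): 1.0,                    # lexical identity
        "suffix3=" + w[-3:]: 1.0,                    # morphological cue
        "is_upper": float(w.isupper()),              # shape feature
        "is_title": float(w.istitle()),              # shape feature
        # context feature, redundant with the neighbour's own features
        "prev=" + (tokens[i - 1].lower() if i > 0 else "<s>"): 1.0,
    }
```

In practice one such dictionary is built per position and combined with label (and label-pair) indicators, which is how feature counts reach the billions mentioned above.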
Introduction | However, even for simple linear-chain CRFs, the complexity of learning and inference |
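For a linear-chain CRF with K labels and a sequence of length T, exact decoding already costs O(TK²), which is why the label-set size matters so much at scale. A minimal Viterbi sketch in plain Python (hypothetical score representation, not the paper's implementation) makes the K² inner loop explicit:

```python
def viterbi(emissions, transitions):
    """Most likely label sequence under a linear-chain model.

    emissions:   list of T dicts {label: score} (per-position scores)
    transitions: dict {(prev_label, label): score}
    Runs in O(T * K^2) time for K labels: for each position and each
    label, it maximises over all K predecessor labels.
    """
    labels = list(emissions[0])
    # best[y] = (score of best path ending in label y, that path)
    best = {y: (emissions[0][y], [y]) for y in labels}
    for em in emissions[1:]:
        new = {}
        for y in labels:
            # pick the predecessor maximising path score + transition
            prev, (s, path) = max(
                ((p, best[p]) for p in labels),
                key=lambda kv: kv[1][0] + transitions[(kv[0], y)],
            )
            new[y] = (s + transitions[(prev, y)] + em[y], path + [y])
        best = new
    return max(best.values(), key=lambda v: v[0])[1]
```

Training is costlier still, since each gradient step needs forward-backward passes of the same O(TK²) shape over the whole corpus.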
Conclusions | application of CRFs, which are a major advance of recent years in machine learning. |
Conclusions | A third contribution of our work is a demonstration that current CRF methods can be used straightforwardly for an important application and outperform state-of-the-art commercial and open-source software; we hope that this demonstration accelerates the widespread use of CRFs. |
Introduction | Research on structured learning has been highly successful, with sequence classification as its most important and successful subfield, and with conditional random fields (CRFs) as the most influential approach to learning sequence classifiers. |
Introduction | We show that CRFs can achieve extremely good performance on the hyphenation task. |