Abstract | The task has proven useful in many research areas, including ontology learning, relation extraction, and question answering.
Introduction | Definitions are also harvested in Question Answering to deal with “what is” questions (Cui et al., 2007; Saggion, 2004). |
Related Work | (2007) propose the use of probabilistic lexico-semantic patterns, called soft patterns, for definitional question answering in the TREC contest.
Related Work | Thanks to its generalization power, this method is the most closely related to our work; however, the task of definitional question answering to which it is applied differs slightly from definition extraction, so a direct performance comparison is not possible.
Background 2.1 Ontology Learning | parser extracts knowledge from input text and converts it into logical form (the semantic parse), which can then be used in logical and probabilistic inference and can support end tasks such as question answering.
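Concretely, a logical-form representation can be queried much like a small knowledge base. The sketch below is not the actual USP/OntoUSP pipeline; the canned parse, the predicate names, and the biomedical example sentence are assumptions chosen only to illustrate how predicate-argument triples extracted from text can answer a simple "what" question.

```python
# Minimal sketch: text -> predicate-argument triples -> question answering.
# The canned parse and predicate vocabulary are illustrative assumptions.
from typing import List, Tuple

Triple = Tuple[str, str, str]  # (predicate, arg1, arg2)

def parse_to_logical_form(sentence: str) -> List[Triple]:
    """Stand-in for a semantic parser: maps a sentence to triples.
    A real parser would derive this structure compositionally."""
    canned = {
        "IL-4 induces CD11b.": [("induce", "IL-4", "CD11b")],
    }
    return canned.get(sentence, [])

def answer(question_pred: str, question_arg2: str, kb: List[Triple]) -> List[str]:
    """Answer 'What <pred>s <arg2>?' by matching triples in the logical-form KB."""
    return [a1 for (p, a1, a2) in kb if p == question_pred and a2 == question_arg2]

kb = parse_to_logical_form("IL-4 induces CD11b.")
print(answer("induce", "CD11b", kb))  # -> ['IL-4']
```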
Experiments | Table 1: Comparison of question answering results on the GENIA dataset. |
Experiments | To use DIRT in question answering, it was queried to obtain similar paths for the relation of the question, which were then used to match sentences.
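A minimal sketch of that lookup, assuming a toy DIRT-like table of dependency-path similarities; the paths and scores here are invented for illustration, whereas DIRT itself stores similarities learned from corpus statistics:

```python
# Hypothetical path-similarity table: question path -> (paraphrase path, score).
similar_paths = {
    "X induces Y": [("X stimulates Y", 0.81), ("X enhances Y", 0.74)],
}

def matching_paths(question_path: str, threshold: float = 0.5) -> list:
    """Return the question path plus sufficiently similar paraphrase paths."""
    return [question_path] + [
        p for p, score in similar_paths.get(question_path, []) if score >= threshold
    ]

def sentence_matches(sentence_path: str, question_path: str) -> bool:
    """A candidate sentence matches if its extracted path is an accepted path."""
    return sentence_path in matching_paths(question_path)

print(sentence_matches("X stimulates Y", "X induces Y"))  # -> True
```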
Introduction | Finally, experiments on a biomedical knowledge acquisition and question answering task show that OntoUSP can greatly outperform USP and previous systems. |
Conclusion | Our experiments on generating surveys for Question Answering and Dependency Parsing show that surveys generated using such context information along with citation sentences are of higher quality than those built from citations alone.
Data | Lin and Pantel (2001) extract inference rules, which are related to paraphrases (for example, X wrote Y implies X is the author of Y), to improve question answering.
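To make the mechanism concrete, the hypothetical sketch below applies one such inference rule so that a question phrased with "author of" can be answered from a sentence stating "wrote"; the rule representation and the tiny matcher are assumptions for exposition, not Lin and Pantel's actual format:

```python
# One inference rule of the paraphrase-like kind described above:
# lhs predicate -> rhs predicate ("X wrote Y" implies "X is the author of Y").
RULES = {
    ("wrote", "author_of"),
}

def expand(fact):
    """Yield the fact itself plus every fact derivable by one rule application."""
    pred, x, y = fact
    yield fact
    for lhs, rhs in RULES:
        if pred == lhs:
            yield (rhs, x, y)

# "Who is the author of Hamlet?" is now answerable from "Shakespeare wrote Hamlet".
facts = list(expand(("wrote", "Shakespeare", "Hamlet")))
print(("author_of", "Shakespeare", "Hamlet") in facts)  # -> True
```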
Impact on Survey Generation | that contains two sets of cited papers and corresponding citing sentences, one on Question Answering (QA) with 10 papers and the other on Dependency Parsing (DP) with 16 papers. |
Introduction | selves to solve tasks requiring more complex reasoning and synthesis of information; many other tasks must be solved to achieve human-like performance on tasks such as Question Answering.
Introduction | Techniques developed for RTE have now been successfully applied in the domains of Question Answering (Harabagiu and Hickl, 2006) and Machine Translation (Pado et al., 2009; Mirkin et al., 2009).
Introduction | The RTE task has been designed specifically to exercise textual inference capabilities, in a format that would make RTE systems potentially useful components in other “deep” NLP tasks such as Question Answering and Machine Translation. |
Abstract | This paper presents a framework for automatically processing information coming from community Question Answering (cQA) portals with the purpose of generating a trustworthy, complete, relevant, and succinct summary in response to a question.
Introduction | Community Question Answering (cQA) portals are an example of Social Media where a user's information need is expressed as a question, for which a best answer is selected from those provided by other users.
Related Work | Our approach differs in two fundamental aspects: it takes into consideration the peculiarities of the input data by exploiting the nature of UGC and the available metadata; additionally, along with relevance, we address challenges that are specific to Question Answering, such as Coverage and Novelty.
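As an illustration of trading relevance off against novelty when selecting cQA answer sentences for a summary, here is a minimal MMR-style greedy selection sketch; the word-overlap scorer and the example answers are crude stand-ins, not the system's actual model:

```python
def tokens(s: str) -> set:
    """Crude tokenizer: lowercase words with trailing punctuation stripped."""
    return {w.strip(".,?!").lower() for w in s.split()}

def overlap(a: str, b: str) -> float:
    """Jaccard word overlap as a stand-in for a real relevance/similarity model."""
    wa, wb = tokens(a), tokens(b)
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def select(question: str, candidates: list, k: int = 2, lam: float = 0.7) -> list:
    """Greedily pick answer sentences that are relevant to the question but
    novel with respect to sentences already selected (coverage via novelty)."""
    summary, pool = [], list(candidates)
    while pool and len(summary) < k:
        def mmr(s):
            redundancy = max((overlap(s, t) for t in summary), default=0.0)
            return lam * overlap(s, question) - (1 - lam) * redundancy
        best = max(pool, key=mmr)
        summary.append(best)
        pool.remove(best)
    return summary

answers = [
    "Reinstall the driver from the vendor site.",
    "You should reinstall the driver.",
    "Check the cable before anything else.",
]
# Picks one "reinstall" sentence, then prefers the novel "cable" advice over
# the redundant second "reinstall" sentence.
print(select("How do I fix my monitor driver?", answers))
```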