Abstract | In this paper, we study the answer sentence selection problem for question answering.
Conclusions | First, although we focus on improving TREC-style open-domain question answering in this work, we would like to apply the proposed technology to other QA scenarios, such as community-based QA (CQA). |
Conclusions | Finally, we would like to improve our system for the answer sentence selection task and for question answering in general. |
Experiments | Although we have demonstrated the benefits of leveraging various lexical semantic models to help find the association between words, the problem of question answering is nevertheless far from solved by word-based approaches alone.
Experiments | It is hard to believe that a pure word-matching model would be able to solve this type of “inferential question answering” problem. |
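To make the limitation concrete, a pure word-matching baseline for answer sentence selection can be sketched as follows. This is a minimal illustration, not one of the models evaluated in the papers above; the function names, example question, and candidate sentences are our own.

```python
import re

def tokenize(text):
    """Lowercase and split on non-alphanumeric characters."""
    return [t for t in re.split(r"\W+", text.lower()) if t]

def overlap_score(question, sentence):
    """Fraction of distinct question words that also appear in the candidate."""
    q_words = set(tokenize(question))
    s_words = set(tokenize(sentence))
    if not q_words:
        return 0.0
    return len(q_words & s_words) / len(q_words)

def rank_candidates(question, candidates):
    """Return candidate sentences sorted by word-overlap score, best first."""
    return sorted(candidates, key=lambda s: overlap_score(question, s), reverse=True)

question = "When was the Lincoln Memorial dedicated?"
candidates = [
    "The Lincoln Memorial was dedicated in May 1922.",
    "The memorial sits at the western end of the National Mall.",
]
best = rank_candidates(question, candidates)[0]
```

A baseline like this ranks sentences purely by lexical overlap, which is exactly why it fails on the "inferential" questions discussed above: the correct answer sentence may share few or no words with the question.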
Introduction | Open-domain question answering (QA), which fulfills a user’s information need by outputting direct answers to natural language queries, is a challenging but important problem (Etzioni, 2011). |
Related Work | While the task of question answering has a long history dating back to the dawn of artificial intelligence, early systems like STUDENT (Winograd, 1977) and LUNAR (Woods, 1973) were typically designed to demonstrate natural language understanding for a small and specific domain.
Related Work | The Text REtrieval Conference (TREC) Question Answering Track was arguably the first large-scale evaluation of open-domain question answering (Voorhees and Tice, 2000). |
Related Work | The Jeopardy! quiz show provides another open-domain question answering setting, in which IBM’s Watson system famously beat the two highest-ranked human players (Ferrucci, 2012).
Abstract | We study question answering as a machine learning problem, and induce a function that maps open-domain questions to queries over a database of web extractions. |
Error Analysis | Approximately 6% of the questions were answered at precision 0.4.
Introduction | Open-domain question answering (QA) is a longstanding, unsolved problem. |
Introduction | We introduce PARALEX, an end-to-end open-domain question answering system.
Overview of the Approach | Model. The question answering model includes a lexicon and a linear ranking function.
Overview of the Approach | Evaluation. In Section 8, we evaluate our system against various baselines on the end task of question answering against a large database of facts extracted from the web.
Related Work | More recently, researchers have created systems that use machine learning techniques to automatically construct question answering systems from data (Zelle and Mooney, 1996; Popescu et al., 2004; Zettlemoyer and Collins, 2005; Clarke et al., 2010; Liang et al., 2011).
Related Work | These systems have the ability to handle questions with complex semantics on small domain-specific databases like GeoQuery (Tang and Mooney, 2001) or subsets of Freebase (Cai and Yates, 2013), but have yet to scale to the task of general, open-domain question answering.
Background | Popescu et al. (2003, 2004) proposed the PRECISE system, which does not require labeled examples and can be directly applied to question answering with a database.
Background | Figure 1: End-to-end question answering by GUSP for the sentence “get flight from toronto to san diego stopping in dtw”.
Experiments | The numbers for GUSP-FULL and GUSP++ are end-to-end question answering accuracy, whereas the numbers for ZC07 and FUBL are recall on exact match in logical forms. |
Experiments | Table 2: Comparison of question answering accuracy in ablation experiments. |
Grounded Unsupervised Semantic Parsing | Figure 1 shows an example of end-to-end question answering using GUSP. |
Introduction | We evaluated GUSP on end-to-end question answering using the ATIS dataset for semantic parsing (Zettlemoyer and Collins, 2007). |
Introduction | Despite these challenges, GUSP attains an accuracy of 84% in end-to-end question answering, effectively tying with the state-of-the-art supervised approaches (85% by Zettlemoyer & Collins (2007), 83% by Kwiatkowski et al.).
Abstract | Question answering systems have been developed for many languages, but most resources were created for English, which can be a problem when developing a system in another language such as French. |
Introduction | In question answering (QA), as in most Natural Language Processing domains, English is the best-resourced language in terms of corpora, lexicons, and systems.
Introduction | While developing a question answering system for French, we were thus limited by the lack of resources for this language. |
Introduction | Section 5 details the related work in Question Answering.
Problem definition | A Question Answering (QA) system aims at returning a precise answer to a natural language question: if asked “How large is the Lincoln Memorial?”, a QA system should return the answer “164 acres” as well as a justifying snippet.
Related work | Most question answering systems include question classification, which is generally based on supervised learning. |
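A minimal sketch of such a supervised question classifier is given below, using a multinomial naive Bayes model over bag-of-words features with add-one smoothing. The training examples and the label set (PERSON, LOCATION) are invented for illustration; real systems typically use a taxonomy such as Li and Roth's and far richer features.

```python
import math
from collections import Counter, defaultdict

def train_nb(examples):
    """Train a multinomial naive Bayes question classifier.
    examples: list of (question, label) pairs."""
    label_counts = Counter(label for _, label in examples)
    word_counts = defaultdict(Counter)  # label -> word -> count
    vocab = set()
    for question, label in examples:
        for word in question.lower().split():
            word_counts[label][word] += 1
            vocab.add(word)
    return label_counts, word_counts, vocab

def classify(question, model):
    """Return the most probable label under add-one smoothing."""
    label_counts, word_counts, vocab = model
    total = sum(label_counts.values())
    best_label, best_lp = None, float("-inf")
    for label in label_counts:
        lp = math.log(label_counts[label] / total)  # log prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in question.lower().split():
            lp += math.log((word_counts[label][word] + 1) / denom)
        if lp > best_lp:
            best_label, best_lp = label, lp
    return best_label

train = [
    ("who wrote hamlet", "PERSON"),
    ("who founded microsoft", "PERSON"),
    ("where is the louvre", "LOCATION"),
    ("where was mozart born", "LOCATION"),
]
model = train_nb(train)
```

The predicted label (the expected answer type) is then used downstream to filter or rerank candidate answers.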
Abstract | In community question answering (CQA) sites, malicious users may provide deceptive answers to promote their products or services.
Deceptive Answer Prediction with User Preference Graph | Figure 1(a) shows the general process in a question answering
Deceptive Answer Prediction with User Preference Graph | Based on the two above assumptions, we can extract three user preference relationships (with the same preference) from the question answering example in Figure 1(a): u1 ∼ u5, u2 ∼ u6, u1 ∼ u3, as shown in Figure 1(b).
Experiments | Confucius is a community question answering site, developed by Google. |
Proposed Features | 3.2.1 Question Answer Relevance. The main characteristic of an answer in Community
Abstract | This paper presents two minimum Bayes risk (MBR) based Answer Re-ranking (MBRAR) approaches for the question answering (QA) task. |
Introduction | This work makes further exploration along this line of research, by applying MBR technique to question answering (QA). |
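The core MBR decision rule behind such re-ranking can be sketched as follows: instead of taking the single highest-scoring answer, pick the candidate that minimizes expected risk, i.e. maximizes expected similarity to the other candidates under their posterior probabilities. The candidate list, posteriors, and the token-F1 similarity below are illustrative stand-ins, not the features used in the MBRAR paper.

```python
def mbr_select(candidates, posteriors, similarity):
    """Pick the candidate with minimum expected risk, i.e. maximum
    expected similarity to all candidates under the posterior.
    candidates: list of answer strings
    posteriors: matching list of probabilities (should sum to ~1)
    similarity: function(a, b) -> gain in [0, 1]"""
    best, best_gain = None, float("-inf")
    for a in candidates:
        gain = sum(p * similarity(a, b) for b, p in zip(candidates, posteriors))
        if gain > best_gain:
            best, best_gain = a, gain
    return best

def token_f1(a, b):
    """Toy similarity: harmonic mean of token-overlap precision and recall."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    common = len(ta & tb)
    if common == 0:
        return 0.0
    p, r = common / len(ta), common / len(tb)
    return 2 * p * r / (p + r)

candidates = ["barack obama", "obama", "george bush"]
posteriors = [0.4, 0.35, 0.25]
answer = mbr_select(candidates, posteriors, token_f1)
```

Note how mutually supporting candidates ("barack obama" and "obama") reinforce each other under the expected-gain criterion, so a cluster of similar answers can outrank an isolated candidate.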
Introduction | A typical factoid question answering system automatically answers questions, in most cases asking about entities, and usually consists of three key components: question understanding, passage retrieval, and answer extraction.
Abstract | Information Retrieval (IR) and Answer Extraction are often designed as isolated or loosely connected components in Question Answering (QA), with repeated over-engineering of IR and no guaranteed performance gain for QA.
Experiments | at the three stages of question answering: |
Introduction | The overall performance of a Question Answering system is bounded by its Information Retrieval (IR) front end, resulting in research specifically on Information Retrieval for Question Answering (IR4QA) (Greenwood, 2008; Sakai et al., 2010). |
Abstract | Community question answering (CQA) has become an increasingly popular research topic. |
Experiments | Each question consists of four parts: “question title”, “question description”, “question answers”, and “question category”.
Introduction | With the development of Web 2.0, community question answering (CQA) services like Yahoo! Answers