Index of papers in Proc. ACL that mention
  • question answering
Qu, Zhonghua and Liu, Yang
Abstract
Online forums are becoming a popular resource for state-of-the-art question answering (QA) systems.
Abstract
Most prior work focused on extracting only question and answer sentences from user conversations.
Introduction
Automatic Question Answering (QA) systems rely heavily on good sources of data that contain questions and answers.
Introduction
Question answering forums, such as technical support forums, are places where users find answers through conversations.
Introduction
Because of their nature as online communities, question answering forums provide more up-to-date answers to new problems.
Related Work
This knowledge source could substantially help automatic question answering systems.
Related Work
An approach using email structure to detect and summarize question-answer pairs was introduced in (Shrestha and McKeown, 2004).
Related Work
In this paper, in order to provide a better foundation for question answer detection in online forums, we investigate tagging sentences with a much richer set of categories, as well as identifying their dependency relationships.
question answering is mentioned in 16 sentences in this paper.
Yih, Wen-tau and Chang, Ming-Wei and Meek, Christopher and Pastusiak, Andrzej
Abstract
In this paper, we study the answer sentence selection problem for question answering.
Conclusions
First, although we focus on improving TREC-style open-domain question answering in this work, we would like to apply the proposed technology to other QA scenarios, such as community-based QA (CQA).
Conclusions
Finally, we would like to improve our system for the answer sentence selection task and for question answering in general.
Experiments
Although we have demonstrated the benefits of leveraging various lexical semantic models to help find the association between words, the problem of question answering is nevertheless far from solved using the word-based approach.
Experiments
It is hard to believe that a pure word-matching model would be able to solve this type of “inferential question answering” problem.
Introduction
Open-domain question answering (QA), which fulfills a user’s information need by outputting direct answers to natural language queries, is a challenging but important problem (Etzioni, 2011).
Related Work
While the task of question answering has a long history dating back to the dawn of artificial intelligence, early systems like STUDENT (Winograd, 1977) and LUNAR (Woods, 1973) were typically designed to demonstrate natural language understanding for a small and specific domain.
Related Work
The Text REtrieval Conference (TREC) Question Answering Track was arguably the first large-scale evaluation of open-domain question answering (Voorhees and Tice, 2000).
Related Work
The Jeopardy! quiz show provides another open-domain question answering setting, in which IBM’s Watson system famously beat the two highest-ranked players (Ferrucci, 2012).
question answering is mentioned in 9 sentences in this paper.
Kothari, Govind and Negi, Sumit and Faruquie, Tanveer A. and Chakaravarthy, Venkatesan T. and Subramaniam, L. Venkata
Abstract
This has resulted in the growth of SMS-based Question Answering (QA) services.
Abstract
In this work we present an automatic FAQ-based question answering system for SMS users.
Introduction
Most of these contact center based services and other regular services like “AQA 63336” by Issuebits Ltd, GTIP by AlienPant Ltd., “Texperts” by Number UK Ltd. and “ChaCha” use human agents to understand the SMS text and respond to these SMS queries. The nature of texting language, which has misspellings, nonstandard abbreviations, transliterations, phonetic substitutions and omissions as a rule rather than as an exception, makes it difficult to build automated question answering systems around SMS technology.
Introduction
Unlike other automatic question answering systems that focus on generating or searching for answers, in an FAQ database the questions and answers are already provided by an expert.
Introduction
In this paper we present an FAQ-based question answering system over an SMS interface.
Prior Work
Information retrieval based systems treat question answering as an information retrieval problem.
Prior Work
In FAQ-based question answering, where the FAQ provides a ready-made database of question-answer pairs, the main task is to find the closest matching question in order to retrieve the relevant answer (Sneiders, 1999; Song et al., 2007).
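As a rough sketch of this closest-question matching step (illustrative only, not the cited systems; the FAQ entries and the faq_lookup helper are invented), TF-IDF cosine similarity over the stored questions is enough for a toy example:

    # Toy FAQ retrieval by closest-question matching (hypothetical data).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    faq = [
        ("How do I reset my password?", "Use the 'Forgot password' link on the login page."),
        ("How do I check my account balance?", "Dial *123# from your registered number."),
        ("How can I update my address?", "Submit the change-of-address form in your profile."),
    ]

    vectorizer = TfidfVectorizer()
    question_matrix = vectorizer.fit_transform([q for q, _ in faq])

    def faq_lookup(user_query):
        """Return the answer paired with the FAQ question closest to the query."""
        scores = cosine_similarity(vectorizer.transform([user_query]), question_matrix)[0]
        return faq[scores.argmax()][1]

    print(faq_lookup("how to reset password"))

Note that the SMS setting of this paper adds a noisy-token matching layer on top of such a lookup, which the sketch omits.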
Prior Work
We address the challenges in building an FAQ-based question answering system over an SMS interface.
question answering is mentioned in 8 sentences in this paper.
Fader, Anthony and Zettlemoyer, Luke and Etzioni, Oren
Abstract
We study question answering as a machine learning problem, and induce a function that maps open-domain questions to queries over a database of web extractions.
Error Analysis
Approximately 6% of the questions answered at precision 0.4.
Introduction
Open-domain question answering (QA) is a longstanding, unsolved problem.
Introduction
We introduce PARALEX, an end-to-end open-domain question answering system.
Overview of the Approach
Model The question answering model includes a lexicon and a linear ranking function.
Overview of the Approach
Evaluation In Section 8, we evaluate our system against various baselines on the end-task of question answering against a large database of facts extracted from the web.
Related Work
More recently, researchers have created systems that use machine learning techniques to automatically construct question answering systems from data (Zelle and Mooney, 1996; Popescu et al., 2004; Zettlemoyer and Collins, 2005; Clarke et al., 2010; Liang et al., 2011).
Related Work
These systems have the ability to handle questions with complex semantics on small domain-specific databases like GeoQuery (Tang and Mooney, 2001) or subsets of Freebase (Cai and Yates, 2013), but have yet to scale to the task of general, open-domain question answering.
question answering is mentioned in 8 sentences in this paper.
Poon, Hoifung
Background
Popescu et al. (2003, 2004) proposed the PRECISE system, which does not require labeled examples and can be directly applied to question answering with a database.
Background
Figure 1: End-to-end question answering by GUSP for the sentence “get flight from toronto to san diego stopping in dtw”.
Experiments
The numbers for GUSP-FULL and GUSP++ are end-to-end question answering accuracy, whereas the numbers for ZC07 and FUBL are recall on exact match in logical forms.
Experiments
Table 2: Comparison of question answering accuracy in ablation experiments.
Grounded Unsupervised Semantic Parsing
Figure 1 shows an example of end-to-end question answering using GUSP.
Introduction
We evaluated GUSP on end-to-end question answering using the ATIS dataset for semantic parsing (Zettlemoyer and Collins, 2007).
Introduction
Despite these challenges, GUSP attains an accuracy of 84% in end-to-end question answering, effectively tying with the state-of-the-art supervised approaches (85% by Zettlemoyer & Collins (2007); 83% by Kwiatkowski et al.).
question answering is mentioned in 7 sentences in this paper.
Bao, Junwei and Duan, Nan and Zhou, Ming and Zhao, Tiejun
Abstract
A typical knowledge-based question answering (KB-QA) system faces two challenges: one is to transform natural language questions into their meaning representations (MRs); the other is to retrieve answers from knowledge bases (KBs) using generated MRs.
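As a toy illustration of these two steps (the patterns, KB table, and answer function below are invented, not the paper's method), a question is first mapped to a meaning representation, here a simple (entity, relation) pair, which is then executed against the KB:

    # Step 1: question -> meaning representation; Step 2: MR -> KB lookup.
    import re

    kb = {("France", "capital"): "Paris", ("Everest", "height"): "8848 m"}

    patterns = [
        (re.compile(r"what is the capital of (\w+)\??", re.I), "capital"),
        (re.compile(r"how high is (\w+)\??", re.I), "height"),
    ]

    def answer(question):
        for pattern, relation in patterns:
            match = pattern.match(question)
            if match:
                mr = (match.group(1), relation)  # the meaning representation
                return kb.get(mr)                # retrieval from the KB
        return None

    print(answer("What is the capital of France?"))  # -> Paris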
Introduction
Knowledge-based question answering (KB-QA) computes answers to natural language (NL) questions based on existing knowledge bases (KBs).
Introduction
Unlike existing KB-QA systems which treat semantic parsing and answer retrieval as two cascaded tasks, this paper presents a unified framework that can integrate semantic parsing into the question answering procedure directly.
Introduction
Our work intersects with two research directions: semantic parsing and question answering.
question answering is mentioned in 6 sentences in this paper.
Ligozat, Anne-Laure
Abstract
Question answering systems have been developed for many languages, but most resources were created for English, which can be a problem when developing a system in another language such as French.
Introduction
In question answering (QA), as in most Natural Language Processing domains, English is the best resourced language, in terms of corpora, lexicons, or systems.
Introduction
While developing a question answering system for French, we were thus limited by the lack of resources for this language.
Introduction
Section 5 details related work in Question Answering.
Problem definition
A Question Answering (QA) system aims at returning a precise answer to a natural language question: if asked “How large is the Lincoln Memorial?”, a QA system should return the answer “164 acres” as well as a justifying snippet.
Related work
Most question answering systems include question classification, which is generally based on supervised learning.
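A minimal sketch of such a supervised question classifier (the tiny training set and label inventory below are invented; real systems are trained on resources such as the TREC question classification data):

    # Bag-of-ngrams question classifier (hypothetical toy data).
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    questions = [
        "Who wrote Hamlet?", "Who is the president of France?",
        "Where is the Lincoln Memorial?", "Where was Napoleon born?",
        "When did World War II end?", "When was the telephone invented?",
    ]
    labels = ["PERSON", "PERSON", "LOCATION", "LOCATION", "DATE", "DATE"]

    classifier = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression())
    classifier.fit(questions, labels)
    print(classifier.predict(["Where is the Eiffel Tower?"]))  # -> ['LOCATION']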
question answering is mentioned in 6 sentences in this paper.
Berant, Jonathan and Liang, Percy
Discussion
While it has been shown that paraphrasing methods are useful for question answering (Harabagiu and Hickl, 2006) and relation extraction (Romano et al., 2006), this is, to the best of our knowledge, the first paper to perform semantic parsing through paraphrasing.
Discussion
We believe that our approach is particularly suitable for scenarios such as factoid question answering, where the space of logical forms is somewhat constrained and a few generation rules suffice to reduce the problem to paraphrasing.
Discussion
who presented a paraphrase-driven question answering system.
Introduction
Scaling semantic parsers to large knowledge bases has attracted substantial attention recently (Cai and Yates, 2013; Berant et al., 2013; Kwiatkowski et al., 2013), since it drives applications such as question answering (QA) and information extraction (IE).
Introduction
Our work relates to recent lines of research in semantic parsing and question answering .
question answering is mentioned in 5 sentences in this paper.
Li, Fangtao and Gao, Yang and Zhou, Shuchang and Si, Xiance and Dai, Decheng
Abstract
In Community question answering (QA) sites, malicious users may provide deceptive answers to promote their products or services.
Deceptive Answer Prediction with User Preference Graph
Figure 1(a) shows the general process in a question answering site.
Deceptive Answer Prediction with User Preference Graph
Based on the two assumptions above, we can extract three user preference relationships (with the same preference) from the question answering example in Figure 1(a), as shown in Figure 1(b).
Experiments
Confucius is a community question answering site, developed by Google.
Proposed Features
3.2.1 Question Answer Relevance. The main characteristic of an answer in Community
question answering is mentioned in 5 sentences in this paper.
Duan, Huizhong and Cao, Yunbo and Lin, Chin-Yew and Yu, Yong
Using Translation Probability
(2005) used a Question Answer Database (known as QUAB) to support interactive question answering .
Using Translation Probability
Question answering (e.g., Pasca and Harabagiu, 2001; Echihabi and Marcu, 2003; Voorhees, 2004; Metzler and Croft, 2005) relates to question search.
Using Translation Probability
Question answering automatically extracts short answers for a relatively limited class of question types from document collections.
question answering is mentioned in 4 sentences in this paper.
Navigli, Roberto and Velardi, Paola
Abstract
The task has proven useful in many research areas including ontology learning, relation extraction and question answering.
Introduction
Definitions are also harvested in Question Answering to deal with “what is” questions (Cui et al., 2007; Saggion, 2004).
Related Work
Cui et al. (2007) propose the use of probabilistic lexico-semantic patterns, called soft patterns, for definitional question answering in the TREC contest.
Related Work
Thanks to its generalization power, this method is the most closely related to our work; however, the task of definitional question answering to which it is applied is slightly different from that of definition extraction, so a direct performance comparison is not possible.
question answering is mentioned in 4 sentences in this paper.
Poon, Hoifung and Domingos, Pedro
Background 2.1 Ontology Learning
The semantic parser extracts knowledge from input text and converts it into logical form (the semantic parse), which can then be used in logical and probabilistic inference and support end tasks such as question answering.
Experiments
Table 1: Comparison of question answering results on the GENIA dataset.
Experiments
To use DIRT in question answering , it was queried to obtain similar paths for the relation of the question, which were then used to match sentences.
Introduction
Finally, experiments on a biomedical knowledge acquisition and question answering task show that OntoUSP can greatly outperform USP and previous systems.
question answering is mentioned in 4 sentences in this paper.
Zhang, Jiajun and Liu, Shujie and Li, Mu and Zhou, Ming and Zong, Chengqing
Discussions
Besides SMT, the semantic phrase embeddings can be used in other cross-lingual tasks, such as cross-lingual question answering , since the semantic similarity between phrases in different languages can be calculated accurately.
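As a sketch of what such a cross-lingual similarity computation might look like (the bilingual phrase vectors below are invented; real embeddings would come from a trained model):

    import math

    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

    # Hypothetical phrase embeddings living in one shared semantic space.
    emb = {
        "buy a ticket": [0.80, 0.10, 0.30],
        "acheter un billet": [0.75, 0.15, 0.28],
        "eat an apple": [0.10, 0.90, 0.20],
    }
    print(cosine(emb["buy a ticket"], emb["acheter un billet"]))  # high (~1.0)
    print(cosine(emb["buy a ticket"], emb["eat an apple"]))       # low  (~0.29)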
Discussions
monolingual NLP tasks which depend on good phrase representations or semantic similarity between phrases, such as named entity recognition, parsing, textual entailment, question answering and paraphrase detection.
Introduction
cross-lingual question answering) and monolingual applications such as textual entailment, question answering and paraphrase detection.
question answering is mentioned in 3 sentences in this paper.
Kushman, Nate and Artzi, Yoav and Zettlemoyer, Luke and Barzilay, Regina
Introduction
The described state can be modeled with a system of equations whose solution specifies the questions’ answers.
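For instance (an invented word problem, solved with sympy to make the idea concrete):

    # "Tickets cost $4 for adults and $1.50 for children. 2200 tickets
    #  sold for $5050. How many adult and child tickets were sold?"
    from sympy import Eq, solve, symbols

    a, c = symbols("a c")                  # adult and child ticket counts
    equations = [
        Eq(a + c, 2200),                   # total number of tickets
        Eq(4 * a + 1.5 * c, 5050),         # total revenue in dollars
    ]
    print(solve(equations, [a, c]))        # -> a = 700, c = 1500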
Related Work
Examples include question answering (Clarke et al., 2010; Cai and Yates, 2013a; Cai and Yates, 2013b; Berant et al., 2013; Kwiatkowski et al.).
Related Work
We focus on learning from varied supervision, including question answers and equation systems, both of which can be obtained reliably from annotators with no linguistic training and only basic math knowledge.
question answering is mentioned in 3 sentences in this paper.
Zhou, Guangyou and Liu, Fang and Liu, Yang and He, Shizhu and Zhao, Jun
Abstract
Community question answering (CQA) has become an increasingly popular research topic.
Experiments
Each question consists of four parts: “question title”, “question description”, “question answers” and “question category”.
Introduction
With the development of Web 2.0, community question answering (CQA) services like Yahoo! Answers
question answering is mentioned in 3 sentences in this paper.
Yao, Xuchen and Van Durme, Benjamin and Clark, Peter
Abstract
Information Retrieval (IR) and Answer Extraction are often designed as isolated or loosely connected components in Question Answering (QA), with repeated over-engineering on IR, and not necessarily performance gain for QA.
Experiments
at the three stages of question answering:
Introduction
The overall performance of a Question Answering system is bounded by its Information Retrieval (IR) front end, resulting in research specifically on Information Retrieval for Question Answering (IR4QA) (Greenwood, 2008; Sakai et al., 2010).
question answering is mentioned in 3 sentences in this paper.
Duan, Nan
Abstract
This paper presents two minimum Bayes risk (MBR) based Answer Re-ranking (MBRAR) approaches for the question answering (QA) task.
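Generically, MBR re-ranking selects the candidate with the lowest expected loss, equivalently the highest expected gain, against all candidates weighted by their posterior probabilities. A sketch with invented candidates and a crude token-overlap gain (not the paper's MBRAR models):

    def token_overlap(a, b):
        """Jaccard overlap between answer token sets, used as the gain."""
        ta, tb = set(a.lower().split()), set(b.lower().split())
        return len(ta & tb) / len(ta | tb)

    def mbr_rerank(candidates):
        """candidates: list of (answer string, posterior probability) pairs."""
        def expected_gain(answer):
            return sum(p * token_overlap(answer, other) for other, p in candidates)
        return max((a for a, _ in candidates), key=expected_gain)

    candidates = [("Barack Obama", 0.40), ("Obama", 0.35), ("George Bush", 0.25)]
    print(mbr_rerank(candidates))  # "Barack Obama": it is reinforced by "Obama"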
Introduction
This work makes further exploration along this line of research, by applying MBR technique to question answering (QA).
Introduction
The function of a typical factoid question answering system is to automatically give answers to questions, in most cases asking about entities; such a system usually consists of three key components: question understanding, passage retrieval, and answer extraction.
question answering is mentioned in 3 sentences in this paper.
Tomasoni, Mattia and Huang, Minlie
Abstract
This paper presents a framework for automatically processing information coming from community Question Answering (cQA) portals with the purpose of generating a trustworthy, complete, relevant and succinct summary in response to a question.
Introduction
Community Question Answering (cQA) portals are an example of Social Media where the information need of a user is expressed in the form of a question for which a best answer is picked among the ones generated by other users.
Related Work
Our approach differs in two fundamental aspects: it took into consideration the peculiarities of the input data by exploiting the nature of UGC and available metadata; additionally, along with relevance, we addressed challenges that are specific to Question Answering, such as Coverage and Novelty.
question answering is mentioned in 3 sentences in this paper.
Sammons, Mark and Vydiswaran, V.G.Vinod and Roth, Dan
Introduction
selves to solve tasks requiring more complex reasoning and synthesis of information; many other tasks must be solved to achieve human-like performance on tasks such as Question Answering .
Introduction
Techniques developed for RTE have now been successfully applied in the domains of Question Answering (Harabagiu and Hickl, 2006) and Machine Translation (Pado et al., 2009), (Mirkin et al., 2009).
Introduction
The RTE task has been designed specifically to exercise textual inference capabilities, in a format that would make RTE systems potentially useful components in other “deep” NLP tasks such as Question Answering and Machine Translation.
question answering is mentioned in 3 sentences in this paper.
Qazvinian, Vahed and Radev, Dragomir R.
Conclusion
Our experiments on generating surveys for Question Answering and Dependency Parsing show how surveys generated using such context information along with citation sentences have higher quality than those built using citations alone.
Data
Lin and Pantel (2001) extract inference rules, which are related to paraphrases (for example, X wrote Y implies X is the author of Y), to improve question answering.
Impact on Survey Generation
that contains two sets of cited papers and corresponding citing sentences, one on Question Answering (QA) with 10 papers and the other on Dependency Parsing (DP) with 16 papers.
question answering is mentioned in 3 sentences in this paper.
Surdeanu, Mihai and Ciaramita, Massimiliano and Zaragoza, Hugo
Approach
Since our focus is on exploring the usability of the answer content, we do not perform retrieval by finding similar questions already answered (Jeon et al., 2005), i.e., our answer collection C contains only the site’s answers without the corresponding questions answered.
Approach
We compute the Pointwise Mutual Information (PMI) and Chi-square (χ²) association measures between each question-answer word pair in the query-log corpus.
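For reference, PMI between a question word q and an answer word a is log of the ratio of their joint probability to the product of their marginals; a sketch over an invented toy corpus:

    import math
    from collections import Counter

    # Toy "query log": (question words, answer words) pairs.
    pairs = [
        ({"capital", "france"}, {"paris"}),
        ({"capital", "italy"}, {"rome"}),
        ({"population", "france"}, {"million"}),
    ]

    joint, q_counts, a_counts = Counter(), Counter(), Counter()
    for q_words, a_words in pairs:
        q_counts.update(q_words)
        a_counts.update(a_words)
        joint.update((q, a) for q in q_words for a in a_words)

    n = len(pairs)
    def pmi(q, a):
        return math.log((joint[(q, a)] / n) / ((q_counts[q] / n) * (a_counts[a] / n)))

    print(pmi("france", "paris"))  # positive: the pair co-occurs more than chance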
Introduction
The problem of Question Answering (QA) has received considerable attention in the past few years.
question answering is mentioned in 3 sentences in this paper.
Kaisser, Michael and Hearst, Marti A. and Lowe, John B.
Conclusions and Future Work
For classifying according to type, as discussed above, most automated query classification for web logs has been based on the topic of the query rather than on the intended result type, but the question answering literature has intensively investigated how to predict appropriate answer types.
Related Work
The candidate answer types are often drawn from the types of questions that have appeared in the TREC Question Answering track (Voorhees, 2003).
Study Goals
These categories include answer types used in question answering research as well as (to better capture the diverse nature of web queries) several more general response types such as Advice and General Information.
question answering is mentioned in 3 sentences in this paper.