Linguistic Considerations in Automatic Question Generation
Mazidi, Karen and Nielsen, Rodney D.

Article Structure

Abstract

As students read expository text, comprehension is improved by pausing to answer questions that reinforce the material.

Introduction

Studies of student learning show that answering questions increases depth of student learning, facilitates transfer learning, and improves students’ retention of material (McDaniel et al., 2007; Carpenter, 2012; Roediger and Pyc, 2012).

Related Work

Approaches to automatic question generation from text span nearly four decades.

Approach

The system consists of a straightforward pipeline.

Results

This paper focuses on evaluating generated questions primarily in terms of their linguistic quality, as did Heilman and Smith (2010a).

Linguistic Challenges

Natural language generation faces many linguistic challenges.

Conclusions

Roediger and Pyc (2012) advocate assisting students in building a strong knowledge base because creative discoveries are unlikely to occur when students do not have a sound set of facts and principles at their command.

Topics

coreference

Appears in 6 sentences as: Coreference (2) coreference (4)
In Linguistic Considerations in Automatic Question Generation
  1. Coreference resolution, which could help avoid vague question generation, is discussed in Section 5.
    Page 2, “Approach”
  2. Here we briefly describe three challenges: negation detection, coreference resolution, and verb forms.
    Page 4, “Linguistic Challenges”
  3. 5.2 Coreference Resolution
    Page 5, “Linguistic Challenges”
  4. Currently, our system does not use any type of coreference resolution.
    Page 5, “Linguistic Challenges”
  5. Experiments with existing coreference software performed well only for personal pronouns, which occur infrequently in most expository text.
    Page 5, “Linguistic Challenges”
  6. Not having coreference resolution leads to vague questions, some of which can be filtered as discussed previously.
    Page 5, “Linguistic Challenges”

See all papers in Proc. ACL 2014 that mention coreference.

semantic role

Appears in 6 sentences as: semantic role (5) semantic roles (1)
In Linguistic Considerations in Automatic Question Generation
  1. In the described system, semantic role labels of source sentences are used in a domain-independent manner to generate both questions and answers related to the source sentence.
    Page 1, “Abstract”
  2. (2013), which used semantic role labeling to identify patterns in the source text from which questions can be generated.
    Page 1, “Related Work”
  3. SENNA provides the tokenizing, POS tagging, syntactic constituency parsing, and semantic role labeling used in the system.
    Page 2, “Approach”
  4. SENNA produces separate semantic role labels for each predicate in the sentence.
    Page 2, “Approach”
  5. The most commonly used semantic roles are A0, A1 and A2, as well as the ArgM modifiers.
    Page 2, “Approach”
  6. For example, in “Plant roots and bacterial decay use carbon dioxide in the process of respiration,” the word “use” was classified as NN, leaving no predicate and no semantic role labels in this sentence.
    Page 5, “Linguistic Challenges”
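The entries above describe how the system uses SENNA's semantic role labels (A0, A1, A2, and the ArgM modifiers) to generate both questions and answers from a source sentence. As a minimal sketch of the general idea, assuming a hand-constructed frame rather than real SENNA output, a predicate's labeled arguments can be slotted into a question template, with one argument held out as the answer:

```python
# Sketch of template-based question generation from semantic role
# labels (A0 = agent, A1 = patient). The frame is hand-labeled for
# illustration; it is not actual SENNA output, and the template is
# hypothetical, not one of the paper's patterns.

def generate_question(frame):
    """Turn one predicate's role labels into a question/answer pair."""
    question = f"What does {frame['A0'].lower()} {frame['predicate']}?"
    answer = frame["A1"]  # the held-out argument becomes the answer
    return question, answer

# Hand-labeled frame for the sentence "The chloroplast produces glucose."
frame = {"predicate": "produce", "A0": "The chloroplast", "A1": "glucose"}

q, a = generate_question(frame)
print(q)  # What does the chloroplast produce?
print(a)  # glucose
```

A real system must also repair verb forms (here the bare form “produce” happens to fit after “does”), which is exactly one of the linguistic challenges the paper discusses.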


coreference resolution

Appears in 5 sentences as: Coreference Resolution (1) Coreference resolution (1) coreference resolution (3)
In Linguistic Considerations in Automatic Question Generation
  1. Coreference resolution, which could help avoid vague question generation, is discussed in Section 5.
    Page 2, “Approach”
  2. Here we briefly describe three challenges: negation detection, coreference resolution, and verb forms.
    Page 4, “Linguistic Challenges”
  3. 5.2 Coreference Resolution
    Page 5, “Linguistic Challenges”
  4. Currently, our system does not use any type of coreference resolution.
    Page 5, “Linguistic Challenges”
  5. Not having coreference resolution leads to vague questions, some of which can be filtered as discussed previously.
    Page 5, “Linguistic Challenges”


role labels

Appears in 5 sentences as: role labeling (2) role labels (3)
In Linguistic Considerations in Automatic Question Generation
  1. In the described system, semantic role labels of source sentences are used in a domain-independent manner to generate both questions and answers related to the source sentence.
    Page 1, “Abstract”
  2. (2013), which used semantic role labeling to identify patterns in the source text from which questions can be generated.
    Page 1, “Related Work”
  3. SENNA provides the tokenizing, POS tagging, syntactic constituency parsing, and semantic role labeling used in the system.
    Page 2, “Approach”
  4. SENNA produces separate semantic role labels for each predicate in the sentence.
    Page 2, “Approach”
  5. For example, in “Plant roots and bacterial decay use carbon dioxide in the process of respiration,” the word “use” was classified as NN, leaving no predicate and no semantic role labels in this sentence.
    Page 5, “Linguistic Challenges”


semantic role labels

Appears in 5 sentences as: semantic role labeling (2) semantic role labels (3)
In Linguistic Considerations in Automatic Question Generation
  1. In the described system, semantic role labels of source sentences are used in a domain-independent manner to generate both questions and answers related to the source sentence.
    Page 1, “Abstract”
  2. (2013), which used semantic role labeling to identify patterns in the source text from which questions can be generated.
    Page 1, “Related Work”
  3. SENNA provides the tokenizing, POS tagging, syntactic constituency parsing, and semantic role labeling used in the system.
    Page 2, “Approach”
  4. SENNA produces separate semantic role labels for each predicate in the sentence.
    Page 2, “Approach”
  5. For example, in “Plant roots and bacterial decay use carbon dioxide in the process of respiration,” the word “use” was classified as NN, leaving no predicate and no semantic role labels in this sentence.
    Page 5, “Linguistic Challenges”


error rate

Appears in 4 sentences as: error rate (6)
In Linguistic Considerations in Automatic Question Generation
  1. Evaluation results show a 44% reduction in the error rate relative to the best prior systems, averaging over all metrics, and up to 61% reduction in the error rate on grammaticality judgments.
    Page 1, “Abstract”
  2. As seen in Table 4, our results represent a 44% reduction in the error rate relative to Heilman and Smith on the average rating over all metrics, and as high as 61% reduction in the error rate on grammaticality judgments.
    Page 4, “Results”
  3. Interestingly, our system again achieved a 44% reduction in the error rate when averaging over all metrics, just as it did in the Heilman and Smith comparison.
    Page 4, “Results”
  4. Our system achieved a 44% reduction in the error rate relative to both the Heilman and Smith, and the Lindberg et al.
    Page 5, “Conclusions”
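The 44% and 61% figures above are relative reductions in error rate, i.e. the fraction of the baseline's error that the new system eliminates. A quick sketch of the arithmetic, using made-up placeholder error rates rather than the paper's data:

```python
# Relative error-rate reduction: what fraction of the baseline's
# error the new system eliminates. The numbers are placeholders,
# not values from the paper.

def relative_error_reduction(baseline_error, system_error):
    """(baseline - system) / baseline, as a fraction."""
    return (baseline_error - system_error) / baseline_error

# e.g., a baseline error rate of 0.50 cut to 0.28 is a 44% relative reduction
print(f"{relative_error_reduction(0.50, 0.28):.0%}")  # 44%
```

Note that the same relative reduction corresponds to different absolute gains depending on how high the baseline's error rate was.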
