Context-dependent Semantic Parsing for Time Expressions
Lee, Kenton and Artzi, Yoav and Dodge, Jesse and Zettlemoyer, Luke

Article Structure

Abstract

We present an approach for learning context-dependent semantic parsers to identify and interpret time expressions.

Introduction

Time expressions present a number of challenges for language understanding systems.

Formal Overview

Time Expressions We follow the TIMEX3 standard (Pustejovsky et al., 2005) for defining time expressions within documents.

Representing Time

We use simply typed lambda calculus to represent time expressions.

Parsing Time Expressions

We define a three-step derivation to resolve mentions to their TIMEX3 value.

Detection

The detection problem is to take an input document D and output a mention set M = {m_i | i = 1, …, n}.

Resolution

The resolution problem is, given a document D and a set of mentions M, to map each m ∈ M to the correct time expression e.
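Taken together, the two problems above can be sketched as plain functions. Everything in this snippet (the `Mention` type, the keyword list, the date arithmetic) is an illustrative stand-in, not the paper's CCG-based system:

```python
# Illustrative sketch of the two formal problems above (not the paper's
# system): detection maps a document to a mention set M; resolution maps
# each mention m in M to a time value, relative to the document date.
import datetime
from dataclasses import dataclass

@dataclass(frozen=True)
class Mention:
    start: int  # index of the first token of the mention
    end: int    # index one past the last token

def detect(tokens):
    """Return the mention set M. Stand-in for the paper's CCG-driven
    detector: here we simply match a tiny keyword list."""
    time_words = {"yesterday", "today", "tomorrow"}
    return {Mention(i, i + 1) for i, t in enumerate(tokens)
            if t.lower() in time_words}

def resolve(tokens, mention, doc_date):
    """Map a mention to a TIMEX3-style value string."""
    offsets = {"yesterday": -1, "today": 0, "tomorrow": 1}
    delta = offsets[tokens[mention.start].lower()]
    return (doc_date + datetime.timedelta(days=delta)).isoformat()

tokens = "The launch happened yesterday .".split()
mentions = detect(tokens)
values = {m: resolve(tokens, m, datetime.date(2013, 3, 14)) for m in mentions}
print(values)  # one mention ("yesterday"), resolved to "2013-03-13"
```

The real system replaces both toy functions with the shared CCG grammar: detection filters the phrases that grammar can parse, and resolution ranks context-dependent parses.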

Related Work

Semantic parsers map sentences to logical representations of their underlying meaning, e.g., Zelle

Experimental Setup

Data We evaluate performance on the TempEval-3 (UzZaman et al., 2013) and WikiWars (Mazur and Dale, 2010) datasets.

Results

End-to-end results Figure 4 shows development and test results for TempEval-3.

Conclusion

We presented the first context-dependent semantic parsing system to detect and resolve time expressions.

Topics

CCG

Appears in 19 sentences as: CCG (19)
In Context-dependent Semantic Parsing for Time Expressions
  1. For both tasks, we make use of a hand-engineered Combinatory Categorial Grammar (CCG) to construct a set of meaning representations that identify the time being described.
    Page 1, “Introduction”
  2. For the relatively closed-class time expressions, we demonstrate that it is possible to engineer a high quality CCG lexicon.
    Page 1, “Introduction”
  3. We use a log-linear CCG (Steedman, 1996; Clark and Curran, 2007) to rank possible meanings z ∈ Z for each mention m in a document D, as described in Section 4.
    Page 2, “Formal Overview”
  4. First, we use a CCG to generate an initial logical form for the mention.
    Page 3, “Parsing Time Expressions”
  5. Figure 1: A CCG parse tree for the mention “one week ago.” The tree includes forward (>) and backward (<) application, as well as two type-shifting operations.
    Page 3, “Parsing Time Expressions”
  6. CCG is a linguistically motivated categorial formalism for modeling a wide range of language phenomena (Steedman, 1996; Steedman, 2000).
    Page 3, “Parsing Time Expressions”
  7. A CCG is defined by a lexicon and a set of combinators.
    Page 3, “Parsing Time Expressions”
  8. For example, Figure 1 shows a CCG parse tree for the phrase “one week ago.” The parse tree is read top to bottom, starting from assigning categories to words using the lexicon.
    Page 3, “Parsing Time Expressions”
  9. Hand Engineered Lexicon To parse time expressions, we use a CCG lexicon that includes 287 manually designed entries, along with automatically generated entries such as numbers and common formats of dates and times.
    Page 3, “Parsing Time Expressions”
  10. Each context-dependent parse y specifies one operator of each type, which are applied to the logical form constructed by the CCG grammar, to produce the final, context-dependent logical form
    Page 4, “Parsing Time Expressions”
  11. Algorithm The detection algorithm considers all phrases that our CCG grammar A (Section 4) can parse, uses a learned classifier to further filter this set, and finally resolves conflicts between any overlapping predictions.
    Page 5, “Detection”


semantic parsing

Appears in 14 sentences as: semantic parser (4) Semantic parsers (1) semantic parsers (3) Semantic Parsing (1) semantic parsing (6)
In Context-dependent Semantic Parsing for Time Expressions
  1. We present an approach for learning context-dependent semantic parsers to identify and interpret time expressions.
    Page 1, “Abstract”
  2. In this paper, we present the first context-dependent semantic parsing approach for learning to identify and interpret time expressions, addressing all three challenges.
    Page 1, “Introduction”
  3. Recently, methods for learning probabilistic semantic parsers have been shown to address such limitations (Angeli et al., 2012; Angeli and Uszkoreit, 2013).
    Page 1, “Introduction”
  4. We propose to use a context-dependent semantic parser for both detection and resolution of time expressions.
    Page 1, “Introduction”
  5. Both detection (Section 5) and resolution (Section 6) rely on the semantic parser to identify likely mentions and resolve them within context.
    Page 2, “Formal Overview”
  6. (2012), who introduced semantic parsing for this task.
    Page 2, “Representing Time”
  7. Semantic parsers map sentences to logical representations of their underlying meaning, e.g., Zelle
    Page 6, “Related Work”
  8. introduced the idea of learning semantic parsers to resolve time expressions (Angeli et al., 2012) and showed that the approach can generalize to multiple languages (Angeli and Uszkoreit, 2013).
    Page 7, “Related Work”
  9. Similarly, Bethard demonstrated that a hand-engineered semantic parser is also effective (Bethard, 2013b).
    Page 7, “Related Work”
  10. However, these approaches did not use the semantic parser for detection and did not model linguistic context during resolution.
    Page 7, “Related Work”
  11. However, we are the first to use a semantic parsing grammar within a mention detection algorithm, thereby avoiding the need to represent the meaning of complete sentences, and the first to develop a context-dependent model for semantic parsing of time expressions.
    Page 7, “Related Work”


logical form

Appears in 13 sentences as: logical form (14) Logical Forms (1) logical forms (1)
In Context-dependent Semantic Parsing for Time Expressions
  1. First, we use a CCG to generate an initial logical form for the mention.
    Page 3, “Parsing Time Expressions”
  2. initial logical form, as appropriate for its context.
    Page 3, “Parsing Time Expressions”
  3. Finally, the logical form is resolved to a TIMEX3 value using a deterministic process.
    Page 3, “Parsing Time Expressions”
  4. Parsing concludes with a logical form representing the meaning of the complete mention.
    Page 3, “Parsing Time Expressions”
  5. We consider three types of context operations, each of which takes as input a logical form z′, modifies it, and returns a new logical form z.
    Page 4, “Parsing Time Expressions”
  6. Each context-dependent parse y specifies one operator of each type, which are applied to the logical form constructed by the CCG grammar, to produce the final, context-dependent logical form
    Page 4, “Parsing Time Expressions”
  7. For example, consider the mention “the following year”, which is represented using the logical form next(seq(year), ref_time).
    Page 4, “Parsing Time Expressions”
  8. will be launched in april”, the mention “april”, and its logical form april, we would like to resolve it to the coming April, and therefore modify it to nearest_forward(april, ref_time).
    Page 5, “Parsing Time Expressions”
  9. 4.3 Resolving Logical Forms
    Page 5, “Parsing Time Expressions”
  10. For a context-dependent parse y, we compute the TIMEX3 value TM(y) from the logical form z = LF(y) with a deterministic step that performs a single traversal of z.
    Page 5, “Parsing Time Expressions”
  11. We use a CKY algorithm to efficiently determine which phrases the CCG grammar can parse and only allow logical forms for which there exists some context in which they would produce a valid time expression, e.g.
    Page 5, “Detection”
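The deterministic traversal in snippet 10 can be roughly sketched as follows. The operator names follow the snippets above, but this implementation is an assumption, covering only `next` over year sequences rather than the paper's full operator set:

```python
# Minimal resolver for a logical form such as next(seq(year), ref_time),
# the representation of "the following year" in snippet 7. Logical forms
# are nested tuples; only a tiny operator subset is handled here.
import datetime

def resolve_lf(lf, ref_time):
    op = lf[0]
    if op == "ref_time":
        return ref_time            # the reference (document creation) time
    if op == "next":
        _, seq, anchor = lf
        anchored = resolve_lf(anchor, ref_time)
        if seq == ("seq", "year"): # the year after the anchor
            return anchored.replace(year=anchored.year + 1)
    raise ValueError(f"unhandled logical form: {lf!r}")

ref = datetime.date(1998, 6, 5)
lf = ("next", ("seq", "year"), ("ref_time",))
print(resolve_lf(lf, ref).year)  # -> 1999
```

A real resolver would additionally truncate the result to the right TIMEX3 granularity (here, the bare year 1999 rather than a full date).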


end-to-end

Appears in 8 sentences as: End-to-end (1) end-to-end (7)
In Context-dependent Semantic Parsing for Time Expressions
  1. Experiments on benchmark datasets show that our approach outperforms previous state-of-the-art systems, with error reductions of 13% to 21% in end-to-end performance.
    Page 1, “Abstract”
  2. On these benchmark datasets, we present new state-of-the-art results, with error reductions of up to 28% for the detection task and 21% for the end-to-end task.
    Page 2, “Introduction”
  3. We compare to the state-of-the-art systems for end-to-end resolution (Strötgen and Gertz, 2013) and resolution given gold mentions (Bethard, 2013b), neither of which uses any machine learning techniques.
    Page 2, “Formal Overview”
  4. For end-to-end performance, value F1 is the primary metric.
    Page 8, “Experimental Setup”
  5. Comparison Systems We compare our system primarily to HeidelTime (Strötgen and Gertz, 2013), which is state of the art in the end-to-end task.
    Page 8, “Experimental Setup”
  6. End-to-end results Figure 4 shows development and test results for TempEval-3.
    Page 8, “Results”
  7. Precision vs. Recall Our probabilistic model of time expression resolution allows us to easily tradeoff precision and recall for end-to-end performance by varying the resolution probability threshold.
    Page 9, “Results”
  8. We also manually categorized all resolution errors for end-to-end performance with 10-fold cross validation of the TempEval-3 Dev dataset,
    Page 9, “Results”
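The threshold-based tradeoff in snippet 7 can be sketched as below. The scores and correctness labels are invented for illustration (they are not the paper's data), and the recall base here is simply the set of all correct predictions:

```python
# Vary a resolution probability threshold to trade precision for recall:
# raising the threshold drops low-confidence predictions, which raises
# precision at the cost of recall. `scored` pairs each prediction's model
# probability with whether that prediction is correct.

def precision_recall(scored, threshold):
    kept = [correct for prob, correct in scored if prob >= threshold]
    true_pos = sum(kept)
    precision = true_pos / len(kept) if kept else 1.0
    recall = true_pos / sum(correct for _, correct in scored)
    return precision, recall

scored = [(0.9, True), (0.8, True), (0.6, False), (0.4, True), (0.3, False)]
for t in (0.0, 0.5, 0.85):
    p, r = precision_recall(scored, t)
    print(f"threshold={t:.2f}  precision={p:.2f}  recall={r:.2f}")
```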


meaning representations

Appears in 6 sentences as: meaning representation (1) meaning representations (5)
In Context-dependent Semantic Parsing for Time Expressions
  1. We use a Combinatory Categorial Grammar to construct compositional meaning representations , while considering contextual cues, such as the document creation time and the tense of the governing verb, to compute the final time values.
    Page 1, “Abstract”
  2. For both tasks, we make use of a hand-engineered Combinatory Categorial Grammar (CCG) to construct a set of meaning representations that identify the time being described.
    Page 1, “Introduction”
  3. For example, this grammar maps the phrase “2nd Friday of July” to the meaning representation intersect(nth(2,friday),july), which encodes the set of all such days.
    Page 1, “Introduction”
  4. For both tasks, we define the space of possible compositional meaning representations Z, where each z ∈ Z defines a unique time expression e.
    Page 2, “Formal Overview”
  5. We build on a number of existing algorithmic ideas, including using CCGs to build meaning representations (Zettlemoyer and Collins, 2005; Zettlemoyer and Collins, 2007; Kwiatkowski et al., 2010; Kwiatkowski et al., 2011), building derivations to transform the output of the CCG parser based on context (Zettlemoyer and Collins, 2009), and using weakly supervised parameter updates (Artzi and Zettlemoyer, 2011; Artzi and Zettlemoyer, 2013b).
    Page 7, “Related Work”
  6. Both models used a Combinatory Categorial Grammar (CCG) to construct a set of possible temporal meaning representations .
    Page 9, “Conclusion”
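The representation intersect(nth(2, friday), july) in snippet 3 denotes a set of days. A hedged sketch of how such a representation grounds to concrete dates (the function below is an illustration, not the paper's lambda-calculus machinery):

```python
# Compute the n-th occurrence of a weekday in a given month/year, i.e.
# ground intersect(nth(2, friday), july) for one year to a calendar date.
import datetime

def nth_weekday(n, weekday, year, month):
    """n-th `weekday` (0 = Monday ... 4 = Friday) of the given month."""
    first = datetime.date(year, month, 1)
    offset = (weekday - first.weekday()) % 7  # days to the first such weekday
    return first + datetime.timedelta(days=offset + 7 * (n - 1))

# "2nd Friday of July", grounded in 2014 (the paper's publication year):
print(nth_weekday(2, 4, 2014, 7))  # -> 2014-07-11
```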


cross validation

Appears in 4 sentences as: cross validation (4)
In Context-dependent Semantic Parsing for Time Expressions
  1. Figure 7: Value precision vs. recall for 10-fold cross validation on TempEval-3 Dev and WikiWars Dev.
    Page 9, “Results”
  2. Figure 7 shows the precision vs. recall of the resolved values from 10-fold cross validation of TempEval-3 Dev and WikiWars Dev.
    Page 9, “Results”
  3. We also manually categorized all resolution errors for end-to-end performance with 10-fold cross validation of the TempEval-3 Dev dataset,
    Page 9, “Results”
  4. Figure 8: Resolution errors from 10-fold cross validation of the TempEval-3 Dev dataset.
    Page 9, “Results”


parse tree

Appears in 4 sentences as: parse tree (4) parse trees (1)
In Context-dependent Semantic Parsing for Time Expressions
  1. Figure 1: A CCG parse tree for the mention “one week ago.” The tree includes forward (>) and backward (<) application, as well as two type-shifting operations.
    Page 3, “Parsing Time Expressions”
  2. The lexicon pairs words with categories and the combinators define how to combine categories to create complete parse trees .
    Page 3, “Parsing Time Expressions”
  3. For example, Figure 1 shows a CCG parse tree for the phrase “one week ago.” The parse tree is read top to bottom, starting from assigning categories to words using the lexicon.
    Page 3, “Parsing Time Expressions”
  4. Model Let y be a context-dependent CCG parse, which includes a parse tree TR(y), a set of context operations CNTX(y) applied to the logical form at the root of the tree, a final context-dependent logical form LF(y), and a TIMEX3 value TM(y). Define φ(m, D, y) ∈ R^d to be a d-dimensional feature-vector representation and θ ∈ R^d to be a parameter vector.
    Page 6, “Resolution”


rule-based

Appears in 3 sentences as: rule-based (3)
In Context-dependent Semantic Parsing for Time Expressions
  1. While rule-based approaches provide a natural way to express expert knowledge, it is relatively difficult to en-
    Page 1, “Introduction”
  2. general, many different rule-based systems, e.g.
    Page 7, “Related Work”
  3. However, rule-based approaches dominated in resolution; none of the top performers attempted to learn to do resolution.
    Page 7, “Related Work”
