Abstract | We explore the extent to which high-resource manual annotations such as treebanks are necessary for the task of semantic role labeling (SRL).
Approaches | A typical pipeline consists of a POS tagger, dependency parser, and semantic role labeler. |
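Such a pipeline can be sketched as three stages, each consuming the previous stage's output. The toy components below are illustrative placeholders, not any real toolkit's interface:

```python
# Illustrative SRL pipeline: POS tagger -> dependency parser -> role labeler.
# All three components are hypothetical toy stand-ins for real models.

def pos_tag(tokens):
    """Toy POS tagger: verbs are hard-coded for the example."""
    verbs = {"eats", "sees"}
    return [(t, "VB" if t in verbs else "NN") for t in tokens]

def dependency_parse(tagged):
    """Toy parser: attach every non-verb token to the first verb."""
    head = next(i for i, (_, p) in enumerate(tagged) if p == "VB")
    return [(i, head if i != head else -1) for i in range(len(tagged))]

def label_roles(tagged, arcs):
    """Toy labeler: dependents before the verb get A0, after it A1."""
    head = next(i for i, (_, p) in enumerate(tagged) if p == "VB")
    roles = {}
    for i, h in arcs:
        if h == head:
            roles[tagged[i][0]] = "A0" if i < head else "A1"
    return roles

tokens = "cat eats fish".split()
tagged = pos_tag(tokens)
roles = label_roles(tagged, dependency_parse(tagged))
print(roles)  # {'cat': 'A0', 'fish': 'A1'}
```

The point of the sketch is only the data flow: errors made by the tagger propagate into the parse, and parse errors propagate into the role labels, which is what motivates the joint, marginalized alternative discussed in the Introduction.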
Introduction | The goal of semantic role labeling (SRL) is to identify predicates and arguments and label their semantic contribution in a sentence. |
Introduction | The problem of SRL for low-resource languages is an important one to solve, as solutions pave the way for a wide range of applications: Accurate identification of the semantic roles of entities is a critical step for any application sensitive to semantics, from information retrieval to machine translation to question answering. |
Introduction | We examine approaches in two settings: a joint setting, where we marginalize over latent syntax to find the optimal semantic role assignment, and a pipeline setting, where we first induce an unsupervised grammar.
Related Work | Our work builds upon research in both semantic role labeling and unsupervised grammar induction (Klein and Manning, 2004; Spitkovsky et al., 2010a).
Related Work | Previous related approaches to semantic role labeling include joint classification of semantic arguments (Toutanova et al., 2005; Johansson and Nugues, 2008), latent syntax induction (Boxwell et al., 2011; Naradowsky et al., 2012), and feature engineering for SRL (Zhao et al., 2009; Bjorkelund et al., 2009).
Related Work | (2013) extend this idea by coupling predictions of a dependency parser with predictions from a semantic role labeler. |
Abstract | Additionally, we report strong results on PropBank-style semantic role labeling in comparison to prior work. |
Argument Identification | From a frame lexicon, we look up the set of semantic roles Ry associated with y.
Argument Identification | By overtness, we mean the non-null instantiation of a semantic role in a frame-semantic parse.
Frame Identification with Embeddings | The frame lexicon stores the frames, their corresponding semantic roles, and the lexical units associated with each frame.
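As a rough sketch, such a lexicon can be modeled as a mapping from frame names to their roles and lexical units, supporting both the role lookup used in argument identification and the inverse lookup from a lexical unit to its candidate frames. The single entry below is illustrative, not taken from FrameNet:

```python
# Minimal frame lexicon: frame name -> (semantic roles, lexical units).
# The entry is illustrative; a real lexicon would hold hundreds of frames.
FRAME_LEXICON = {
    "Commerce_buy": {
        "roles": {"Buyer", "Goods", "Seller"},
        "lexical_units": {"buy.v", "purchase.v"},
    },
}

def roles_for_frame(frame):
    """Look up the set of semantic roles R_y associated with frame y."""
    return FRAME_LEXICON[frame]["roles"]

def frames_for_lu(lexical_unit):
    """Invert the lexicon: which frames list this lexical unit?"""
    return {f for f, e in FRAME_LEXICON.items()
            if lexical_unit in e["lexical_units"]}

print(sorted(roles_for_frame("Commerce_buy")))  # ['Buyer', 'Goods', 'Seller']
print(sorted(frames_for_lu("buy.v")))           # ['Commerce_buy']
```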
Introduction | According to the theory of frame semantics (Fillmore, 1982), a semantic frame represents an event or scenario and possesses frame elements (or semantic roles) that participate in it.
Introduction | Most work on frame-semantic parsing divides the task into two major subtasks: frame identification, namely the disambiguation of a given predicate to a frame, and argument identification (or semantic role labeling), the analysis of words and phrases in the sentential context that satisfy the frame’s semantic roles (Das et al., 2010; Das et al., 2014). Here, we focus on the first subtask of frame identification for given predicates; we use our novel method (§3) in conjunction with a standard argument identification model (§4) to perform full frame-semantic parsing.
Introduction | Second, we present results on PropBank-style semantic role labeling (Palmer et al., 2005; Meyers et al., 2004; Marquez et al., 2008) that approach strong baselines and are on par with the prior state of the art (Punyakanok et al., 2008).
Overview | Since the CoNLL shared tasks (Carreras and Marquez, 2004; Carreras and Marquez, 2005) on PropBank semantic role labeling (SRL), it has been treated as an important NLP problem.
Overview | PropBank The PropBank project (Palmer et al., 2005) is another popular resource related to semantic role labeling. |
Overview | Like FrameNet, it also has a lexical database that stores type information about verbs, in the form of sense frames and the possible semantic roles each frame could take. |
Introduction | Rather than introducing reordering models on either the word level or the translation phrase level, we propose a unified approach to modeling reordering on the linguistic unit level, e.g., syntactic constituents and semantic roles.
Introduction | The reordering unit falls into multiple granularities, from single words to more complex constituents and semantic roles, and often crosses translation phrases.
Introduction | To show the effectiveness of our reordering models, we integrate both syntactic constituent reordering models and semantic role reordering models into a state-of-the-art HPB system (Chiang, 2007; Dyer et al., 2010). |
Unified Linguistic Reordering Models | As mentioned earlier, the linguistic reordering unit is the syntactic constituent for syntactic reordering, and the semantic role for semantic reordering. |
Unified Linguistic Reordering Models | Note that we refer to all core arguments, adjuncts, and predicates as semantic roles; thus we say the PAS in Figure 1 has 4 roles.
Unified Linguistic Reordering Models | Treating the two forms of reorderings in a unified way, the semantic reordering model is obtainable by regarding a PAS as a CFG rule and considering a semantic role as a constituent. |
Introduction | XMEANT is obtained by (1) using simple lexical translation probabilities, instead of the monolingual context vector model used in MEANT, for computing the similarities between semantic role fillers, and (2) incorporating bracketing ITG constraints for word alignment within the semantic role fillers.
Related Work | MEANT (Lo et al., 2012) is the weighted f-score over the matched semantic role labels of the automatically aligned semantic frames and role fillers; it outperforms BLEU, NIST, METEOR, WER, CDER and TER in correlation with human adequacy judgments.
Related Work | MEANT is easily portable to other languages, requiring only an automatic semantic parser and a large monolingual corpus in the output language for identifying the semantic structures and the lexical similarity between the semantic role fillers of the reference and translation. |
Related Work | There is a total of 12 weights for the set of semantic role labels in MEANT as defined in Lo and Wu (2011b). |
XMEANT: a cross-lingual MEANT | The weights can also be estimated in an unsupervised fashion using the relative frequency of each semantic role label in the foreign input, as in UMEANT.
XMEANT: a cross-lingual MEANT | To aggregate individual lexical translation probabilities into phrasal similarities between cross-lingual semantic role fillers, we compared two natural approaches to generalizing MEANT’s method of comparing semantic parses, as described below. |
XMEANT: a cross-lingual MEANT | 3.1 Applying MEANT’s f-score within semantic role fillers |
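The two pieces described above can be sketched together: a phrasal f-score within a pair of cross-lingual role fillers, built from word-level translation probabilities, and UMEANT-style weights taken as relative frequencies of role labels. The tiny translation table and label list below are invented for illustration; the actual MEANT/XMEANT formulas and weight sets are defined in the cited papers:

```python
# Sketch: aggregate lexical translation probabilities into a phrasal
# precision/recall/f-score between cross-lingual semantic role fillers,
# and estimate role-label weights by relative frequency (UMEANT-style).
# The translation table and role labels are toy values, not real data.
from collections import Counter

P_TRANS = {("gato", "cat"): 0.9, ("come", "eats"): 0.8, ("pez", "fish"): 0.7}

def phrasal_f(src_tokens, tgt_tokens):
    """F-score of best per-token translation probabilities across a filler pair."""
    prec = sum(max(P_TRANS.get((s, t), 0.0) for s in src_tokens)
               for t in tgt_tokens) / len(tgt_tokens)
    rec = sum(max(P_TRANS.get((s, t), 0.0) for t in tgt_tokens)
              for s in src_tokens) / len(src_tokens)
    return 0.0 if prec + rec == 0 else 2 * prec * rec / (prec + rec)

def unsupervised_weights(role_labels):
    """UMEANT-style weights: relative frequency of each role label."""
    counts = Counter(role_labels)
    total = sum(counts.values())
    return {r: c / total for r, c in counts.items()}

print(round(phrasal_f(["gato"], ["cat"]), 3))  # 0.9
print(unsupervised_weights(["A0", "A1", "A1", "AM-TMP"]))
# {'A0': 0.25, 'A1': 0.5, 'AM-TMP': 0.25}
```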
Generating On-the-fly Knowledge | A path is considered as joining two germs in a DCS tree, where a germ is defined as a specific semantic role of a node. |
Generating On-the-fly Knowledge | The abstract denotation of a germ is defined in a top-down manner: for the root node p of a DCS tree 𝒯, we define its denotation [[p]]_𝒯 as the denotation of the entire tree [[𝒯]]; for a non-root node τ and its parent node σ, let the edge (σ, τ) be labeled by semantic roles (r, r′); then define
The Idea | The labels on both ends of an edge, such as SUBJ (subject) and OBJ (object), are considered as semantic roles of the corresponding nodes.
The Idea | where read, student and book denote sets represented by these words respectively, and w_r represents the set w considered as the domain of the semantic role r (e.g.
The Idea | The semantic role ARG is specifically defined for denoting nominal predicates.
Abstract | Specifically, this model, trained on part-of-speech tags, represents the preferred locations of semantic roles relative to a verb as Gaussian mixtures over real numbers. |
Assumptions | The model presented here learns a single, non-recursive ordering for the semantic roles in each sentence relative to the verb, since several studies have suggested that early child grammars may consist of simple linear grammars that are dictated by semantic roles (Diessel and Tomasello, 2001; Jackendoff and Wittenberg, in press).
Assumptions | how many semantic roles it confers). |
Assumptions | Since infants infer the number of semantic roles, this work further assumes they already have expectations about where these roles tend to be realized in sentences, if they appear.
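The Abstract's idea of representing each role's preferred location relative to the verb as a Gaussian over real-valued positions can be sketched with a single mixture component per role. The means and variances below are invented for illustration; the cited model learns them from child-directed speech:

```python
# Sketch: score candidate role orderings under per-role Gaussians on the
# position relative to the verb (verb at 0; one component per role, for
# simplicity). The (mean, std) parameters are invented, not learned.
import math

ROLE_POSITION = {"agent": (-1.5, 1.0), "patient": (1.5, 1.0)}  # (mean, std)

def log_density(x, mean, std):
    """Log of the 1-D Gaussian density at x."""
    return -0.5 * ((x - mean) / std) ** 2 - math.log(std * math.sqrt(2 * math.pi))

def score_assignment(assignment):
    """Log-probability of a role -> position assignment."""
    return sum(log_density(pos, *ROLE_POSITION[role])
               for role, pos in assignment.items())

svo = {"agent": -1, "patient": 1}   # agent before the verb, patient after
ovs = {"agent": 1, "patient": -1}   # reversed linear order
print(score_assignment(svo) > score_assignment(ovs))  # True
```

Under these toy parameters the model prefers the agent-verb-patient order, which is the kind of linear preference the learner is assumed to acquire.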
Background | This finding suggests both that learners will ignore canonical structure in favor of using all possible arguments and that children have a bias to assign a unique semantic role to each argument. |
Background | BabySRL is a computational model of semantic role acquisition with a set of assumptions similar to those of the current work.
Background | to acquire semantic role labelling while still exhibiting 1-1 role bias. |
Introduction | In particular, the model described in this paper takes chunked child-directed speech as input and learns orderings over semantic roles . |
Class Analyses | We believe these analyses may provide a comprehensive characterization of particular semantic roles that can be used for various NLP applications. |
Class Analyses | Since dictionary publishers have not previously devoted much effort to analyzing preposition behavior, we believe PDEP may serve an important role, particularly for various NLP applications in which semantic role labeling is important.
Class Analyses | We expect that desired improvements will come from usage in various NLP tasks, particularly word-sense disambiguation and semantic role labeling. |
Introduction | (2013); Srikumar and Roth (2011)) have shown the value of prepositional phrases in joint modeling with verbs for semantic role labeling. |
Introduction | Section 5 describes how we can use PDEP for the analysis of semantic role and semantic relation inventories. |
See http://clg.wlv.ac.uk/projects/DVC | The occurrence of these invalid instances provides an opportunity for improving taggers, parsers, and semantic role labelers.
Abstract | In the described system, semantic role labels of source sentences are used in a domain-independent manner to generate both questions and answers related to the source sentence. |
Approach | SENNA provides the tokenization, POS tagging, syntactic constituency parsing, and semantic role labeling used in the system.
Approach | SENNA produces separate semantic role labels for each predicate in the sentence. |
Approach | The most commonly used semantic roles are A0, A1 and A2, as well as the ArgM modifiers. |
Linguistic Challenges | For example, in “Plant roots and bacterial decay use carbon dioxide in the process of respiration,” the word use was classified as NN, leaving no predicate and no semantic role labels in this sentence.
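A practical consequence of this failure mode is that the generation system must detect sentences for which the labeler produced no usable predicate and skip them. A minimal sketch, with a hypothetical per-predicate frame structure standing in for SENNA's actual output format:

```python
# Sketch: filter out sentences where the semantic role labeler found no
# predicate (e.g., a verb mis-tagged as NN). The frame dicts below are a
# hypothetical stand-in for SENNA's per-predicate output, not its real API.
def question_candidates(srl_frames):
    """Return frames usable for question generation; [] if no predicate."""
    return [f for f in srl_frames if f.get("predicate") and f.get("roles")]

good = [{"predicate": "use",
         "roles": {"A0": "Plant roots and bacterial decay",
                   "A1": "carbon dioxide"}}]
bad = []  # labeler produced no frames: 'use' was tagged NN

print(len(question_candidates(good)), len(question_candidates(bad)))  # 1 0
```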
Related Work | (2013), which used semantic role labeling to identify patterns in the source text from which questions can be generated. |
Baselines | Negation focus identification in the *SEM 2012 shared task is restricted to verbal negations annotated with MNEG in PropBank, with only the constituent belonging to a semantic role selected as the negation focus.
Baselines | For comparison, we choose the state-of-the-art system described in Blanco and Moldovan (2011), which employed various kinds of syntactic features and semantic role features, as one of our baselines. |
Baselines | > Semantic features: the syntactic label of semantic role A1; whether A1 contains POS tag DT, JJ, PRP, CD, RB, VB, and WP, as defined in Blanco and Moldovan (2011); whether A1 contains token any, anybody, anymore, anyone, anything, anytime, anywhere, certain, enough, full, many, much, other, some, specifics, too, and until, as defined in Blanco and Moldovan (2011); the syntactic label of the first semantic role in the sentence; the semantic label of the last semantic role in the sentence; the thematic role for A0/A1/A2/A3/A4 of the negated predicate.
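The A1-based indicator features in that list can be sketched as simple set intersections. The trigger token and POS lists are copied from the text above; the tokenized A1 representation is a toy stand-in for whatever the real feature extractor consumes:

```python
# Sketch of the A1-based features described above: boolean indicators for
# trigger POS tags and trigger tokens inside semantic role A1. Lists are
# taken from the feature description; the input format is illustrative.
TRIGGER_TOKENS = {"any", "anybody", "anymore", "anyone", "anything", "anytime",
                  "anywhere", "certain", "enough", "full", "many", "much",
                  "other", "some", "specifics", "too", "until"}
TRIGGER_POS = {"DT", "JJ", "PRP", "CD", "RB", "VB", "WP"}

def a1_features(a1_tokens, a1_pos_tags):
    """Boolean features over the tokens and POS tags of semantic role A1."""
    return {
        "a1_has_trigger_pos": bool(TRIGGER_POS & set(a1_pos_tags)),
        "a1_has_trigger_token": bool(TRIGGER_TOKENS
                                     & {t.lower() for t in a1_tokens}),
    }

print(a1_features(["any", "reason"], ["DT", "NN"]))
# {'a1_has_trigger_pos': True, 'a1_has_trigger_token': True}
```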
Introduction | Current studies (e.g., Blanco and Moldovan, 2011; Rosenberg and Bergler, 2012) resort to various kinds of intra-sentence information, such as lexical features, syntactic features, semantic role features and so on, ignoring less-obvious inter-sentence information.
Conclusion | As applications of the resulting semantic frames and verb classes, we plan to integrate them into syntactic parsing, semantic role labeling and verb sense disambiguation. |
Experiments and Evaluations | available on the web site. This frame data was induced from the BNC and consists of 1,200 frames and 400 semantic roles.
Related Work | (2012) extended the model of Titov and Klementiev (2012), which is an unsupervised model for inducing semantic roles, to jointly induce semantic roles and frames across verbs using the Chinese Restaurant Process (Aldous, 1985). |