Abstract | We explore the extent to which high-resource manual annotations such as treebanks are necessary for the task of semantic role labeling (SRL). |
Approaches | A typical pipeline consists of a POS tagger, dependency parser, and semantic role labeler.
Approaches | Dependency-based semantic role labeling can be described as a simple structured prediction problem: the predicted structure is a labeled directed graph, where nodes correspond to words in the sentence. |
Approaches | Semantic Dependency Model As described above, semantic role labeling can be cast as a structured prediction problem where the structure is a labeled semantic dependency graph. |
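The structured-prediction view described above can be made concrete as a labeled directed graph whose nodes are words and whose labeled arcs connect predicates to arguments. The following is a minimal illustrative sketch; the function name, arc encoding, and example sentence are mine, not taken from the papers above.

```python
# Sketch of SRL as a labeled semantic dependency graph: nodes are the
# words of the sentence; a labeled arc links a predicate to an argument.

def build_srl_graph(words, arcs):
    """words: list of tokens; arcs: (predicate_idx, argument_idx, role) triples."""
    graph = {i: [] for i in range(len(words))}
    for pred, arg, role in arcs:
        graph[pred].append((arg, role))
    return graph

words = ["John", "gave", "Mary", "a", "book"]
arcs = [(1, 0, "A0"), (1, 2, "A2"), (1, 4, "A1")]
graph = build_srl_graph(words, arcs)
# graph[1] lists the arguments of the predicate "gave"
```

Predicting such a graph jointly, rather than one arc at a time, is what makes the problem structured.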
Discussion and Future Work | We have compared various approaches for low-resource semantic role labeling at the state-of-the-art level. |
Experiments | To compare to prior work (i.e., submissions to the CoNLL-2009 Shared Task), we also consider the joint task of semantic role labeling and predicate sense disambiguation. |
Introduction | The goal of semantic role labeling (SRL) is to identify predicates and arguments and label their semantic contribution in a sentence. |
Related Work | Our work builds upon research in both semantic role labeling and unsupervised grammar induction (Klein and Manning, 2004; Spitkovsky et al., 2010a).
Related Work | Previous related approaches to semantic role labeling include joint classification of semantic arguments (Toutanova et al., 2005; Johansson and Nugues, 2008), latent syntax induction (Boxwell et al., 2011; Naradowsky et al., 2012), and feature engineering for SRL (Zhao et al., 2009; Bjorkelund et al., 2009).
Related Work | (2013) extend this idea by coupling predictions of a dependency parser with predictions from a semantic role labeler.
Abstract | Semantic role labels are the representation of the grammatically relevant aspects of a sentence meaning. |
Abstract | In this paper, we compare two annotation schemes, PropBank and VerbNet, in a task-independent, general way, analysing how well they fare in capturing the linguistic generalisations that are known to hold for semantic role labels, and consequently how well they grammaticalise aspects of meaning.
Introduction | Semantic role labels are the representation of the grammatically relevant aspects of a sentence meaning. |
Introduction | The annotated PropBank corpus, and therefore implicitly its role labels inventory, has been largely adopted in NLP because of its exhaustiveness and because it is coupled with syntactic annotation, properties that make it very attractive for the automatic learning of these roles and their further applications to NLP tasks.
Introduction | (2007) show that augmenting PropBank labels with VerbNet labels increases generalisation of the less frequent labels, such as ARG2, to new verbs and new domains; they also show that PropBank labels perform better overall in a semantic role labelling task.
Materials and Method | Verbal predicates in the Penn Treebank (PTB) receive a label REL and their arguments are annotated with abstract semantic role labels A0-A5 or AA for those complements of the predicative verb that are considered arguments, while those complements of the verb labelled with a semantic functional label in the original PTB receive the composite semantic role label AM-X, where X stands for labels such as LOC, TMP or ADV, for locative, temporal and adverbial modifiers respectively. |
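The label inventory described above can be sketched as two simple predicates over label strings. This is a hedged illustration of the scheme as the text describes it; the function names are mine, and the modifier-suffix examples are only the subset the text names, not the full inventory.

```python
# Sketch of the PropBank-style inventory described above: abstract
# core-argument labels A0-A5 plus AA, and composite modifier labels
# AM-X, where X is a label such as LOC, TMP, or ADV.

CORE_LABELS = {"A0", "A1", "A2", "A3", "A4", "A5", "AA"}

def is_core_argument(label):
    """True for the abstract core-argument labels of a verbal predicate."""
    return label in CORE_LABELS

def is_modifier(label):
    """True for composite modifier labels such as AM-LOC, AM-TMP, AM-ADV."""
    return label.startswith("AM-")
```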
Abstract | In this paper we describe an unsupervised method for semantic role induction which holds promise for relieving the data acquisition bottleneck associated with supervised role labelers.
Abstract | By combining role induction with a rule-based component for argument identification we obtain an unsupervised end-to-end semantic role labeling system. |
Conclusions | Coupled with a rule-based component for automatically identifying argument candidates our split-merge algorithm forms an end-to-end system that is capable of inducing role labels without any supervision. |
Experimental Setup | Although the dataset provides annotations for verbal and nominal predicate-argument constructions, we only considered the former, following previous work on semantic role labeling (Marquez et al., 2008). |
Experimental Setup | This baseline has been previously used as point of comparison by other unsupervised semantic role labeling systems (Grenager and Manning, 2006; Lang and Lapata, 2010) and shown difficult to outperform. |
Introduction | Indeed, the analysis produced by existing semantic role labelers has been shown to benefit a wide spectrum of applications ranging from information extraction (Surdeanu et al., 2003) and question answering (Shen and Lapata, 2007), to machine translation (Wu and Fung, 2009) and summarization (Melli et al., 2005). |
Introduction | …semantic role labeling as a supervised learning problem.
Introduction | Unfortunately, the reliance on role-annotated data, which is expensive and time-consuming to produce for every language and domain, presents a major bottleneck to the widespread application of semantic role labeling.
Learning Setting | We follow the general architecture of supervised semantic role labeling systems. |
Related Work | Swier and Stevenson (2004) induce role labels with a bootstrapping scheme where the set of labeled instances is iteratively expanded using a classifier trained on previously labeled instances. |
Abstract | Semantic Role Labeling (SRL) has become one of the standard tasks of natural language processing and proven useful as a source of information for a number of other applications. |
Background and Motivation | Semantic role labeling has proven useful in many natural language processing tasks, such as question answering (Shen and Lapata, 2007; Kaisser and Webber, 2007), textual entailment (Sammons et al., 2009), machine translation (Wu and Fung, 2009; Liu and Gildea, 2010; Gao and Vogel, 2011) and dialogue systems (Basili et al., 2009; van der Plas et al., 2009). |
Background and Motivation | A number of approaches to the construction of semantic role labeling models for new languages |
Background and Motivation | In this work we construct a shared feature representation for a pair of languages, employing cross-lingual representations of syntactic and lexical information, train a semantic role labeling model on one language and apply it to the other one. |
Conclusion | We have considered the cross-lingual model transfer approach as applied to the task of semantic role labeling and observed that for closely related languages it performs comparably to annotation projection approaches. |
Related Work | Unsupervised semantic role labeling methods (Lang and Lapata, 2010; Lang and Lapata, 2011; Titov and Klementiev, 2012a; Lorenzo and Cerisara, 2012) also constitute an alternative to cross-lingual model transfer. |
Setup | The purpose of the study is not to develop yet another semantic role labeling system — any existing SRL system can (after some modification) be used in this setup — but to assess the practical applicability of cross-lingual model transfer to this problem, compare it against the alternatives and identify its strong/weak points depending on a particular setup.
Setup | 2.1 Semantic Role Labeling Model |
Setup | We consider the dependency-based version of semantic role labeling as described in Hajic et al. |
Abstract | Additionally, we report strong results on PropBank-style semantic role labeling in comparison to prior work. |
Conclusion | Finally, we presented results on PropBank-style semantic role labeling with a system that included the task of automatic verb frame identification, in tune with the FrameNet literature; we believe that such a system produces more interpretable output, both from the perspective of human understanding as well as downstream applications, than pipelines that are oblivious to the verb frame, only focusing on argument analysis. |
Introduction | Most work on frame-semantic parsing has usually divided the task into two major subtasks: frame identification, namely the disambiguation of a given predicate to a frame, and argument identification (or semantic role labeling), the analysis of words and phrases in the sentential context that satisfy the frame’s semantic roles (Das et al., 2010; Das et al., 2014). Here, we focus on the first subtask of frame identification for given predicates; we use our novel method (§3) in conjunction with a standard argument identification model (§4) to perform full frame-semantic parsing.
Introduction | Second, we present results on PropBank-style semantic role labeling (Palmer et al., 2005; Meyers et al., 2004; Marquez et al., 2008), that approach strong baselines, and are on par with prior state of the art (Punyakanok et al., 2008). |
Overview | 2004; Carreras and Marquez, 2005) on PropBank semantic role labeling (SRL), it has been treated as an important NLP problem. |
Overview | PropBank The PropBank project (Palmer et al., 2005) is another popular resource related to semantic role labeling.
Overview | Generic core role labels (of which there are seven, namely A0-A5 and AA) for the verb frames are marked in the figure. A key difference between the two annotation systems is that PropBank uses a local frame inventory, where frames are predicate-specific.
Abstract | We apply our simplification system to semantic role labeling (SRL). |
Experiments | We evaluated our system using the setup of the CoNLL 2005 semantic role labeling task. Thus, we trained on Sections 2-21 of PropBank and used Section 24 as development data.
Introduction | In semantic role labeling (SRL), given a sentence containing a target verb, we want to label the semantic arguments, or roles, of that verb. |
Introduction | Current semantic role labeling systems rely primarily on syntactic features in order to identify and |
Introduction | Specifically, we train our model discriminatively to predict the correct role labeling assignment given an input sentence, treating the simplification as a hidden variable. |
Labeling Simple Sentences | …, obtaining a set of possible role labelings.
Labeling Simple Sentences | Also, for a sentence s there may be several simple labelings that lead to the same role labeling.
Probabilistic Model | This allows us to learn that “give” has a preference for the labeling {ARG0 = Subject NP, ARG1 = Postverb NP2, ARG2 = Postverb NP1}. Our final features are analogous to those used in semantic role labeling, but greatly simplified due to our use of simple sentences: head word of the constituent; category (i.e., constituent label); and position in the simple sentence.
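The three simplified features named above (head word, constituent category, position in the simple sentence) can be sketched as a tiny extractor. The dictionary layout and names below are a hypothetical encoding for illustration, not the authors' implementation.

```python
# Hypothetical encoding of the three simplified features described above.

def extract_features(constituent):
    """constituent: dict with 'head', 'label', and 'position' keys."""
    return {
        "head": constituent["head"],          # head word of the constituent
        "category": constituent["label"],     # constituent label, e.g. "NP"
        "position": constituent["position"],  # e.g. "Subject" or "Postverb2"
    }

feats = extract_features({"head": "book", "label": "NP", "position": "Postverb2"})
```

Working over simple sentences is what lets position values like "Subject" or "Postverb2" stand in for the elaborate syntactic-path features of standard SRL systems.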
Related Work | Another area of related work in the semantic role labeling literature is that on tree kernels (Moschitti, 2004; Zhang et al., 2007). |
Abstract | In this paper we focus on the parsing and argument-identification steps that precede Semantic Role Labeling (SRL) training. |
Abstract | The results show that proposed shallow representations of sentence structure are robust to reductions in parsing accuracy, and that the contribution of alternative representations of sentence structure to successful semantic role labeling varies with the integrity of the parsing and argument-identification stages. |
Conclusion and Future Work | have the luxury of treating part-of-speech tagging and semantic role labeling as separable tasks. |
Introduction | In this paper we present experiments with an automatic system for semantic role labeling (SRL) that is designed to model aspects of human language acquisition. |
Introduction | Semantic Role Labeling
Introduction | Previous computational experiments with a system for automatic semantic role labeling (BabySRL: (Connor et al., 2008)) showed that it is possible to learn to assign basic semantic roles based on the shallow sentence representations proposed by the structure-mapping view. |
Model | We model language learning as a Semantic Role Labeling (SRL) task (Carreras and Marquez, 2004). |
Model | The stages are: (1) Parsing the sentence, (2) Identifying potential predicates and arguments based on the parse, (3) Classifying role labels for each potential argument relative to a predicate, (4) Applying constraints to find the best labeling of arguments for a sentence.
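The four stages above compose naturally as a function pipeline. In this sketch every component is a hypothetical stand-in passed in as a callable, not the model's actual implementation.

```python
# Hypothetical sketch of the four-stage SRL pipeline described above.

def srl_pipeline(sentence, parse, identify, classify, constrain):
    tree = parse(sentence)        # (1) parse the sentence
    pred_args = identify(tree)    # (2) identify predicates and arguments
    scored = classify(pred_args)  # (3) classify role labels per argument
    return constrain(scored)      # (4) constraints pick the best labeling
```

The final constraint stage typically enforces global properties, e.g. that each core role is assigned at most once per predicate.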
Model | The SRL classifier starts with noisy, largely unsupervised argument identification, and receives feedback based on annotation in the PropBank style; in training, each word identified as an argument receives the true role label of the phrase that word is part of.
Abstract | Semantic role labeling techniques are typically trained on newswire text, and in tests their performance on fiction is as much as 19% worse than their performance on newswire text. |
Abstract | We investigate techniques for building open-domain semantic role labeling systems that approach the ideal of a train-once, use-anywhere system. |
Abstract | We leverage recently-developed techniques for learning representations of text using latent-variable language models, and extend these techniques to ones that provide the kinds of features that are useful for semantic role labeling.
Introduction | In recent semantic role labeling (SRL) competitions such as the shared tasks of CoNLL 2005 and CoNLL 2008, supervised SRL systems have been trained on newswire text, and then tested on both an in-domain test set (Wall Street Journal text) and an out-of-domain test set (fiction). |
Introduction | We test our open-domain semantic role labeling system using data from the CoNLL 2005 shared task (Carreras and Marquez, 2005). |
Introduction | Owing to the established difficulty of the Brown test set and the different domains of the Brown test and WSJ training data, this dataset makes for an excellent testbed for open-domain semantic role labeling . |
Abstract | We present the results of evaluating translation utility by measuring the accuracy within a semantic role labeling (SRL) framework. |
Abstract | Finally, we show that replacing the human semantic role labelers with an automatic shallow semantic parser in our proposed metric yields an approximation that is about 80% as closely correlated with human judgment as HTER, at an even lower cost—and is still far better correlated than n-gram based evaluation metrics. |
Abstract | Table 7: Inter-annotator agreement rate on role classification (matching of role label associated with matched word span) |
Assumptions | Finally, following the finding by Gertner and Fisher (2012) that children interpret intransitives with conjoined subjects as transitives, this work assumes that semantic roles have a one-to-one correspondence with nouns in a sentence (similarly used as a soft constraint in the semantic role labelling work of Titov and Klementiev, 2012). |
Background | to acquire semantic role labelling while still exhibiting 1-1 role bias. |
Comparison to BabySRL | The acquisition of semantic role labelling (SRL) by the BabySRL model (Connor et al., 2008; Connor et al., 2009; Connor et al., 2010) bears many similarities to the current work and is, to our knowledge, the only comparable line of inquiry to the current one. |
Comparison to BabySRL | The primary function of BabySRL is to model the acquisition of semantic role labelling while making an idiosyncratic error which infants also make (Gertner and Fisher, 2012), the 1-1 role bias error (John and Mary gorped interpreted as John gorped Mary).
Comparison to BabySRL | (2008) demonstrate that a supervised perceptron classifier, based on positional features and trained on the silver role label annotations of the BabySRL corpus, manifests 1-1 role bias errors. |
Discussion | Training significantly improves role labelling in the case of object-extractions, which improves the overall accuracy of the model. |
Evaluation | These annotations were obtained by automatically semantic role labelling portions of CHILDES with the system of Punyakanok et al. |
Abstract | Current Semantic Role Labeling technologies are based on inductive algorithms trained over large scale repositories of annotated examples. |
Empirical Analysis | The aim of the evaluation is to measure the reachable accuracy of the simple model proposed and to compare its impact over in-domain and out-of-domain semantic role labeling tasks. |
Introduction | Semantic Role Labeling (SRL) is the task of automatic recognition of individual predicates together with their major roles (e.g. |
Introduction | Semantic Role Labeling |
Introduction | More recently, the state-of-the-art frame-based semantic role labeling system discussed in (Johansson and Nugues, 2008b) reports a 19% drop in accuracy for the argument classification task when a different test domain is targeted (i.e.
Related Work | First, local models are applied to produce role labels over individual arguments; then the joint model is used to decide the entire argument sequence among the set of the n-best competing solutions.
Abstract | This paper presents an empirical study on the robustness and generalization of two alternative role sets for semantic role labeling: PropBank numbered roles and VerbNet thematic roles.
Conclusion and Future work | Assuming that application-based scenarios would prefer dealing with general thematic role labels, we explore the best way to label a text with VerbNet thematic roles, namely, by training directly on VerbNet roles or by using the PropBank SRL system and performing a posterior mapping into thematic roles.
Corpora and Semantic Role Sets | Each verb has a frameset listing its allowed role labels and mapping each numbered role to an English-language description of its semantics. |
Experimental Setting 3.1 Datasets | Our basic Semantic Role Labeling system represents the tagging problem as a Maximum Entropy Markov Model (MEMM). |
Introduction | Semantic Role Labeling is the problem of analyzing clause predicates in open text by identifying arguments and tagging them with semantic labels indicating the role they play with respect to the verb. |
Introduction | Second, assuming that application scenarios would prefer dealing with general thematic role labels, we explore the best way to label a text with thematic roles, namely, by training directly on VerbNet roles or by using the PropBank SRL system and performing a posterior mapping into thematic roles.
Abstract | This paper presents a novel deterministic algorithm for implicit Semantic Role Labeling . |
Conclusions and Future Work | In this work we have presented a robust deterministic approach for implicit Semantic Role Labeling . |
Conclusions and Future Work | As input it only needs the document with explicit semantic role labeling and Super-Sense annotations. |
Introduction | Traditionally, Semantic Role Labeling (SRL) systems have focused on searching the fillers of those explicit roles appearing within sentence boundaries (Gildea and Jurafsky, 2000, 2002; Carreras and Marquez, 2005; Surdeanu et al., 2008; Hajic et al., 2009).
Introduction | for Implicit Semantic Role Labelling |
Related Work | SEMAFOR (Chen et al., 2010) is a supervised system that extended an existing semantic role labeler to enlarge the search window to other sentences, replacing the features defined for regular arguments with two new semantic features. |
Abstract | In our experiments, using the NLP tasks of semantic role labeling and entity-relation extraction, we demonstrate that with the margin-based algorithm, we need to call the inference engine only for a third of the test examples. |
Conclusion | We show via experiments that these methods individually give a reduction in the number of calls made to an inference engine for semantic role labeling and entity-relation extraction. |
Experiments and Results | We report the performance of inference on two NLP tasks: semantic role labeling and the task of extracting entities and relations from text. |
Experiments and Results | Semantic Role Labeling (SRL) Our first task is that of identifying arguments of verbs in a sentence and annotating them with semantic roles (Gildea and Jurafsky, 2002; Palmer et al., 2010) . |
Experiments and Results | For the semantic role labeling task, we need to call the solver only for one in six examples while for the entity-relations task, only one in four examples require a solver call. |
Introduction | We evaluate the two schemes and their combination on two NLP tasks where the output is encoded as a structure: PropBank semantic role labeling (Punyakanok et al., 2008) and the problem of recognizing entities and relations in text (Roth and Yih, 2007; Kate and Mooney, 2010). |
Abstract | Our findings suggest that selectional preferences have potential for improving a full system for Semantic Role Labeling . |
Experimental Setting | The test set contains 4,134 pairs (covering 505 different predicates) to be classified into the appropriate role label.
Introduction | Semantic Role Labeling (SRL) systems usually approach the problem as a sequence of two subtasks: argument identification and classification. |
Introduction | This first step allows us to analyze the potential of selectional preferences as a source of semantic knowledge for discriminating among different role labels.
Selectional Preference Models | Given a target sentence where a predicate and several potential argument and adjunct head words occur, the goal is to assign a role label to each of the head words. |
Conclusions | This work adds unsupervised semantic role labeling to the list of NLP tasks benefiting from the crosslingual induction setting. |
Introduction | Semantic role labeling (SRL) (Gildea and Jurafsky, 2002) involves predicting predicate argument structure, i.e.
Introduction | For example, in our sentences (a) and (b) representing so-called blame alternation (Levin, 1993), the same information is conveyed in two different ways and a successful model of semantic role labeling needs to learn the corresponding linkings from the data. |
Problem Definition | As we mentioned in the introduction, in this work we focus on the labeling stage of semantic role labeling.
Problem Definition | In sum, we treat the unsupervised semantic role labeling task as clustering of argument keys. |
Approach to Semantic Representation of Negation | Role labels (A0, AM-TMP, etc.)
Approach to Semantic Representation of Negation | Before annotation began, all semantic information was removed by mapping all role labels to ARG. |
Learning Algorithm | Because PropBank adds semantic role annotation on top of the Penn Treebank, we have available syntactic annotation and semantic role labels for all instances.
Negation in Natural Language | State-of-the-art semantic role labelers (e.g., the ones trained over PropBank) do not completely represent the meaning of negated statements. |
Negation in Natural Language | For all statements s, current role labelers would only encode it is not the case that s. However, examples (1–7)
Background | This paper addresses two areas of work in event semantics, narrative event chains and semantic role labeling . |
Background | 2.2 Semantic Role Labeling |
Background | Most work on semantic role labeling, however, is supervised, using PropBank (Palmer et al., 2005), FrameNet (Baker et al., 1998) or VerbNet (Kipper et al., 2000) as gold standard roles and training data.
Discussion | Our argument learning algorithm not only performs unsupervised induction of situation-specific role classes, but the resulting roles and linking structures may also offer the possibility of (unsupervised) FrameNet-style semantic role labeling . |
Frames and Roles | Most previous work on unsupervised semantic role labeling assumes that the set of possible |
Abstract | We describe a semantic role labeling system that makes primary use of CCG-based features. |
Abstract | This analysis also suggests that simultaneous incremental parsing and semantic role labeling may lead to performance gains in both tasks. |
Introduction | Semantic Role Labeling (SRL) is the process of assigning semantic roles to strings of words in a sentence according to their relationship to the semantic predicates expressed in the sentence. |
Introduction | An effective semantic role labeling system must recognize the differences between different configurations: |
Results | The results for gold standard parses are comparable to the winning system of the CoNLL 2005 shared task on semantic role labeling (Punyakanok et al., 2008). |
Related Work | MEANT (Lo et al., 2012) is the weighted f-score over the matched semantic role labels of the automatically aligned semantic frames and role fillers, and it outperforms BLEU, NIST, METEOR, WER, CDER and TER in correlation with human adequacy judgments.
Related Work | There is a total of 12 weights for the set of semantic role labels in MEANT as defined in Lo and Wu (2011b). |
Related Work | For UMEANT (Lo and Wu, 2012), they are estimated in an unsupervised manner using relative frequency of each semantic role label in the references and thus UMEANT is useful when human judgments on adequacy of the development set are unavailable. |
XMEANT: a cross-lingual MEANT | The weights can also be estimated in unsupervised fashion using the relative frequency of each semantic role label in the foreign input, as in UMEANT. |
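The unsupervised weight estimation described above reduces to relative-frequency counting over role labels. The sketch below illustrates the idea under that reading; the function name is mine, and this is not the actual MEANT/UMEANT code.

```python
from collections import Counter

def estimate_role_weights(role_labels):
    """Weight each semantic role label by its relative frequency in a
    corpus of labels (UMEANT-style unsupervised estimation sketch)."""
    counts = Counter(role_labels)
    total = sum(counts.values())
    return {label: count / total for label, count in counts.items()}

weights = estimate_role_weights(["A0", "A1", "A0", "AM-TMP"])
# weights["A0"] == 0.5
```

Because the estimate needs only the labels themselves, it can be computed from references (UMEANT) or from the foreign input (XMEANT) without human adequacy judgments.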
Abstract | In the described system, semantic role labels of source sentences are used in a domain-independent manner to generate both questions and answers related to the source sentence. |
Approach | SENNA provides the tokenizing, POS tagging, syntactic constituency parsing and semantic role labeling used in the system.
Approach | SENNA produces separate semantic role labels for each predicate in the sentence. |
Linguistic Challenges | For example, in “Plant roots and bacterial decay use carbon dioxide in the process of respiration,” the word use was classified as NN, leaving no predicate and no semantic role labels in this sentence.
Related Work | (2013), which used semantic role labeling to identify patterns in the source text from which questions can be generated. |
Abstract | A number of studies have presented machine-learning approaches to semantic role labeling with availability of corpora such as FrameNet and PropBank. |
Conclusion | We confirmed that modeling the role generalization at feature level was better than the conventional approach that replaces semantic role labels.
Design of Role Groups | We define a role group as a set of role labels grouped by a criterion. |
Introduction | Semantic Role Labeling (SRL) is a task of analyzing predicate-argument structures in texts. |
Abstract | We model the problem as a joint dependency parsing and semantic role labeling task. |
Introduction | We model our problem as a joint dependency parsing and role labeling task, assuming a Bayesian generative process. |
Problem Formulation | We formalize the learning problem as a dependency parsing and role labeling problem. |
Problem Formulation | In addition, the role labeling problem is to assign a tag to each noun phrase in a specification tree, indicating whether the phrase is a key phrase or a background phrase. |
Class Analyses | Since dictionary publishers have not previously devoted much effort in analyzing preposition behavior, we believe PDEP may serve an important role, particularly for various NLP applications in which semantic role labeling is important. |
Class Analyses | We expect that desired improvements will come from usage in various NLP tasks, particularly word-sense disambiguation and semantic role labeling.
Introduction | (2013); Srikumar and Roth (2011)) have shown the value of prepositional phrases in joint modeling with verbs for semantic role labeling . |
See http://clg.wlv.ac.uk/projects/DVC | The occurrence of these invalid instances provides an opportunity for improving taggers, parsers, and semantic role labelers.
Abstract | The task of distinguishing between the two has strong relations to various basic NLP tasks such as syntactic parsing, semantic role labeling and subcategorization acquisition. |
Core-Adjunct in Previous Work | Semantic Role Labeling.
Introduction | Distinguishing between the two argument types has been discussed extensively in various formulations in the NLP literature, notably in PP attachment, semantic role labeling (SRL) and subcategorization acquisition. |
Experimental Setup | Documents are processed by a full NLP pipeline, including token and sentence segmentation, parsing, semantic role labeling, and an information extraction pipeline consisting of mention detection, NP coreference, cross-document resolution, and relation detection (Florian et al., 2004; Luo et al., 2004; Luo and Zitouni, 2005).
The Framework | Cross-document coreference resolution, semantic role labeling and relation extraction are accomplished Via the methods described in Section 5. |
The Framework | semantic role label |
Experiments | We then pass the parses to a Chinese semantic role labeler (Li et al., 2010), trained on the Chinese PropBank 3.0 (Xue and Palmer, 2009), to annotate semantic roles for all verbal predicates (part-of-speech tag VV, VE, or VC). |
Experiments | tactic parsing and semantic role labeling on the Chinese sentences, then train the models by using MaxEnt toolkit with L1 regularizer (Tsuruoka et al., 2009).3 Table 3 shows the reordering type distribution over the training data. |
Related Work | Finally in the postprocessing approach category, Wu and Fung (2009) performed semantic role labeling on translation output and reordered arguments to maximize the cross-lingual match of the semantic frames between the source sentence and the target translation. |
Noun predicate-argument structure | The only information we are not specifying in the syntactic analysis is the role labels assigned to each of the syntactic arguments.
Noun predicate-argument structure | Our analysis requires semantic role labels for each argument of the nominal predicates in the Penn Treebank — precisely what NomBank (Meyers et al., 2004) provides. |
Noun predicate-argument structure | We then assume that any prepositional phrase or genitive determiner annotated as a core argument in NomBank should be analysed as a complement, while peripheral arguments and adnominals that receive no semantic role label at all are analysed as adjuncts. |