Relational Similarity Experiments | We further experimented with the SemEval’07 task 4 dataset (Girju et al., 2007), where each example consists of a sentence, a target semantic relation, two nominals to be judged on whether they are in that relation, manually annotated WordNet senses, and the Web query used to obtain the sentence.
Relational Similarity Experiments | The SemEval competition defines four types of systems, depending on whether the manually annotated WordNet senses and the Google query are used: A (WordNet=no, Query=no), B (WordNet=yes, Query=no), C (WordNet=no, Query=yes), and D (WordNet=yes, Query=yes).
Relational Similarity Experiments | We experimented with types A and C only, since we believe that requiring manually annotated WordNet sense keys is an unrealistic assumption for a real-world application.
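Relational Similarity Experiments | For illustration only, the sketch below models the example structure and the system-type flags described above; the SemEval07Example class, its field names, and system_type are hypothetical conveniences, not the official data format of the task.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SemEval07Example:
    """One SemEval'07 Task 4 example (hypothetical field names)."""
    sentence: str                  # sentence containing both nominals
    relation: str                  # target semantic relation, e.g. "Cause-Effect"
    nominal1: str                  # first nominal to be judged
    nominal2: str                  # second nominal to be judged
    wordnet_sense1: Optional[str]  # manually annotated WordNet sense key (ignored in types A and C)
    wordnet_sense2: Optional[str]
    web_query: Optional[str]       # Web query used to obtain the sentence
    label: bool                    # True if the nominals are in the target relation

def system_type(use_wordnet: bool, use_query: bool) -> str:
    """Map the two resource flags to the SemEval system types A-D."""
    return {(False, False): "A", (True, False): "B",
            (False, True): "C", (True, True): "D"}[(use_wordnet, use_query)]
```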
Factors Affecting System Performance | We use a balanced corpus of 800 manually annotated sentences extracted from 83 newspaper texts.
Factors Affecting System Performance | 200 sentences from this corpus (100 positive and 100 negative) were also randomly selected for an inter-annotator agreement study and were manually annotated by two independent annotators.
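Factors Affecting System Performance | The agreement measure used for this study is not stated in this excerpt; as a minimal sketch, assuming agreement is reported as Cohen's kappa over the 200 doubly annotated sentences, it could be computed as follows.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators labelling the same items."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    expected = sum(counts_a[c] * counts_b[c]
                   for c in set(labels_a) | set(labels_b)) / (n * n)
    return (observed - expected) / (1 - expected)

# e.g. kappa over the two annotators' positive/negative labels:
# kappa = cohens_kappa(annotator1_labels, annotator2_labels)
```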
Lexicon-Based Approach | In order to assign a membership score to each word, we performed 58 system runs on unique, non-intersecting seed lists drawn from the manually annotated list of positive and negative adjectives of Hatzivassiloglou and McKeown (1997).
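Lexicon-Based Approach | A minimal sketch of this procedure is given below, assuming the annotated adjectives are split into disjoint seed lists and each word's final membership score is the average over the runs that scored it; seeds_per_run and run_system (the lexicon-expansion step itself) are hypothetical placeholders, not the paper's actual implementation.

```python
import random
from collections import defaultdict

def make_seed_lists(positive, negative, n_runs, seeds_per_run):
    """Split the annotated adjectives into unique, non-intersecting seed lists."""
    pos, neg = positive[:], negative[:]
    random.shuffle(pos)
    random.shuffle(neg)
    for i in range(n_runs):
        yield (pos[i * seeds_per_run:(i + 1) * seeds_per_run],
               neg[i * seeds_per_run:(i + 1) * seeds_per_run])

def aggregate_membership(positive, negative, run_system, n_runs=58, seeds_per_run=5):
    """Average each word's membership score over all system runs (hypothetical aggregation)."""
    totals, counts = defaultdict(float), defaultdict(int)
    for pos_seeds, neg_seeds in make_seed_lists(positive, negative, n_runs, seeds_per_run):
        scores = run_system(pos_seeds, neg_seeds)  # word -> membership score, supplied by caller
        for word, score in scores.items():
            totals[word] += score
            counts[word] += 1
    return {word: totals[word] / counts[word] for word in totals}
```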
Introduction | Some of the teams have used the manually annotated WN labels provided with the dataset, and some have not. |
Related Work | In this paper we do not use any manually annotated resources apart from the classification training set. |
Related Work | This manually annotated dataset includes a representative, rather than exhaustive, list of 7 important nominal relationships.