Divergent (Re)Categorization | To find the stable properties that can underpin a meaningful fine-grained category for cowboy, we must seek out the properties that are so often presupposed to be salient for all cowboys that one can use them to anchor a simile, such as “swaggering like a cowboy” or “as grizzled as a cowboy”.
Divergent (Re)Categorization | Since each hit will also yield a value for S via the wildcard *, and a fine-grained category P-S for C, we use this approach here to harvest fine-grained categories from the web for most of our similes.
Divergent (Re)Categorization | After 2 cycles we acquire 43 categories; after 3 cycles, 72; after 4 cycles, 93; and after 5 cycles, we acquire 102 fine-grained perspectives on cola, such as stimulating-drink and corrosive-substance.
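Divergent (Re)Categorization | To make the wildcard-based harvesting concrete, below is a minimal sketch of the idea, assuming a local list of text snippets stands in for web-search hits; the regex, the harvest_categories helper, and the sample sentences are illustrative, not the authors' actual pipeline.

```python
import re

# Minimal sketch: each "as S as a C" match pairs a salient property S with a
# concept C; joining them yields a fine-grained category name such as
# "grizzled-cowboy". Pattern and data are illustrative only.
SIMILE = re.compile(r"as (\w+) as an? (\w+)", re.IGNORECASE)

def harvest_categories(snippets):
    """Collect property-concept category names from simile-bearing snippets."""
    categories = set()
    for text in snippets:
        for prop, concept in SIMILE.findall(text):
            categories.add(f"{prop.lower()}-{concept.lower()}")
    return categories

hits = [  # toy stand-ins for web-search hits
    "He was as grizzled as a cowboy and swaggered like one too.",
    "The cola was as stimulating as a tonic but as corrosive as an acid.",
]
print(harvest_categories(hits))
# e.g. {'grizzled-cowboy', 'stimulating-tonic', 'corrosive-acid'}
```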
Measuring and Creating Similarity | We also want any fine-grained perspective M-H to influence our similarity metric, provided it can be coherently tied into WordNet as a shared hypernym of the two lexical concepts being compared.
Measuring and Creating Similarity | The denominator in (2) denotes the total size of all fine-grained categories that can be coherently added to WordNet for any term.
Measuring and Creating Similarity | For a shared dimension H in the feature vectors of concepts C1 and C2, if at least one fine-grained perspective M-H has been added to WordNet between H and C1 and between H and C2, then the value of dimension H for C1 and for C2 is given by (4): |
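Measuring and Creating Similarity | Since equations (2) and (4) are not reproduced in these excerpts, the snippet below only illustrates the denominator described for (2): the total size of all fine-grained categories that can be coherently added to WordNet, summed over every term. The data structure and numbers are invented for illustration; this is not the paper's implementation of (2) or (4).

```python
# Illustrative only: a toy store of fine-grained categories and their members.
# The denominator described for equation (2) is the total size of all such
# categories.
fine_grained_categories = {
    "stimulating-drink":   {"cola", "coffee", "tea"},
    "corrosive-substance": {"cola", "acid", "bleach"},
}

denominator = sum(len(members) for members in fine_grained_categories.values())
print(denominator)  # 6 for this toy data
```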
Related Work and Ideas | A fine-grained category hierarchy permits fine-grained similarity judgments, and though WordNet is useful, its sense hierarchies are not especially fine-grained.
Related Work and Ideas | However, we can automatically make WordNet subtler and more discerning, by adding new fine-grained categories to unite lexical concepts whose similarity is not reflected by any existing categories. |
Related Work and Ideas | Veale (2003) shows how a property that is found in the glosses of two lexical concepts, of the same depth, can be combined with their LCS to yield a new fine-grained parent category, so e.g. |
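Related Work and Ideas | As a rough sketch of the idea attributed to Veale (2003), assuming NLTK's WordNet interface: take a content word shared by the glosses of two lexical concepts and combine it with their least common subsumer (LCS) to name a candidate fine-grained parent category. The naming convention and stopword list here are illustrative, not the paper's exact procedure.

```python
from nltk.corpus import wordnet as wn

STOP = {"a", "an", "the", "of", "or", "and", "in", "to", "by", "with", "for"}

def propose_fine_grained_parents(word1, word2):
    """Candidate P-LCS categories from gloss words shared by the first senses."""
    s1, s2 = wn.synsets(word1)[0], wn.synsets(word2)[0]
    lcs = s1.lowest_common_hypernyms(s2)[0]          # least common subsumer
    shared = (set(s1.definition().lower().split())
              & set(s2.definition().lower().split())) - STOP
    return [f"{prop}-{lcs.lemma_names()[0]}" for prop in sorted(shared)]

# Output depends on the installed WordNet glosses: any shared gloss property P
# of the two concepts yields a candidate category named P-LCS.
print(propose_fine_grained_parents("espresso", "ale"))
```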
Discussion: a Multilingual FrameNet | Also, fine-grained sense and frame distinctions may be more relevant in one language than in another language. |
Discussion: a Multilingual FrameNet | However, we find lower performance for verbs in a fine-grained setting.
Discussion: a Multilingual FrameNet | We argue that an improved alignment algorithm, for instance taking subcategorization information into account, can identify the fine-grained distinctions. |
FrameNet — Wiktionary Alignment | The verb senses are very fine-grained and thus present a difficult alignment task. |
FrameNet — Wiktionary Alignment | A number of false positives occur because the gold standard was developed in a very fine-grained manner: distinctions such as causative vs. inchoative (enlarge: become large vs. enlarge: make large) were explicitly stressed in the definitions and thus annotated as different senses by the annotators.
Intermediate Resource FNWKxx | Because sense granularity was an issue in the error analysis, we considered two alignment decisions: (a) fine-grained alignment: the two glosses describe the same sense; (b) coarse-grained alignment. |
Intermediate Resource FNWKxx | The precision for the fine-grained alignment (a) is lower than the overall precision on the gold standard.
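Intermediate Resource FNWKxx | The toy example below (not the authors' evaluation code) contrasts precision under a fine-grained gold standard, where causative and inchoative senses such as enlarge: make large vs. enlarge: become large are kept apart, with precision under a coarse-grained gold standard that merges them. The alignment pairs are invented.

```python
def precision(predicted, gold):
    """Fraction of predicted alignments that appear in the gold standard."""
    return len(predicted & gold) / len(predicted) if predicted else 0.0

# Hypothetical sense alignments between FrameNet-style senses and Wiktionary.
predicted = {("enlarge.make_large", "wikt:enlarge#1"),
             ("enlarge.become_large", "wikt:enlarge#1")}
gold_fine = {("enlarge.make_large", "wikt:enlarge#1")}          # causative only
gold_coarse = gold_fine | {("enlarge.become_large", "wikt:enlarge#1")}

print(precision(predicted, gold_fine))    # 0.5  (fine-grained decision)
print(precision(predicted, gold_coarse))  # 1.0  (coarse-grained decision)
```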
Evaluation | HYENA (Hierarchical tYpe classification for Entity NAmes), the method of (Yosef 2012), based on a feature-rich classifier for fine-grained, hierarchical type tagging.
Introduction | Our aim is for all recognized and newly discovered entities to be semantically interpretable by having fine-grained types that connect them to KB classes. |
Introduction | For informative knowledge, new entities must be typed in a fine-grained manner (e.g., guitar player, blues band, concert, as opposed to crude types like person, organization, event). |
Introduction | Therefore, our setting resembles the established task of fine-grained typing for noun phrases (Fleischmann 2002), with the difference being that we disregard common nouns and phrases for prominent in-KB entities and instead exclusively focus on the difficult case of phrases that likely denote new entities.
Related Work | There is fairly little work on fine-grained typing, notable results being (Fleischmann 2002; Rahman 2010; Ling 2012; Yosef 2012). |
Related Work | These methods consider type taxonomies similar to the one used for PEARL, consisting of several hundred fine-grained types.
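Related Work | As an illustration of what a fine-grained, hierarchical type inventory buys over crude types, the sketch below places leaf types such as guitar_player under coarser ancestors, so typing an entity at the leaf also connects it to classes like person. The taxonomy is a toy example, not the PEARL or HYENA inventory.

```python
# Toy type taxonomy: child type -> parent type (illustrative only).
PARENT = {
    "guitar_player": "musician",
    "musician": "artist",
    "artist": "person",
    "blues_band": "musical_group",
    "musical_group": "organization",
}

def type_chain(fine_type):
    """Walk from a fine-grained type up to its crudest ancestor."""
    chain = [fine_type]
    while chain[-1] in PARENT:
        chain.append(PARENT[chain[-1]])
    return chain

print(type_chain("guitar_player"))  # ['guitar_player', 'musician', 'artist', 'person']
```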
Abstract | This paper addresses the task of fine-grained opinion extraction — the identification of opinion-related entities (the opinion expressions, the opinion holders, and the targets of the opinions) and of the relations between opinion expressions and their targets and holders.
Experiments | For evaluation, we used version 2.0 of the MPQA corpus (Wiebe et al., 2005; Wilson, 2008), a widely used data set for fine-grained opinion analysis. We considered the subset of 482 documents that contain attitude and target annotations.
Introduction | Fine-grained opinion analysis is concerned with identifying opinions in text at the expression level; this includes identifying the subjective (i.e., opinion) expression itself, the opinion holder and the target of the opinion (Wiebe et al., 2005). |
Introduction | Not surprisingly, fine-grained opinion extraction is a challenging task due to the complexity and variety of the language used to express opinions and their components (Pang and Lee, 2008). |
Introduction | We evaluate our approach using a standard corpus for fine-grained opinion analysis (the MPQA corpus (Wiebe et al., 2005)) and demonstrate that our model outperforms, by a significant margin, traditional baselines that do not employ joint inference for extracting opinion entities and different types of opinion relations.
Related Work | Significant research effort has been invested into fine-grained opinion extraction for open-domain text such as news articles (Wiebe et al., 2005; Wilson et al., 2009). |
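Related Work | To make the task definition above concrete, the sketch below shows one possible in-memory representation of fine-grained opinion extraction output: opinion expressions, holders, and targets as labeled spans, plus the relations linking expressions to their holders and targets. The class and relation names are illustrative, not MPQA's annotation scheme.

```python
from dataclasses import dataclass

@dataclass
class Span:
    start: int   # token offsets within the sentence
    end: int
    label: str   # "EXPRESSION", "HOLDER", or "TARGET"

@dataclass
class Relation:
    expression: Span
    argument: Span
    kind: str    # "HAS_HOLDER" or "HAS_TARGET" (illustrative names)

# "The committee harshly criticized the proposal."
holder = Span(0, 2, "HOLDER")        # "The committee"
expr   = Span(2, 4, "EXPRESSION")    # "harshly criticized"
target = Span(4, 6, "TARGET")        # "the proposal"
relations = [Relation(expr, holder, "HAS_HOLDER"),
             Relation(expr, target, "HAS_TARGET")]
print(len(relations))  # 2
```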
Application to Essay Scoring | This fine-grained scale resulted in higher mean pairwise inter-rater correlations than the traditional integer-only scale (r=0.79 vs around r=0.70 for the operational scoring). |
Application to Essay Scoring | This dataset provides a very fine-grained ranking of the essays, with almost no two essays getting exactly the same score. |
Application to Essay Scoring | This is a very competitive baseline, as e-rater features explain more than 70% of the variation in essay scores on a relatively coarse scale (setA) and more than 80% of the variation in scores on a fine-grained scale (setB). |
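Application to Essay Scoring | The snippet below (assuming numpy, with invented scores) illustrates the two statistics mentioned above: the mean pairwise inter-rater Pearson correlation on a fine-grained scale, and the share of score variance explained by a feature-based predictor, i.e. R^2.

```python
import numpy as np

# Invented fine-grained (non-integer) scores from three raters for five essays.
scores = np.array([
    [3.2, 3.4, 3.1, 4.8, 2.6],   # rater 1
    [3.0, 3.5, 3.3, 4.6, 2.9],   # rater 2
    [3.4, 3.6, 3.0, 4.9, 2.5],   # rater 3
])
pairs = [(i, j) for i in range(len(scores)) for j in range(i + 1, len(scores))]
mean_r = np.mean([np.corrcoef(scores[i], scores[j])[0, 1] for i, j in pairs])

# Variance explained (R^2) by a hypothetical feature-based predictor.
consensus = scores.mean(axis=0)
predicted = np.array([3.1, 3.6, 3.2, 4.5, 2.8])
r2 = 1 - np.sum((consensus - predicted) ** 2) / np.sum((consensus - consensus.mean()) ** 2)
print(round(float(mean_r), 2), round(float(r2), 2))
```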
Methodology | We chose a relatively fine-grained binning and performed no optimization for grid selection; for more sophisticated gridding approaches to study nonlinear relationships in the data, see Reshef et al. |
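Methodology | Below is a rough sketch, assuming numpy, of what fine-grained binning without grid optimization looks like in practice: both variables are discretized onto a fixed grid and their mutual information is estimated from the joint histogram. The bin count, helper name, and data are illustrative; this is not the grid-optimizing procedure of Reshef et al.

```python
import numpy as np

def grid_mutual_information(x, y, bins=20):
    """Estimate mutual information from a fixed, relatively fine-grained grid."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    nonzero = p_xy > 0
    return float(np.sum(p_xy[nonzero] * np.log2(p_xy[nonzero] / (p_x @ p_y)[nonzero])))

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = x ** 2 + rng.normal(scale=0.1, size=1000)   # nonlinear relationship
print(grid_mutual_information(x, y, bins=20))
```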
Evaluation | Note that the information gain agent starts dialogs with the top-level and appropriate subcategory questions, so it is only for longer dialogs that the fine-grained aspects boost performance. |
Generating Questions from Reviews | To identify these subcategories, we run Latent Dirichlet Allocation (LDA) (Blei et al., 2003) on the reviews of each set of businesses in the twenty most common top-level categories, using 10 topics and concatenating all of a business’s reviews into one document. Several researchers have used sentence-level documents to model topics in reviews, but these tend to generate topics about fine-grained aspects of the sort we discuss in Section 2.2 (Jo and Oh, 2011; Brody and Elhadad, 2010).
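Generating Questions from Reviews | A minimal sketch of that topic-modeling step, assuming scikit-learn: each business's reviews are concatenated into one document and an LDA model with 10 topics is fit over the businesses in a category. The toy reviews and variable names are illustrative, not the authors' pipeline.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Toy data: reviews grouped by business; in the paper's setting there would be
# one such collection per top-level business category.
reviews_by_business = {
    "biz_1": ["Great tacos and quick service.", "The salsa bar is excellent."],
    "biz_2": ["Cozy patio but slow service.", "Loved the margaritas on the patio."],
}
documents = [" ".join(revs) for revs in reviews_by_business.values()]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(documents)
lda = LatentDirichletAllocation(n_components=10, random_state=0).fit(counts)

# Each row of lda.components_ weights the vocabulary for one topic; its
# top-weighted words suggest a subcategory (e.g. "patio", "service") within
# the top-level category.
vocab = vectorizer.get_feature_names_out()
print(vocab[lda.components_[0].argsort()[::-1][:3]])
```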
Generating Questions from Reviews | 2.2 Questions from Fine-Grained Aspects |
Introduction | The framework makes use of techniques from topic modeling and sentiment-based aspect extraction to identify fine-grained attributes for each business. |
Introduction | However, recent work has shown that parsing results can be greatly improved by defining more fine-grained syntactic categories.
Introduction | This gives a fine-grained notion of semantic similarity, which is useful for tackling problems like ambiguous attachment decisions. |
Introduction | The former can capture the discrete categorization of phrases into NP or PP while the latter can capture fine-grained syntactic and compositional-semantic information on phrases and words. |
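Introduction | The contrast drawn above can be sketched as a phrase node that carries both a discrete syntactic category and a dense vector of fine-grained features. The composition function, dimensionality, and class names below are placeholders, assuming numpy; they are not the model proposed in the paper.

```python
import numpy as np

class PhraseNode:
    """A phrase with a discrete category (e.g. NP, PP) plus a dense vector."""
    def __init__(self, category, vector):
        self.category = category
        self.vector = vector

def compose(left, right, parent_category, W):
    """Combine two children into a parent node with a simple affine map + tanh."""
    child_vec = np.concatenate([left.vector, right.vector])
    return PhraseNode(parent_category, np.tanh(W @ child_vec))

dim = 4
W = np.random.default_rng(0).normal(size=(dim, 2 * dim))
det = PhraseNode("DT", np.full(dim, 0.1))
noun = PhraseNode("NN", np.full(dim, 0.2))
np_node = compose(det, noun, "NP", W)
print(np_node.category, np_node.vector.shape)  # NP (4,)
```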
Towards A Universal Treebank | This mainly consisted in relabeling dependency relations and, due to the fine-grained label set used in the Swedish Treebank (Teleman, 1974), this could be done with high precision. |
Towards A Universal Treebank | Making fine-grained label distinctions was discouraged. |
Towards A Universal Treebank | Such a reduction may ultimately be necessary also in the case of dependency relations, but since most of our data sets were created through manual annotation, we could afford to retain a fine-grained analysis, knowing that it is always possible to map from finer to coarser distinctions, but not vice versa.
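Towards A Universal Treebank | The point above can be illustrated with a simple deterministic mapping from fine-grained dependency labels to coarser ones; no inverse mapping exists, because the coarse label does not record which fine-grained relation it came from. The label pairs below are illustrative, not the treebanks' actual inventories.

```python
# Illustrative fine-to-coarse label map; real treebank inventories differ.
FINE_TO_COARSE = {
    "nsubj": "subj",
    "nsubjpass": "subj",
    "dobj": "obj",
    "iobj": "obj",
}

def coarsen(labels):
    """Map fine-grained labels to coarse ones, leaving unknown labels intact."""
    return [FINE_TO_COARSE.get(label, label) for label in labels]

print(coarsen(["nsubj", "dobj", "nsubjpass"]))  # ['subj', 'obj', 'subj']
# The reverse is not recoverable: "subj" alone does not say whether the
# original relation was nsubj or nsubjpass.
```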
Introduction | In our approach, we classify sentences as S-(non)relevant because this is the most fine-grained level at which S-relevance manifests itself; at the word or phrase level, S-relevance classification is not possible because of scope and context effects. |
Related Work | Our work is most closely related to Taboada et al. (2009), who define a fine-grained classification that is similar to sentiment relevance on the highest level.
Related Work | Täckström and McDonald (2011) develop a fine-grained annotation scheme that includes S-nonrelevance as one of five categories.