Experiments | Out of these, 4,011 are positive relation examples annotated with 6 coarse-grained relation types and 22 fine-grained relation types. |
Experiments | We similarly build a fine-grained classifier to disambiguate between 45 relation labels. |
Experiments | We built one binary, one coarse-grained, and one fine-grained classifier for each fold. |
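The per-fold setup above (one binary, one coarse-grained, one fine-grained classifier) can be sketched as a cascade: a binary model first decides whether any relation holds, and only positive instances are routed to the coarse- and fine-grained models. This is a minimal illustration; the classifier functions and labels below are hypothetical stand-ins, not the paper's actual models.

```python
# Cascaded relation classification sketch (hypothetical stand-ins).
# Only examples the binary classifier accepts reach the coarse- and
# fine-grained classifiers, mirroring the three-classifier-per-fold setup.

def cascade_classify(example, binary_clf, coarse_clf, fine_clf):
    """Return (coarse_label, fine_label), or (None, None) if no relation."""
    if not binary_clf(example):
        return (None, None)
    return (coarse_clf(example), fine_clf(example))

# Toy stand-in classifiers for illustration only.
binary = lambda ex: "rel" in ex
coarse = lambda ex: "PART-WHOLE"        # e.g. one of the 6 coarse-grained types
fine = lambda ex: "PART-WHOLE.Geo"      # e.g. one of the fine-grained subtypes

pos = cascade_classify({"rel": True}, binary, coarse, fine)
neg = cascade_classify({}, binary, coarse, fine)
```

In practice each stage would be a trained model (e.g. an SVM or log-linear classifier) rather than these toy lambdas; the cascade structure is the point.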
Abstract | The second measures the similarity between the source query and each target query, and then combines these fine-grained similarity values for its importance estimation. |
Conclusion | The second measures the similarity between a source query and each target query, and then combines the fine-grained similarity values to estimate its importance to the target domain. |
Evaluation | By contrast, Algorithm 2 achieves more accurate query weights through the finer-grained similarity measure between the source query and all target queries. |
Evaluation | fine-grained similarity values. |
Query Weighting | more precise measures of query similarity by utilizing the more fine-grained classification hyperplane for separating the queries of two domains. |
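The second weighting scheme described in these excerpts can be sketched as follows: each source query's importance is estimated by combining (here, averaging) its similarity to every target query. Cosine similarity over term-count vectors is an illustrative choice of the "fine-grained similarity measure", not necessarily the paper's exact one, and `query_weight` is a hypothetical helper name.

```python
# Query weighting sketch: a source query's weight is the combined
# (here: mean) similarity to all target-domain queries.
# Cosine over term counts is an assumed, illustrative similarity.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def query_weight(source_query: str, target_queries: list) -> float:
    src = Counter(source_query.split())
    sims = [cosine(src, Counter(t.split())) for t in target_queries]
    return sum(sims) / len(sims) if sims else 0.0

w = query_weight("cheap flights paris", ["flights to paris", "hotel paris"])
```

Averaging is one of several plausible ways to combine the per-query similarities; a sum or a learned combination would fit the same scheme.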
Abstract | The KNN based classifier conducts pre-labeling to collect global coarse evidence across tweets while the CRF model conducts sequential labeling to capture fine-grained information encoded in a tweet. |
Introduction | Following the two-stage prediction aggregation methods (Krishnan and Manning, 2006), such pre-labeled results, together with other conventional features used by the state-of-the-art NER systems, are fed into a linear Conditional Random Fields (CRF) (Lafferty et al., 2001) model, which conducts fine-grained tweet level NER. |
Our Method | Our model is hybrid in the sense that a KNN classifier and a CRF model are sequentially applied to the target tweet, with the goal that the KNN classifier captures global coarse evidence while the CRF model captures fine-grained information encoded in a single tweet and in the gazetteers. |
Our Method | model, which is good at encoding the subtle interactions between words and their labels, compensates for KNN’s inability to capture fine-grained evidence involving multiple decision points. |
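The two-stage aggregation these excerpts describe, where KNN pre-labels are fed into a CRF alongside conventional features, can be sketched at the feature-construction level: each token's feature dict combines standard NER features with the KNN pre-label, so the sequential model can accept or override the global coarse evidence. The feature names and the `knn_prelabels` input are hypothetical; a real system would pass such dicts to a CRF toolkit.

```python
# Sketch: folding KNN pre-labels into per-token CRF features
# (feature names and pre-label values are illustrative assumptions).

def token_features(tokens, i, knn_prelabels):
    """Conventional token features plus the KNN pre-label as coarse evidence."""
    tok = tokens[i]
    return {
        "word.lower": tok.lower(),
        "word.istitle": tok.istitle(),
        "prev.word": tokens[i - 1].lower() if i > 0 else "<BOS>",
        "knn.prelabel": knn_prelabels[i],  # global evidence from the KNN stage
    }

tokens = ["Visiting", "Paris", "tomorrow"]
prelabels = ["O", "LOCATION", "O"]  # toy KNN output
features = [token_features(tokens, i, prelabels) for i in range(len(tokens))]
```

Because the pre-label is just one feature among many, the CRF's learned weights decide how much to trust the KNN stage at each decision point.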
Introduction | Specifically, we are interested in identifying fine-grained product properties across reviews (e.g., battery life for electronics or pizza for restaurants) as well as capturing attributes of these properties, namely aggregate user sentiment. |
Problem Formulation | Property: A property corresponds to some fine-grained aspect of a product. |
Related Work | While our model captures similar high-level intuition, it analyzes fine-grained properties expressed at the snippet level, rather than document-level sentiment. |