Index of papers in Proc. ACL 2014 that mention
  • fine-grained
Yang, Min and Zhu, Dingju and Chow, Kam-Pui
Abstract
The model uses a minimal set of domain-independent seed words as prior knowledge to discover a domain-specific lexicon, learning a fine-grained emotion lexicon that is much richer and better adapted to the target domain.
Abstract
Through comprehensive experiments, we show that our model can generate a high-quality, fine-grained, domain-specific emotion lexicon.
Conclusions and Future Work
In this paper, we have presented a novel emotion-aware LDA model that is able to quickly build a fine-grained domain-specific emotion lexicon for languages without many manually constructed resources.
Experiments
The experimental results show that our algorithm can successfully construct a fine-grained domain-specific emotion lexicon for this corpus, capturing word connotations that may not be obvious without context.
Introduction
As fine-grained annotated data are expensive to obtain, unsupervised approaches are preferred and more widely used in practice.
Introduction
Usually, a high-quality emotion lexicon plays a significant role when applying unsupervised approaches to fine-grained emotion classification.
Introduction
The results demonstrate that our EaLDA model improves the quality and coverage of state-of-the-art fine-grained lexicons.
fine-grained is mentioned in 7 sentences in this paper.
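The seed-word bootstrapping idea quoted above can be illustrated with a much simpler stand-in: expanding a handful of emotion seeds over a toy corpus via pointwise mutual information. The corpus, seeds, and scoring below are illustrative assumptions; this is a sketch of lexicon induction from seeds, not the EaLDA model itself.

```python
from collections import Counter
from itertools import combinations
import math

# Toy corpus and seed words are assumptions for illustration only.
corpus = [
    "happy great wonderful day",
    "sad terrible awful loss",
    "great wonderful gift happy",
    "awful sad news terrible",
]
seeds = {"joy": {"happy"}, "sadness": {"sad"}}

# Document frequencies for words and within-document co-occurrences.
word_counts = Counter()
pair_counts = Counter()
for doc in corpus:
    words = set(doc.split())
    word_counts.update(words)
    for a, b in combinations(sorted(words), 2):
        pair_counts[(a, b)] += 1

n_docs = len(corpus)

def pmi(a, b):
    """Pointwise mutual information of two words over documents."""
    a, b = sorted((a, b))
    joint = pair_counts[(a, b)] / n_docs
    if joint == 0:
        return float("-inf")
    return math.log(joint / ((word_counts[a] / n_docs) * (word_counts[b] / n_docs)))

# Assign each non-seed word to the emotion whose seeds it most co-occurs with.
lexicon = {emo: set(seed_set) for emo, seed_set in seeds.items()}
for w in word_counts:
    if any(w in seed_set for seed_set in seeds.values()):
        continue
    scores = {emo: max(pmi(w, s) for s in seed_set)
              for emo, seed_set in seeds.items()}
    best = max(scores, key=scores.get)
    if scores[best] > 0:
        lexicon[best].add(w)
```

On this toy data, words co-occurring with "happy" (e.g., "great", "wonderful") land in the joy lexicon and those co-occurring with "sad" land in the sadness lexicon.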
Huang, Hongzhao and Cao, Yunbo and Huang, Xiaojiang and Ji, Heng and Lin, Chin-Yew
Abstract
To tackle these challenges, we propose a novel semi-supervised graph regularization model to incorporate both local and global evidence from multiple tweets through three fine-grained relations.
Conclusions
By studying three novel fine-grained relations, detecting semantically related information with semantic meta paths, and exploiting the data manifolds in both unlabeled and labeled data for collective inference, our work can dramatically reduce annotation cost and achieve better performance, thus shedding light on the challenging wikification task for tweets.
Experiments
Our full model SSRegulgg achieves significant improvement over the supervised baseline (5% absolute F1 gain with 95.0% confidence level by the Wilcoxon Matched-Pairs Signed-Ranks Test), showing that incorporating global evidence from multiple tweets with fine-grained relations is beneficial.
Introduction
In order to construct a semantic-rich graph capturing the similarity between mentions and concepts for the model, we introduce three novel fine-grained relations based on a set of local features, social networks and meta paths.
Related Work
Our method is a collective approach with the following novel advancements: (i) a novel graph representation with fine-grained relations, (ii) a unified framework based on meta paths to explore richer relevant context, and (iii) joint identification and linking of mentions under a semi-supervised setting.
Related Work
We introduce a novel graph that incorporates three fine-grained relations.
fine-grained is mentioned in 6 sentences in this paper.
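The semi-supervised graph-regularization idea quoted above can be sketched as label propagation over a mention graph: scores are smoothed along weighted edges while labeled nodes are softly anchored to their labels. The graph, weights, labels, and penalty weight below are hypothetical; this is a generic sketch, not the paper's SSRegu model.

```python
import numpy as np

# Hypothetical 5-node mention graph; edge weights and labels are illustrative.
W = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)
labels = {0: 1.0, 4: 0.0}   # two labeled mentions; the rest are unlabeled
mu = 1.0                     # weight on fitting the labeled seeds

# Gauss-Seidel updates for the regularized objective:
#   sum_ij W_ij (f_i - f_j)^2  +  mu * sum_labeled (f_i - y_i)^2
f = np.zeros(len(W))
for _ in range(200):
    for i in range(len(W)):
        num = W[i] @ f
        den = W[i].sum()
        if i in labels:
            num += mu * labels[i]
            den += mu
        f[i] = num / den
```

At convergence the scores interpolate smoothly between the two labeled nodes, decreasing along the chain from node 0 (labeled 1.0) to node 4 (labeled 0.0).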
Li, Sujian and Wang, Liang and Cao, Ziqiang and Li, Wenjie
Add arc <eC,ej> to GC with
One is composed of 19 coarse-grained relations and the other of 111 fine-grained relations.
Add arc <eC,ej> to GC with
From Table 3 and Table 4, we can see that the addition of more feature types, except the 6th feature type (semantic similarity), can promote the performance of relation labeling, whether using the 19 coarse-grained relations or the 111 fine-grained relations.
Add arc <eC,ej> to GC with
Table 5 lists the 10 features with the highest absolute weights for the parser using the coarse-grained relations, while Table 6 lists the top 10 features for the parser using the fine-grained relations.
Discourse Dependency Structure and Tree Bank
A total of 110 fine-grained relations (e.g.
fine-grained is mentioned in 6 sentences in this paper.
Fu, Ruiji and Guo, Jiang and Qin, Bing and Che, Wanxiang and Wang, Haifeng and Liu, Ting
Background
Such hierarchies have good structures and high accuracy, but their coverage of fine-grained concepts is limited (e.g., “Ranunculaceae” is not included in WordNet).
Conclusion and Future Work
Further improvements are made using a cluster-based approach to model the more fine-grained relations.
Method
hyponym word pairs in our training data and visualize them. Figure 2 shows that the relations are adequately distributed across the clusters, which implies that hypernym–hyponym relations can indeed be decomposed into more fine-grained relations.
Results and Analysis: Varying the Amount of Clusters
Some fine-grained relations exist in Wikipedia, but the coverage is limited.
fine-grained is mentioned in 4 sentences in this paper.
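The decomposition quoted above can be sketched by clustering embedding offsets (hypernym vector minus hyponym vector) with k-means, so that each cluster stands for one finer-grained relation. The offsets below are synthetic stand-ins for offsets computed from trained word embeddings, and the tiny k-means uses a deterministic initialization for reproducibility.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic hypernym-minus-hyponym offsets: two illustrative relation types.
offsets = np.vstack([
    rng.normal(loc=[2.0, 0.0], scale=0.2, size=(20, 2)),  # e.g. taxonomy-like pairs
    rng.normal(loc=[0.0, 2.0], scale=0.2, size=(20, 2)),  # e.g. part-whole-like pairs
])

def kmeans(X, k, iters=50):
    """Minimal k-means; initializes centers from the first and last points."""
    centers = X[[0, len(X) - 1]].copy()
    for _ in range(iters):
        # Distance of every point to every center, then nearest-center assignment.
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            if (assign == j).any():
                centers[j] = X[assign == j].mean(axis=0)
    return assign, centers

assign, centers = kmeans(offsets, 2)
```

With well-separated offset groups, the two synthetic relation types end up in different clusters, mirroring the paper's observation that one coarse relation splits into several finer ones.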
Kalchbrenner, Nal and Grefenstette, Edward and Blunsom, Phil
Experiments
Classifier | Fine-grained (%) | Binary (%)
Experiments
Likewise, in the fine-grained case, we use the standard 8544/1101/2210 splits.
Experiments
The DCNN for the fine-grained result has the same architecture, but the filters have size 10 and 7, the top pooling parameter k is 5 and the number of maps is, respectively, 6 and 12.
Properties of the Sentence Model
For most applications and in order to learn fine-grained feature detectors, it is beneficial for a model to be able to discriminate whether a specific n-gram occurs in the input.
fine-grained is mentioned in 4 sentences in this paper.
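The "top pooling parameter k" mentioned above refers to k-max pooling: keeping the k largest activations of each feature map while preserving their left-to-right order. A minimal NumPy version (the example feature map is illustrative):

```python
import numpy as np

def k_max_pool(x, k):
    """k-max pooling over the time axis: keep each row's k largest values,
    preserving their original sequence order."""
    idx = np.argsort(x, axis=1)[:, -k:]   # positions of the k largest entries
    idx = np.sort(idx, axis=1)            # restore left-to-right order
    return np.take_along_axis(x, idx, axis=1)

# One feature map over a 7-step sequence; values are illustrative.
fmap = np.array([[3., 1., 4., 1., 5., 9., 2.]])
pooled = k_max_pool(fmap, 3)   # -> [[4., 5., 9.]]
```

Because the surviving positions are re-sorted, the pooled output keeps the relative order in which the strong activations occurred, which is what lets the model detect where an n-gram fired, not just whether it fired.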
Yang, Bishan and Cardie, Claire
Approach
The differences are: (1) we encode the coreference relations as soft constraints during learning instead of applying them as hard constraints at inference time; (2) our constraints can apply to both polar and non-polar sentences; (3) we identify coreference relations automatically, without any fine-grained annotations for opinion targets.
Introduction
Accordingly, extracting sentiment at the fine-grained level (e.g., at the sentence or phrase level) has received increasing attention recently due to its challenging nature and its importance in supporting these opinion analysis tasks (Pang and Lee, 2008).
Introduction
However, the discourse relations were obtained from fine-grained annotations and implemented as hard constraints on polarity.
Introduction
Obtaining sentiment labels at the fine-grained level is costly.
fine-grained is mentioned in 4 sentences in this paper.
Zhang, Zhe and Singh, Munindar P.
Experiments
We posit that EDUs are too fine-grained for sentiment analysis.
Introduction
However, these changes can be successfully exploited for inferring fine-grained sentiments.
Introduction
Segments can be shorter than sentences and therefore help capture fine-grained sentiments.
fine-grained is mentioned in 3 sentences in this paper.