Index of papers in Proc. ACL that mention
  • fine-grained
Wu, Xianchao and Matsuzaki, Takuya and Tsujii, Jun'ichi
Abstract
In this paper, we propose to use deep syntactic information for obtaining fine-grained translation rules.
Abstract
A head-driven phrase structure grammar (HPSG) parser is used to obtain the deep syntactic information, which includes a fine-grained description of the syntactic property and a semantic representation of a sentence.
Abstract
We extract fine-grained rules from aligned HPSG tree/forest-string pairs and use them in our tree-to-string and string-to-tree systems.
Introduction
But, “VBN(killed)” is indeed separable into two fine-grained tree fragments of “VBN(killed:active)” and “VBN(killed:passive)”.
Introduction
This motivates our proposal of using deep syntactic information to obtain a fine-grained translation rule set.
Introduction
deep syntactic information of an English sentence, which includes a fine-grained description of the syntactic property and a semantic representation of the sentence.
Related Work
Before describing our approaches to applying deep syntactic information yielded by an HPSG parser for fine-grained rule extraction, we would like to briefly review what kinds of deep syntactic information have been employed for SMT.
Related Work
fine-grained tree-to-string rule extraction, rather than string-to-string translation (Hassan et al., 2007; Birch et al., 2007).
fine-grained is mentioned in 19 sentences in this paper.
Veale, Tony and Li, Guofu
Divergent (Re)Categorization
To find the stable properties that can underpin a meaningful fine-grained category for cowboy, we must seek out the properties that are so often presupposed to be salient of all cowboys that one can use them to anchor a simile, such as “swaggering like a cowboy” or “as grizzled as a cowboy”.
Divergent (Re)Categorization
Since each hit will also yield a value for S via the wildcard *, and a fine-grained category PS for C, we use this approach here to harvest fine-grained categories from the web for most of our similes.
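The harvesting step described here can be pictured as a wildcard pattern match over web snippets. Below is a minimal Python sketch of that idea; the web_hits helper and its canned snippets are hypothetical stand-ins for whatever search API the authors used, and the property–hypernym pairing is our gloss on how categories like “corrosive-substance” arise (the paper may pick a different hypernym per sense).

```python
import re

def web_hits(query: str) -> list[str]:
    # Hypothetical stand-in for a web search call; returns canned
    # snippets here so the sketch runs end to end.
    return ["it was as corrosive as cola",
            "nothing is as stimulating as a cola"]

def harvest_categories(concept: str, hypernym: str) -> set[str]:
    """Bind the wildcard * in 'as * as (a) <concept>' to a property S,
    then pair S with a hypernym to form a fine-grained category."""
    pat = re.compile(rf"as (\w+) as (?:an? )?{concept}", re.IGNORECASE)
    cats = set()
    for snippet in web_hits(f'"as * as a {concept}"'):
        for m in pat.finditer(snippet):
            cats.add(f"{m.group(1).lower()}-{hypernym}")
    return cats

print(harvest_categories("cola", "drink"))
# e.g. {'corrosive-drink', 'stimulating-drink'}
```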
Divergent (Re)Categorization
After 2 cycles we acquire 43 categories; after 3 cycles, 72; after 4 cycles, 93; and after 5 cycles, we acquire 102 fine-grained perspectives on cola, such as stimulating-drink and corrosive-substance.
Measuring and Creating Similarity
We also want any fine-grained perspective M-H to influence our similarity metric, provided it can be coherently tied into WordNet as a shared hypernym of the two lexical concepts being compared.
Measuring and Creating Similarity
The denominator in (2) denotes the sum total of the size of all fine-grained categories that can be coherently added to WordNet for any term.
Measuring and Creating Similarity
For a shared dimension H in the feature vectors of concepts C1 and C2, if at least one fine-grained perspective M-H has been added to WordNet between H and C1 and between H and C2, then the value of dimension H for C1 and for C2 is given by (4):
Related Work and Ideas
A fine-grained category hierarchy permits fine-grained similarity judgments, and though WordNet is useful, its sense hierarchies are not especially fine-grained.
Related Work and Ideas
However, we can automatically make WordNet subtler and more discerning, by adding new fine-grained categories to unite lexical concepts whose similarity is not reflected by any existing categories.
Related Work and Ideas
Veale (2003) shows how a property that is found in the glosses of two lexical concepts, of the same depth, can be combined with their LCS to yield a new fine-grained parent category, so e.g.
fine-grained is mentioned in 17 sentences in this paper.
Li, Linlin and Roth, Benjamin and Sporleder, Caroline
Abstract
The proposed models are tested on three different tasks: coarse-grained word sense disambiguation, fine-grained word sense disambiguation, and detection of literal vs. nonliteral usages of potentially idiomatic expressions.
Experimental Setup
We evaluate our models on three different tasks: coarse-grained WSD, fine-grained WSD and literal vs. nonliteral sense detection.
Experimental Setup
To determine whether our model is also suitable for fine-grained WSD, we test on the data provided by Pradhan et al.
Experimental Setup
(2009) for the SemEval-2007 Task-17 (English fine-grained all-words task).
Experiments
Table 4: Model performance (F-score) for the fine-grained word sense disambiguation task.
Experiments
5.2 Fine-grained WSD
Experiments
Fine-grained WSD, however, is a more difficult task.
Introduction
We apply these models to coarse- and fine-grained WSD and find that they outperform comparable systems for both tasks.
fine-grained is mentioned in 9 sentences in this paper.
Hartmann, Silvana and Gurevych, Iryna
Discussion: a Multilingual FrameNet
Also, fine-grained sense and frame distinctions may be more relevant in one language than in another language.
Discussion: a Multilingual FrameNet
We however find lower performance for verbs in a fine-grained setting.
Discussion: a Multilingual FrameNet
We argue that an improved alignment algorithm, for instance taking subcategorization information into account, can identify the fine-grained distinctions.
FrameNet — Wiktionary Alignment
The verb senses are very fine-grained and thus present a difficult alignment task.
FrameNet — Wiktionary Alignment
A number of false positives occur because the gold standard was developed in a very fine-grained manner: distinctions such as causative vs. inchoative (enlarge: become large vs. enlarge: make large) were explicitly stressed in the definitions and thus annotated as different senses by the annotators.
Intermediate Resource FNWKxx
Because sense granularity was an issue in the error analysis, we considered two alignment decisions: (a) fine-grained alignment: the two glosses describe the same sense; (b) coarse-grained alignment.
Intermediate Resource FNWKxx
The precision for the fine-grained alignment (a) is lower than the overall precision on the gold standard.
fine-grained is mentioned in 8 sentences in this paper.
Nakashole, Ndapandula and Tylenda, Tomasz and Weikum, Gerhard
Evaluation
HYENA (Hierarchical tYpe classification for Entity NAmes), the method of (Yosef 2012), based on a feature-rich classifier for fine-grained, hierarchical type tagging.
Introduction
Our aim is for all recognized and newly discovered entities to be semantically interpretable by having fine-grained types that connect them to KB classes.
Introduction
For informative knowledge, new entities must be typed in a fine-grained manner (e.g., guitar player, blues band, concert, as opposed to crude types like person, organization, event).
Introduction
Therefore, our setting resembles the established task of fine-grained typing for noun phrases (Fleischmann 2002), with the difference being that we disregard common nouns and phrases for prominent in-KB entities and instead exclusively focus on the difficult case of phrases that likely denote new entities.
Related Work
There is fairly little work on fine-grained typing, notable results being (Fleischmann 2002; Rahman 2010; Ling 2012; Yosef 2012).
Related Work
These methods consider type taxonomies similar to the one used for PEARL, consisting of several hundreds of fine-grained types.
fine-grained is mentioned in 7 sentences in this paper.
Yang, Min and Zhu, Dingju and Chow, Kam-Pui
Abstract
The model uses a minimal set of domain-independent seed words as prior knowledge to discover a domain-specific lexicon, learning a fine-grained emotion lexicon that is much richer and better adapted to a specific domain.
Abstract
By comprehensive experiments, we show that our model can generate a high-quality fine-grained domain-specific emotion lexicon.
Conclusions and Future Work
In this paper, we have presented a novel emotion-aware LDA model that is able to quickly build a fine-grained domain-specific emotion lexicon for languages without many manually constructed resources.
Experiments
The experimental results show that our algorithm can successfully construct a fine-grained domain-specific emotion lexicon for this corpus that is able to understand the connotation of the words that may not be obvious without the context.
Introduction
Because fine-grained annotated data are expensive to obtain, unsupervised approaches are preferred and more widely used in practice.
Introduction
Usually, a high-quality emotion lexicon plays a significant role when applying unsupervised approaches to fine-grained emotion classification.
Introduction
The results demonstrate that our EaLDA model improves the quality and the coverage of the state-of-the-art fine-grained lexicon.
fine-grained is mentioned in 7 sentences in this paper.
Chan, Yee Seng and Roth, Dan
Experiments
Out of these, 4,011 are positive relation examples annotated with 6 coarse-grained relation types and 22 fine-grained relation types.
Experiments
We similarly build a fine-grained classifier to disambiguate between 45 relation labels.
Experiments
We built one binary, one coarse-grained, and one fine-grained classifier for each fold.
fine-grained is mentioned in 7 sentences in this paper.
Li, Sujian and Wang, Liang and Cao, Ziqiang and Li, Wenjie
Add arc <e_C, e_j> to G_C with
One is composed of 19 coarse-grained relations and the other of 111 fine-grained relations.
Add arc <e_C, e_j> to G_C with
From Table 3 and Table 4, we can see that the addition of more feature types, except the 6th feature type (semantic similarity), can promote the performance of relation labeling, whether using the 19 coarse-grained relations or the 111 fine-grained relations.
Add arc <e_C, e_j> to G_C with
Table 5 selects 10 features with the highest weights in absolute value for the parser which uses the coarse-grained relations, while Table 6 selects the top 10 features for the parser using the fine-grained relations.
Discourse Dependency Structure and Tree Bank
A total of 110 fine-grained relations (e.g.
fine-grained is mentioned in 6 sentences in this paper.
Huang, Hongzhao and Cao, Yunbo and Huang, Xiaojiang and Ji, Heng and Lin, Chin-Yew
Abstract
To tackle these challenges, we propose a novel semi-supervised graph regularization model to incorporate both local and global evidence from multiple tweets through three fine-grained relations.
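For orientation, semi-supervised graph regularization is usually formulated as below; this is the generic objective from the graph-based learning literature, offered as a gloss rather than the paper's exact model.

```latex
\min_{\mathbf{f}} \; \sum_{i \in \mathcal{L}} (f_i - y_i)^2
  \;+\; \mu \sum_{i,j} W_{ij} \, (f_i - f_j)^2
```

Here $\mathcal{L}$ indexes the labeled instances with labels $y_i$, and $W_{ij}$ are edge weights; in this paper's setting, the edges would come from the three fine-grained relations, so the smoothness term propagates evidence across tweets.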
Conclusions
By studying three novel fine-grained relations, detecting semantically-related information with semantic meta paths, and exploiting the data manifolds in both unlabeled and labeled data for collective inference, our work can dramatically save annotation cost and achieve better performance, thus shedding light on the challenging wikification task for tweets.
Experiments
Our full model SSRegulgg achieves significant improvement over the supervised baseline (5% absolute F1 gain with 95.0% confidence level by the Wilcoxon Matched-Pairs Signed-Ranks Test), showing that incorporating global evidence from multiple tweets with fine-grained relations is beneficial.
Introduction
In order to construct a semantic-rich graph capturing the similarity between mentions and concepts for the model, we introduce three novel fine-grained relations based on a set of local features, social networks and meta paths.
Related Work
Our method is a collective approach with the following novel advancements: (i) A novel graph representation with fine-grained relations, (ii) A unified framework based on meta paths to explore richer relevant context, (iii) Joint identification and linking of mentions under semi-supervised setting.
Related Work
We introduce a novel graph that incorporates three fine-grained relations.
fine-grained is mentioned in 6 sentences in this paper.
Yang, Bishan and Cardie, Claire
Abstract
This paper addresses the task of fine-grained opinion extraction — the identification of opinion-related entities: the opinion expressions, the opinion holders, and the targets of the opinions, and the relations between opinion expressions and their targets and holders.
Experiments
For evaluation, we used version 2.0 of the MPQA corpus (Wiebe et al., 2005; Wilson, 2008), a widely used data set for fine-grained opinion analysis. We considered the subset of 482 documents that contain attitude and target annotations.
Introduction
Fine-grained opinion analysis is concerned with identifying opinions in text at the expression level; this includes identifying the subjective (i.e., opinion) expression itself, the opinion holder and the target of the opinion (Wiebe et al., 2005).
Introduction
Not surprisingly, fine-grained opinion extraction is a challenging task due to the complexity and variety of the language used to express opinions and their components (Pang and Lee, 2008).
Introduction
We evaluate our approach using a standard corpus for fine-grained opinion analysis (the MPQA corpus (Wiebe et al., 2005)) and demonstrate that our model outperforms by a significant margin traditional baselines that do not employ joint inference for extracting opinion entities and different types of opinion relations.
Related Work
Significant research effort has been invested into fine-grained opinion extraction for open-domain text such as news articles (Wiebe et al., 2005; Wilson et al., 2009).
fine-grained is mentioned in 6 sentences in this paper.
Yao, Limin and Riedel, Sebastian and McCallum, Andrew
Evaluations
Since our system predicts clusters that are more fine-grained than the Freebase relations we compare against, the measure of recall is underestimated.
Evaluations
Since our systems predict more fine-grained clusters than
Introduction
fine-grained entity types of two arguments, to handle polysemy.
Introduction
It is difficult to discover a high-quality set of fine-grained entity types due to unknown criteria for developing such a set.
Introduction
In this paper we address the problem of polysemy, while we circumvent the problem of finding fine-grained entity types.
Related Work
They cluster arguments to fine-grained entity types and rank the associations of a relation with these entity types to discover selectional preferences.
fine-grained is mentioned in 6 sentences in this paper.
Cai, Peng and Gao, Wei and Zhou, Aoying and Wong, Kam-Fai
Abstract
The second measures the similarity between the source query and each target query, and then combines these fine-grained similarity values for its importance estimation.
Conclusion
The second measures the similarity between a source query and each target query, and then combines the fine-grained similarity values to estimate its importance to the target domain.
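A minimal sketch of this second scheme, with cosine similarity and a mean combiner standing in for whatever similarity and combination functions the paper actually uses (both are our assumptions):

```python
import numpy as np

def query_importance(source_q: np.ndarray,
                     target_qs: list[np.ndarray]) -> float:
    """Estimate a source query's importance to the target domain by
    combining its fine-grained similarities to every target query."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    # The mean is one simple combiner; the paper may weight differently.
    return float(np.mean([cos(source_q, t) for t in target_qs]))

# Toy usage with random feature vectors:
rng = np.random.default_rng(0)
print(query_importance(rng.random(8), [rng.random(8) for _ in range(5)]))
```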
Evaluation
By contrast, more accurate query weights can be achieved by the more fine-grained similarity measure between the source query and all target queries in algorithm 2.
Evaluation
fine-grained similarity values.
Query Weighting
more precise measures of query similarity by utilizing the more fine-grained classification hyperplane for separating the queries of two domains.
fine-grained is mentioned in 5 sentences in this paper.
Reschke, Kevin and Vogel, Adam and Jurafsky, Dan
Evaluation
Note that the information gain agent starts dialogs with the top-level and appropriate subcategory questions, so it is only for longer dialogs that the fine-grained aspects boost performance.
Generating Questions from Reviews
To identify these subcategories, we run Latent Dirichlet Allocation (LDA) (Blei et al., 2003) on the reviews of each set of businesses in the twenty most common top-level categories, using 10 topics and concatenating all of a business’s reviews into one document. Several researchers have used sentence-level documents to model topics in reviews, but these tend to generate topics about fine-grained aspects of the sort we discuss in Section 2.2 (Jo and Oh, 2011; Brody and Elhadad, 2010).
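As a concrete illustration of this setup, here is a short scikit-learn sketch that treats each business's concatenated reviews as one document and fits a 10-topic LDA model; the toy corpus and preprocessing choices are ours, not the paper's.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy stand-in: one entry per business, all of its reviews concatenated.
reviews_by_business = {
    "biz1": ["great pizza but slow service", "the pizza was cold"],
    "biz2": ["friendly staff and good coffee", "coffee is strong here"],
}
docs = [" ".join(revs) for revs in reviews_by_business.values()]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=10, random_state=0)
lda.fit(X)

# The top words of each topic suggest candidate subcategory questions.
terms = vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"topic {k}: {', '.join(top)}")
```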
Generating Questions from Reviews
2.2 Questions from Fine-Grained Aspects
Introduction
The framework makes use of techniques from topic modeling and sentiment-based aspect extraction to identify fine-grained attributes for each business.
fine-grained is mentioned in 4 sentences in this paper.
Socher, Richard and Bauer, John and Manning, Christopher D. and Andrew Y., Ng
Introduction
However, recent work has shown that parsing results can be greatly improved by defining more fine-grained syntactic
Introduction
This gives a fine-grained notion of semantic similarity, which is useful for tackling problems like ambiguous attachment decisions.
Introduction
The former can capture the discrete categorization of phrases into NP or PP while the latter can capture fine-grained syntactic and compositional-semantic information on phrases and words.
fine-grained is mentioned in 4 sentences in this paper.
Beigman Klebanov, Beata and Flor, Michael
Application to Essay Scoring
This fine-grained scale resulted in higher mean pairwise inter-rater correlations than the traditional integer-only scale (r=0.79 vs around r=0.70 for the operational scoring).
Application to Essay Scoring
This dataset provides a very fine-grained ranking of the essays, with almost no two essays getting exactly the same score.
Application to Essay Scoring
This is a very competitive baseline, as e-rater features explain more than 70% of the variation in essay scores on a relatively coarse scale (setA) and more than 80% of the variation in scores on a fine-grained scale (setB).
Methodology
We chose a relatively fine-grained binning and performed no optimization for grid selection; for more sophisticated gridding approaches to study nonlinear relationships in the data, see Reshef et al.
fine-grained is mentioned in 4 sentences in this paper.
Green, Spence and DeNero, John
A Class-based Model of Agreement
After segmentation, we tag each segment with a fine-grained morpho-syntactic class.
Discussion of Translation Results
Finally, +POS+Agr shows the class-based model with the fine-grained classes (e.g., “Noun+Fem+Sg”).
Experiments
For training the tagger, we automatically converted the ATB morphological analyses to the fine-grained class set.
Introduction
We address this shortcoming with an agreement model that scores sequences of fine-grained morpho-syntactic classes.
fine-grained is mentioned in 4 sentences in this paper.
Chambers, Nathanael
Experiments and Results
Not surprisingly, the fine-grained performance is quite a bit lower than the core relations.
Learning Time Constraints
We also experiment with 7 fine-grained relations:
Learning Time Constraints
Obviously the more fine-grained a relation, the better it can inform a classifier.
Learning Time Constraints
We use a similar function for the seven fine-grained relations.
fine-grained is mentioned in 4 sentences in this paper.
LIU, Xiaohua and ZHANG, Shaodian and WEI, Furu and ZHOU, Ming
Abstract
The KNN based classifier conducts pre-labeling to collect global coarse evidence across tweets while the CRF model conducts sequential labeling to capture fine-grained information encoded in a tweet.
Introduction
Following the two-stage prediction aggregation methods (Krishnan and Manning, 2006), such pre-labeled results, together with other conventional features used by the state-of-the-art NER systems, are fed into a linear Conditional Random Fields (CRF) (Lafferty et al., 2001) model, which conducts fine-grained tweet level NER.
Our Method
Our model is hybrid in the sense that a KNN classifier and a CRF model are sequentially applied to the target tweet, with the goal that the KNN classifier captures global coarse evidence while the CRF model captures fine-grained information encoded in a single tweet and in the gazetteers.
Our Method
model, which is good at encoding the subtle interactions between words and their labels, compensates for KNN’s incapability to capture fine-grained evidence involving multiple decision points.
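A compact sketch of this two-stage hybrid, using scikit-learn's KNN for the pre-labeling stage and sklearn-crfsuite for the sequential stage; the toy tweets, character-n-gram word vectors, and feature set are all illustrative assumptions, not the authors' configuration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neighbors import KNeighborsClassifier
import sklearn_crfsuite  # pip install sklearn-crfsuite

# Toy data standing in for labeled tweets.
tweets = [["Obama", "visits", "NYC"], ["I", "love", "NYC"]]
tags   = [["PER", "O", "LOC"],        ["O", "O", "LOC"]]

# Stage 1: KNN pre-labels individual words (coarse, cross-tweet evidence).
words = [w for t in tweets for w in t]
word_tags = [y for ys in tags for y in ys]
vec = CountVectorizer(analyzer="char_wb", ngram_range=(1, 3)).fit(words)
knn = KNeighborsClassifier(n_neighbors=1).fit(vec.transform(words), word_tags)

def feats(tweet, i):
    w = tweet[i]
    return {
        "lower": w.lower(),
        "istitle": w.istitle(),
        # Stage 1's coarse prediction is injected as one more CRF feature.
        "knn_prelabel": knn.predict(vec.transform([w]))[0],
    }

# Stage 2: a linear-chain CRF does the fine-grained sequential labeling.
X = [[feats(t, i) for i in range(len(t))] for t in tweets]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, tags)
print(crf.predict(X))
```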
fine-grained is mentioned in 4 sentences in this paper.
Fu, Ruiji and Guo, Jiang and Qin, Bing and Che, Wanxiang and Wang, Haifeng and Liu, Ting
Background
Such hierarchies have good structures and high accuracy, but their coverage of fine-grained concepts is limited (e.g., “Ranunculaceae” is not included in WordNet).
Conclusion and Future Work
Further improvements are made using a cluster-based approach in order to model the more fine-grained relations.
Method
hyponym word pairs in our training data and visualize them. Figure 2 shows that the relations are adequately distributed in the clusters, which implies that hypernym—hyponym relations indeed can be decomposed into more fine-grained relations.
Results and Analysis 5.1 Varying the Amount of Clusters
Some fine-grained relations exist in Wikipedia, but the coverage is limited.
fine-grained is mentioned in 4 sentences in this paper.
Kalchbrenner, Nal and Grefenstette, Edward and Blunsom, Phil
Experiments
Classifier Fine-grained (%) Binary (%)
Experiments
Likewise, in the fine-grained case, we use the standard 8544/1101/2210 splits.
Experiments
The DCNN for the fine-grained result has the same architecture, but the filters have size 10 and 7, the top pooling parameter k is 5 and the number of maps is, respectively, 6 and 12.
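The “top pooling parameter k” here refers to the network's k-max pooling operation, which keeps the k largest activations per feature map while preserving their order. A minimal PyTorch sketch of that operation (our illustration, not the authors' code):

```python
import torch

def kmax_pooling(x: torch.Tensor, k: int, dim: int = -1) -> torch.Tensor:
    """Keep the k largest values along `dim`, preserving their
    original left-to-right order, as in the DCNN's k-max pooling."""
    idx = x.topk(k, dim=dim).indices.sort(dim=dim).values
    return x.gather(dim, idx)

# One feature map over a length-7 input, pooled with k = 5:
x = torch.tensor([[3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0]])
print(kmax_pooling(x, k=5))  # tensor([[3., 4., 5., 9., 2.]])
```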
Properties of the Sentence Model
For most applications and in order to learn fine-grained feature detectors, it is beneficial for a model to be able to discriminate whether a specific n-gram occurs in the input.
fine-grained is mentioned in 4 sentences in this paper.
Yang, Bishan and Cardie, Claire
Approach
The differences are: (1) we encode the coreference relations as soft constraints during learning instead of applying them as hard constraints during inference time; (2) our constraints can apply to both polar and non-polar sentences; (3) our identification of coreference relations is automatic without any fine-grained annotations for opinion targets.
Introduction
Accordingly, extracting sentiment at the fine-grained level (e.g., at the sentence or phrase level) has received increasing attention recently due to its challenging nature and its importance in supporting these opinion analysis tasks (Pang and Lee, 2008).
Introduction
However, the discourse relations were obtained from fine-grained annotations and implemented as hard constraints on polarity.
Introduction
Obtaining sentiment labels at the fine-grained level is costly.
fine-grained is mentioned in 4 sentences in this paper.
Agirre, Eneko and Baldwin, Timothy and Martinez, David
Experimental setting
We experiment with both full synsets and SFs as instances of fine-grained and coarse-grained semantic representation, respectively.
Integrating Semantics into Parsing
The more fine-grained our semantic representation, the higher the average polysemy and the greater the need to distinguish between these senses.
Integrating Semantics into Parsing
Disambiguating each word relative to its context of use becomes increasingly difficult for fine-grained representations (Palmer et al., 2006).
Results
We hypothesise that this is due to the avoidance of excessive fragmentation, as occurs with fine-grained senses.
fine-grained is mentioned in 4 sentences in this paper.
Zhang, Zhe and Singh, Munindar P.
Experiments
We posit that EDUs are too fine-grained for sentiment analysis.
Introduction
However, these changes can be successfully exploited for inferring fine-grained sentiments.
Introduction
Segments can be shorter than sentences and therefore help capture fine-grained sentiments.
fine-grained is mentioned in 3 sentences in this paper.
Scheible, Christian and Schütze, Hinrich
Introduction
In our approach, we classify sentences as S-(non)relevant because this is the most fine-grained level at which S-relevance manifests itself; at the word or phrase level, S-relevance classification is not possible because of scope and context effects.
Related Work
Our work is most closely related to (Taboada et al., 2009) who define a fine-grained classification that is similar to sentiment relevance on the highest level.
Related Work
Täckström and McDonald (2011) develop a fine-grained annotation scheme that includes S-nonrelevance as one of five categories.
fine-grained is mentioned in 3 sentences in this paper.
McDonald, Ryan and Nivre, Joakim and Quirmbach-Brundage, Yvonne and Goldberg, Yoav and Das, Dipanjan and Ganchev, Kuzman and Hall, Keith and Petrov, Slav and Zhang, Hao and Täckström, Oscar and Bedini, Claudia and Bertomeu Castelló, Núria and Lee, Jungmee
Towards A Universal Treebank
This mainly consisted in relabeling dependency relations and, due to the fine-grained label set used in the Swedish Treebank (Teleman, 1974), this could be done with high precision.
Towards A Universal Treebank
Making fine-grained label distinctions was discouraged.
Towards A Universal Treebank
Such a reduction may ultimately be necessary also in the case of dependency relations, but since most of our data sets were created through manual annotation, we could afford to retain a fine-grained analysis, knowing that it is always possible to map from finer to coarser distinctions, but not vice versa.
fine-grained is mentioned in 3 sentences in this paper.
Sauper, Christina and Haghighi, Aria and Barzilay, Regina
Introduction
Specifically, we are interested in identifying fine-grained product properties across reviews (e.g., battery life for electronics or pizza for restaurants) as well as capturing attributes of these properties, namely aggregate user sentiment.
Problem Formulation
Property: A property corresponds to some fine-grained aspect of a product.
Related Work
While our model captures similar high-level intuition, it analyzes fine-grained properties expressed at the snippet level, rather than document-level sentiment.
fine-grained is mentioned in 3 sentences in this paper.
Wu, Zhili and Markert, Katja and Sharoff, Serge
Experiments
We use standard classification accuracy (Acc) on the most fine-grained level of target categories in the genre hierarchy.
Introduction
This paper explores a way of using information on the hierarchy of labels for improving fine-grained genre classification.
Structural SVMs
Let x be a document and w_m a weight vector associated with the genre class m in a corpus with k genres at the most fine-grained level.
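Spelled out, this setup is the usual multiclass linear scoring rule; the hierarchical variant below, where a leaf genre's score sums weight vectors along its path in the genre hierarchy, is the standard structural-SVM formulation for label hierarchies and is offered as a gloss rather than the paper's exact equations.

```latex
% Flat prediction over the k most fine-grained genres:
\hat{m} \;=\; \operatorname*{arg\,max}_{m \in \{1,\dots,k\}} \; w_m^{\top} x
% Hierarchical variant: sum the weight vectors on the path
% from the root of the genre hierarchy down to leaf m:
\hat{m} \;=\; \operatorname*{arg\,max}_{m} \sum_{c \in \mathrm{path}(m)} w_c^{\top} x
```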
fine-grained is mentioned in 3 sentences in this paper.
Tratz, Stephen and Hovy, Eduard
Conclusion
In this paper, we present a novel, fine-grained taxonomy of 43 noun-noun semantic relations, the largest annotated noun compound dataset yet created, and a supervised classification method for automatic noun compound interpretation.
Evaluation
The .57–.67 κ figures achieved by the Voted annotations compare well with previously reported inter-annotator agreement figures for noun compounds using fine-grained taxonomies.
Introduction
In this paper, we present a large, fine-grained taxonomy of 43 noun compound relations, a dataset annotated according to this taxonomy, and a supervised, automatic classification method for determining the relation between the head and modifier words in a noun compound.
fine-grained is mentioned in 3 sentences in this paper.
Fowler, Timothy A. D. and Penn, Gerald
A Latent Variable CCG Parser
The Petrov parser uses latent variables to refine a coarse-grained grammar extracted from a training corpus to a grammar which makes much more fine-grained syntactic distinctions.
A Latent Variable CCG Parser
in Petrov’s experiments on the Penn treebank, the syntactic category NP was refined to the more fine-grained NP1 and NP2, roughly corresponding to NPs in subject and object positions.
A Latent Variable CCG Parser
However, this fine-grained control is exactly what the Petrov parser does automatically.
fine-grained is mentioned in 3 sentences in this paper.
Duan, Xiangyu and Zhang, Min and Li, Haizhou
Abstract
But word appears to be too fine-grained in some cases such as non-compositional phrasal equivalences, where no clear word alignments exist.
Conclusion
It is proposed to replace the word, which is too fine-grained, as the basic translational unit.
Introduction
But there is a deficiency in this treatment: the word is too fine-grained in some cases, such as non-compositional phrasal equivalences, where clear word alignments do not exist.
fine-grained is mentioned in 3 sentences in this paper.
Abney, Steven and Bird, Steven
Human Language Project
We postulate that interlinear glossed text is sufficiently fine-grained to serve our purposes.
Human Language Project
All documents will be included in primary form, but the percentage of documents with manual annotation, or manually corrected annotation, decreases at increasingly fine-grained levels of annotation.
Human Language Project
Where manual fine-grained annotation is unavailable, automatic methods for creating it (at a lower quality) are desirable.
fine-grained is mentioned in 3 sentences in this paper.
Tofiloski, Milan and Brooke, Julian and Taboada, Maite
Data and Evaluation
Since Sundance clauses are also too fine-grained for our purposes, we use a few simple rules to collapse clauses that are unlikely to meet our definition of EDU.
Data and Evaluation
A clearer example that illustrates the pitfalls of fine-grained discourse segmenting is shown in the following output from SPADE:
Introduction
The segments produced by a parser, however, are too fine-grained for discourse purposes, breaking off complement and other clauses that are not in a discourse relation to any other segment.
fine-grained is mentioned in 3 sentences in this paper.
Zapirain, Beñat and Agirre, Eneko and Màrquez, Llu'is
Conclusion and Future work
We also tried to map the fine-grained VerbNet roles into coarser roles, but it did not yield better results than the mapping from PropBank roles.
Mapping into VerbNet Thematic Roles
But if we compare them to the results of the PropBank to VerbNet mapping, where we simply substitute the fine-grained roles by their corresponding groups, we see that they still lag behind (second row in Table 6).
On the Generalization of Role Sets
In the case of VerbNet, the more fine-grained distinction among roles seems to depend more on the meaning of the predicate.
fine-grained is mentioned in 3 sentences in this paper.
Schulte im Walde, Sabine and Hying, Christian and Scheible, Christian and Schmid, Helmut
Related Work
On the one hand, their model is asymmetric, thus not giving the same interpretation power to verbs and arguments; on the other hand, the model provides a more fine-grained clustering for nouns, in the form of an additional hierarchical structure of the noun clusters.
Verb Class Model 2.1 Probabilistic Model
A model with a large number of fine-grained concepts as selectional preferences assigns a higher likelihood to the data than a model with a small number of general concepts, because in general a larger number of parameters is better in describing training data.
Verb Class Model 2.1 Probabilistic Model
Consequently, the EM algorithm a priori prefers fine-grained concepts but — due to sparse data problems — tends to overfit the training data.
fine-grained is mentioned in 3 sentences in this paper.