Experimental setting | We experiment with both full synsets and SFs as instances of fine-grained and coarse-grained semantic representation, respectively.
Integrating Semantics into Parsing | The more fine-grained our semantic representation, the higher the average polysemy and the greater the need to distinguish between the resulting senses.
Integrating Semantics into Parsing | Disambiguating each word relative to its context of use becomes increasingly difficult for fine-grained representations (Palmer et al., 2006). |
Results | We hypothesise that this is because it avoids the excessive fragmentation that occurs with fine-grained senses.
Related Work | On the one hand, their model is asymmetric and thus does not give verbs and arguments the same interpretive power; on the other hand, it provides a more fine-grained clustering for nouns, in the form of an additional hierarchical structure over the noun clusters.
Verb Class Model 2.1 Probabilistic Model | A model with a large number of fine-grained concepts as selectional preferences assigns a higher likelihood to the data than a model with a small number of general concepts, because a model with more parameters can, in general, fit the training data more closely.
Verb Class Model 2.1 Probabilistic Model | Consequently, the EM algorithm a priori prefers fine-grained concepts but, owing to data sparseness, tends to overfit the training data.
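The likelihood trade-off described above can be illustrated with a toy sketch (our own construction, not the paper's actual model or data): a fine-grained model with one parameter per noun fits the training data more closely than a single coarse concept, but assigns zero probability to unseen nouns on held-out data.

```python
import math
from collections import Counter

# Toy selectional preferences for the object slot of "drink"
# (hypothetical data for illustration only).
train = ["water", "water", "coffee", "tea", "juice"]
heldout = ["milk", "water"]

# Coarse model: one general concept BEVERAGE over a fixed noun
# inventory; each covered noun gets uniform probability.
beverage_inventory = {"water", "coffee", "tea", "juice", "milk",
                      "soda", "wine", "beer", "cocoa", "cider"}

def coarse_loglik(data):
    p = 1.0 / len(beverage_inventory)
    ll = 0.0
    for w in data:
        if w not in beverage_inventory:
            return float("-inf")
        ll += math.log(p)
    return ll

# Fine model: every noun is its own concept, i.e. one parameter
# per noun, estimated by relative frequency in training.
counts = Counter(train)

def fine_loglik(data):
    ll = 0.0
    for w in data:
        if counts[w] == 0:
            return float("-inf")  # unseen noun: zero probability
        ll += math.log(counts[w] / len(train))
    return ll

# The fine-grained model wins on the training data it memorised,
# but overfits: "milk" was never seen, so held-out likelihood is -inf.
print(fine_loglik(train), coarse_loglik(train))
print(fine_loglik(heldout), coarse_loglik(heldout))
```

This is the overfitting pressure EM faces: the likelihood criterion alone rewards the fine-grained model, even though it generalises worse.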
Conclusion and Future work | We also tried mapping the fine-grained VerbNet roles into coarser roles, but this did not yield better results than the mapping from PropBank roles.
Mapping into VerbNet Thematic Roles | But if we compare them to the results of the PropBank-to-VerbNet mapping, where we simply replace the fine-grained roles with their corresponding groups, we see that they still lag behind (second row in Table 6).
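The substitution step itself is a straightforward relabelling. A minimal sketch, assuming a hypothetical role-to-group table (the groups and mapping below are illustrative, not the ones used in the paper):

```python
# Hypothetical mapping from fine-grained thematic roles to
# coarser role groups; labels without a known group are kept.
ROLE_GROUP = {
    "Agent": "ACTOR",
    "Experiencer": "ACTOR",
    "Theme": "UNDERGOER",
    "Patient": "UNDERGOER",
    "Instrument": "PERIPHERAL",
    "Location": "PERIPHERAL",
}

def coarsen(labels):
    """Replace each fine-grained role with its coarse group."""
    return [ROLE_GROUP.get(r, r) for r in labels]

print(coarsen(["Agent", "Theme", "Location"]))
# Printed: ['ACTOR', 'UNDERGOER', 'PERIPHERAL']
```

Because the substitution is deterministic, any residual gap between the two settings must come from the labels the system predicts, not from the relabelling itself.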
On the Generalization of Role Sets | In the case of VerbNet, the finer-grained distinctions among roles seem to depend more on the meaning of the predicate.