Improving Parsing and PP Attachment Performance with Sense Information
Agirre, Eneko and Baldwin, Timothy and Martinez, David

Article Structure

Abstract

To date, parsers have made limited use of semantic information, but there is evidence to suggest that semantic features can enhance parse disambiguation.

Introduction

Traditionally, parse disambiguation has relied on structural features extracted from syntactic parse trees, and made only limited use of semantic information.

Background

This research is focused on applying lexical semantics in parsing and PP attachment tasks.

Integrating Semantics into Parsing

Our approach to providing the parsers with sense information is to make available the semantic denotation of each word in the form of a semantic class.
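
As a concrete illustration of this substitution, the sketch below replaces content words with the semantic file (SF) of their first WordNet sense, one of the representation and disambiguation combinations examined in the paper. It is a minimal sketch, not the authors' code: it assumes NLTK's WordNet interface (WordNet 3.0 rather than the paper's 2.1), and the helper name substitute_sf is ours.

    # Minimal sketch: replace content words with a WordNet semantic class
    # before parsing, using the first-sense heuristic and the coarse-grained
    # semantic file (SF) representation. Assumes NLTK; not the authors' code.
    from nltk.corpus import wordnet as wn

    PTB_TO_WN = {"NN": wn.NOUN, "VB": wn.VERB, "JJ": wn.ADJ, "RB": wn.ADV}

    def substitute_sf(tagged_tokens):
        """Map (word, PTB POS) pairs to SF classes where a sense exists."""
        out = []
        for word, tag in tagged_tokens:
            wn_pos = PTB_TO_WN.get(tag[:2])
            synsets = wn.synsets(word, pos=wn_pos) if wn_pos else []
            if synsets:
                # lexname() is the lexicographer ("semantic") file of the
                # first sense, e.g. 'noun.artifact' for the tool sense of knife
                out.append(synsets[0].lexname().upper().replace(".", "_"))
            else:
                out.append(word)  # function words etc. are left untouched
        return out

    print(substitute_sf([("cut", "VBD"), ("the", "DT"), ("bread", "NN"),
                         ("with", "IN"), ("a", "DT"), ("knife", "NN")]))
    # ['VERB_CONTACT', 'the', 'NOUN_FOOD', 'with', 'a', 'NOUN_ARTIFACT']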

Experimental setting

We evaluate the performance of our approach in two settings: (1) full parsing, and (2) PP attachment within a full parsing context.

Results

We present the results for each disambiguation approach in turn, analysing the results for parsing and PP attachment separately.

Discussion

The results of the previous section show that the improvements in parsing results are small but significant, for all three word sense disambiguation strategies (gold-standard, 1ST and ASR).

Conclusions

In this work we have trained two state-of-the-art statistical parsers on semantically-enriched input, where content words have been substituted with their semantic classes.

Topics

gold-standard

Appears in 25 sentences as: Gold-standard (2) gold-standard (26)
In Improving Parsing and PP Attachment Performance with Sense Information
  1. We devise a gold-standard sense- and parse tree-annotated dataset based on the intersection of the Penn Treebank and SemCor, and experiment with different approaches to both semantic representation and disambiguation.
    Page 1, “Abstract”
  2. We explore a number of disambiguation strategies, including the use of hand-annotated (gold-standard) senses, the
    Page 1, “Introduction”
  3. These results are achieved using most frequent sense information, which surprisingly outperforms both gold-standard senses and automatic WSD.
    Page 2, “Introduction”
  4. We diverge from this norm in focusing exclusively on a sense-annotated subset of the Brown Corpus portion of the Penn Treebank, in order to investigate the upper bound performance of the models given gold-standard sense information.
    Page 2, “Background”
  5. Based on gold-standard sense information, they achieved large-scale improvements over a basic parse selection model in the context of the Hinoki treebank.
    Page 3, “Background”
  6. We experiment with different ways of tackling WSD, using both gold-standard data and automatic methods.
    Page 3, “Integrating Semantics into Parsing”
  7. One of the main requirements for our dataset is the availability of gold-standard sense and parse tree annotations.
    Page 4, “Experimental setting”
  8. The gold-standard sense annotations allow us to perform upper bound evaluation of the relative impact of a given semantic representation on parsing and PP attachment performance, to contrast with the performance in more realistic semantic disambiguation settings.
    Page 4, “Experimental setting”
  9. The gold-standard parse tree annotations are required in order to carry out evaluation of parser and PP attachment performance.
    Page 4, “Experimental setting”
  10. Over the combined gold-standard parsing dataset, our script extracted a total of 2,541 PP attachment quadruples.
    Page 4, “Experimental setting”
  11. In order to evaluate the PP attachment performance of a parser, we run our extraction script over the parser output in the same manner as for the gold-standard data, and compare the extracted quadruples to the gold-standard ones.
    Page 4, “Experimental setting”


semantic representation

Appears in 20 sentences as: Semantic representation (1) semantic representation (13) semantic representations (6)
In Improving Parsing and PP Attachment Performance with Sense Information
  1. We devise a gold-standard sense- and parse tree-annotated dataset based on the intersection of the Penn Treebank and SemCor, and experiment with different approaches to both semantic representation and disambiguation.
    Page 1, “Abstract”
  2. We explore several models for semantic representation, based around WordNet (Fellbaum, 1998).
    Page 1, “Introduction”
  3. In experimenting with different semantic representations, we require some strategy to disambiguate the semantic class of polysemous words in context (e.g. determining for each instance of crane whether it refers to an animal or a lifting device).
    Page 1, “Introduction”
  4. There are three main aspects that we have to consider in this process: (i) the semantic representation, (ii) semantic disambiguation, and (iii) morphology.
    Page 3, “Integrating Semantics into Parsing”
  5. The more fine-grained our semantic representation, the higher the average polysemy and the greater the need to distinguish between these senses.
    Page 3, “Integrating Semantics into Parsing”
  6. Below, we outline the dataset used in this research and the parser evaluation methodology, explain the methodology used to perform PP attachment, present the different options for semantic representation, and finally detail the disambiguation methods.
    Page 4, “Experimental setting”
  7. The gold-standard sense annotations allow us to perform upper bound evaluation of the relative impact of a given semantic representation on parsing and PP attachment performance, to contrast with the performance in more realistic semantic disambiguation settings.
    Page 4, “Experimental setting”
  8. 4.3 Semantic representation
    Page 5, “Experimental setting”
  9. We experimented with a range of semantic representations, all of which are based on WordNet 2.1.
    Page 5, “Experimental setting”
  10. We experiment with both full synsets and SFs as instances of fine-grained and coarse-grained semantic representation , respectively.
    Page 5, “Experimental setting”
  11. For each of these three semantic representations, we experimented with substituting each of: (1) all open-class POSs (nouns, verbs, adjectives and adverbs), (2) nouns only, and (3) verbs only.
    Page 5, “Experimental setting”


WordNet

Appears in 14 sentences as: WordNet (14) WordNets (1)
In Improving Parsing and PP Attachment Performance with Sense Information
  1. We explore several models for semantic representation, based around WordNet (Fellbaum, 1998).
    Page 1, “Introduction”
  2. RRR consists of 20,081 training and 3,097 test quadruples of the form (v, n1, p, n2), where the attachment decision is either v or n1. The best published results over RRR are those of Stetina and Nagao (1997), who employ WordNet sense predictions from an unsupervised WSD method within a decision tree classifier.
    Page 2, “Background”
  3. (2005) experimented with first-sense and hypernym features from HowNet and CiLin (both WordNets for Chinese) in a generative parse model applied to the Chinese Penn Treebank.
    Page 3, “Background”
  4. Our choice for this work was the WordNet 2.1 lexical database, in which synonyms are grouped into synsets, which are then linked via an ISA hierarchy.
    Page 3, “Integrating Semantics into Parsing”
  5. WordNet contains other types of relations such as meronymy, but we did not use them in this research.
    Page 3, “Integrating Semantics into Parsing”
  6. mallet, square and steel-wool pad are also descendants of TOOL in WordNet, none of which would conventionally be used as the manner adjunct of cut).
    Page 3, “Integrating Semantics into Parsing”
  7. In WordNet 2.1, knife and scissors are sister synsets, both of which have TOOL as their 4th hypernym.
    Page 3, “Integrating Semantics into Parsing”
  8. We experimented with a range of semantic representations, all of which are based on WordNet 2.1.
    Page 5, “Experimental setting”
  9. As mentioned above, words in WordNet are organised into sets of synonyms, called synsets.
    Page 5, “Experimental setting”
  10. Note that these are the two extremes of semantic granularity in WordNet, and we plan to experiment with intermediate representation levels in future research (c.f.
    Page 5, “Experimental setting”
  11. Table 1: A selection of WordNet SFs
    Page 5, “Experimental setting”
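
The knife/scissors footnote quoted in sentences 6 and 7 can be checked directly against the hierarchy. A quick sketch using NLTK, which ships WordNet 3.0, so exact depths and ancestors may differ from the paper's WordNet 2.1:

    # Inspect the ISA hierarchy behind the knife/scissors example.
    # NLTK ships WordNet 3.0; depths may differ from the paper's 2.1.
    from nltk.corpus import wordnet as wn

    knife = wn.synset("knife.n.01")
    scissors = wn.synset("scissors.n.01")

    # the hypernym chain from the root down to 'knife'
    print(" -> ".join(s.name() for s in knife.hypernym_paths()[0]))

    # the most specific ancestor the two synsets share; TOOL sits a few
    # levels above 'knife', as the footnote describes for WordNet 2.1
    print(knife.lowest_common_hypernyms(scissors))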


word sense

Appears in 12 sentences as: word sense (12)
In Improving Parsing and PP Attachment Performance with Sense Information
  1. This demonstrates that word sense information can indeed enhance the performance of syntactic disambiguation.
    Page 1, “Abstract”
  2. use of the most frequent sense, and an unsupervised word sense disambiguation (WSD) system.
    Page 2, “Introduction”
  3. We provide the first definitive results that word sense information can enhance Penn Treebank parser performance, building on earlier results of Bikel (2000) and Xiong et al.
    Page 2, “Introduction”
  4. That we specifically present results for PP attachment in a parsing context is a combination of us supporting the new research direction for PP attachment established by Atterer and Schütze, and us wishing to reinforce the findings of Stetina and Nagao that word sense information significantly enhances PP attachment performance in this new setting.
    Page 2, “Background”
  5. There have been a number of attempts to incorporate word sense information into parsing tasks.
    Page 2, “Background”
  6. The only successful applications of word sense information to parsing that we are aware of are Xiong et al.
    Page 3, “Background”
  7. The combination of word sense and first-level hypernyms produced a significant improvement over their basic model.
    Page 3, “Background”
  8. (2007) extended this work in implementing a discriminative parse selection model incorporating word sense information mapped onto upper-level ontologies of differing depths.
    Page 3, “Background”
  9. Other notable examples of the successful incorporation of lexical semantics into parsing, not through word sense information but indirectly via selectional preferences, are Dowding et al.
    Page 3, “Background”
  10. This problem of identifying the correct sense of a word in context is known as word sense disambiguation (WSD: Agirre and Edmonds (2006)).
    Page 3, “Integrating Semantics into Parsing”
  11. We use Bikel’s randomized parsing evaluation comparator (with p < 0.05 throughout) to test the statistical significance of the results using word sense information, relative to the respective baseline parser using only lexical features.
    Page 4, “Experimental setting”


Treebank

Appears in 12 sentences as: Treebank (8) treebank (4)
In Improving Parsing and PP Attachment Performance with Sense Information
  1. We devise a gold-standard sense- and parse tree-annotated dataset based on the intersection of the Penn Treebank and SemCor, and experiment with different approaches to both semantic representation and disambiguation.
    Page 1, “Abstract”
  2. Our approach to exploring the impact of lexical semantics on parsing performance is to take two state-of-the-art statistical treebank parsers and pre-process the inputs variously.
    Page 1, “Introduction”
  3. This simple method allows us to incorporate semantic information into the parser without having to reimplement a full statistical parser, and also allows for maximum comparability with existing results in the treebank parsing community.
    Page 1, “Introduction”
  4. We provide the first definitive results that word sense information can enhance Penn Treebank parser performance, building on earlier results of Bikel (2000) and Xiong et al.
    Page 2, “Introduction”
  5. Traditionally, the two parsers have been trained and evaluated over the WSJ portion of the Penn Treebank (PTB: Marcus et al.
    Page 2, “Background”
  6. We diverge from this norm in focusing exclusively on a sense-annotated subset of the Brown Corpus portion of the Penn Treebank, in order to investigate the upper bound performance of the models given gold-standard sense information.
    Page 2, “Background”
  7. most closely related research is that of Bikel (2000), who merged the Brown portion of the Penn Treebank with SemCor (similarly to our approach in Section 4.1), and used this as the basis for evaluation of a generative bilexical model for joint WSD and parsing.
    Page 3, “Background”
  8. (2005) experimented with first-sense and hypernym features from HowNet and CiLin (both WordNets for Chinese) in a generative parse model applied to the Chinese Penn Treebank.
    Page 3, “Background”
  9. Based on gold-standard sense information, they achieved large-scale improvements over a basic parse selection model in the context of the Hinoki treebank.
    Page 3, “Background”
  10. The only publicly-available resource with these two characteristics at the time of this work was the subset of the Brown Corpus that is included in both SemCor (Landes et al., 1998) and the Penn Treebank (PTB). This provided the basis of our dataset.
    Page 4, “Experimental setting”
  11. OntoNotes (Hovy et al., 2006) includes large-scale treebank and (selective) sense data, which we plan to use for future experiments when it becomes fully available.
    Page 4, “Experimental setting”


lexical semantics

Appears in 11 sentences as: lexical semantic (3) Lexical semantics (1) lexical semantics (7)
In Improving Parsing and PP Attachment Performance with Sense Information
  1. Our approach to exploring the impact of lexical semantics on parsing performance is to take two state-of-the-art statistical treebank parsers and pre-process the inputs variously.
    Page 1, “Introduction”
  2. Given our simple procedure for incorporating lexical semantics into the parsing process, our hope is that this research will open the door to further gains using more sophisticated parsing models and richer semantic options.
    Page 2, “Introduction”
  3. This research is focused on applying lexical semantics in parsing and PP attachment tasks.
    Page 2, “Background”
  4. Lexical semantics in parsing
    Page 2, “Background”
  5. Other notable examples of the successful incorporation of lexical semantics into parsing, not through word sense information but indirectly via selectional preferences, are Dowding et al.
    Page 3, “Background”
  6. With any lexical semantic resource, we have to be careful to choose the appropriate level of granularity for a given task: if we limit ourselves to synsets we will not be able to capture broader generalisations, such as the one between knife and scissors; on the other hand by grouping words related at a higher level in the hierarchy we could find that we make overly coarse groupings (e.g.
    Page 3, “Integrating Semantics into Parsing”
  7. The performance gain obtained here is larger than in parsing, which is in accordance with the findings of Stetina and Nagao that lexical semantics has a considerable effect on PP attachment
    Page 6, “Results”
  8. The fact that the improvement is larger for PP attachment than for full parsing is suggestive of PP attachment being a parsing subtask where lexical semantic information is particularly important, supporting the findings of Stetina and Nagao (1997) over a standalone PP attachment task.
    Page 7, “Discussion”
  9. Our hope is that this paper serves as the bridgehead for a new line of research into the impact of lexical semantics on parsing.
    Page 8, “Discussion”
  10. This simple method allows us to incorporate lexical semantic information into the parser, without having to reimplement a full statistical parser.
    Page 8, “Conclusions”
  11. The results are highly significant in demonstrating that a simplistic approach to incorporating lexical semantics into a parser significantly improves parser performance.
    Page 8, “Conclusions”


synsets

Appears in 10 sentences as: synset (3) synsets (7)
In Improving Parsing and PP Attachment Performance with Sense Information
  1. Our choice for this work was the WordNet 2.1 lexical database, in which synonyms are grouped into synsets, which are then linked via an ISA hierarchy.
    Page 3, “Integrating Semantics into Parsing”
  2. With any lexical semantic resource, we have to be careful to choose the appropriate level of granularity for a given task: if we limit ourselves to synsets we will not be able to capture broader generalisations, such as the one between knife and scissors; on the other hand by grouping words related at a higher level in the hierarchy we could find that we make overly coarse groupings (e.g.
    Page 3, “Integrating Semantics into Parsing”
  3. In WordNet 2.1, knife and scissors are sister synsets, both of which have TOOL as their 4th hypernym.
    Page 3, “Integrating Semantics into Parsing”
  4. As mentioned above, words in WordNet are organised into sets of synonyms, called synsets.
    Page 5, “Experimental setting”
  5. Each synset in turn belongs to a unique semantic file (SF).
    Page 5, “Experimental setting”
  6. We experiment with both full synsets and SFs as instances of fine-grained and coarse-grained semantic representation, respectively.
    Page 5, “Experimental setting”
  7. As an example of the difference in these two representations, knife in its tool sense is in the EDGE TOOL USED AS A CUTTING INSTRUMENT singleton synset, and also in the ARTIFACT SF along with thousands of other words including cutter.
    Page 5, “Experimental setting”
  8. In the case of SFs, we perform full synset WSD based on one of the above options, and then map the prediction onto the corresponding (unique) SF.
    Page 5, “Experimental setting”
  9. In this case, synsets slightly outperform SF.
    Page 7, “Results”
  10. between the two extremes of full synsets and SFs.
    Page 8, “Discussion”
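
Sentence 7's knife example makes the granularity contrast easy to reproduce, and the synset-to-SF mapping of sentence 8 is just a lexname() lookup. A short sketch, assuming NLTK's WordNet 3.0 (whose sense inventory is close to the paper's 2.1):

    # The two granularity extremes for 'knife' (tool sense), cf. sentence 7.
    from nltk.corpus import wordnet as wn

    knife = wn.synset("knife.n.01")
    print(knife.definition())    # fine-grained: 'edge tool used as a cutting instrument...'
    print(knife.lemma_names())   # the (near-singleton) synset membership
    print(knife.lexname())       # coarse-grained: 'noun.artifact', the ARTIFACT SF

    # sentence 8: a full-synset WSD prediction maps onto its unique SF
    def synset_to_sf(synset):
        return synset.lexname()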


Penn Treebank

Appears in 8 sentences as: Penn Treebank (8)
In Improving Parsing and PP Attachment Performance with Sense Information
  1. We devise a gold-standard sense- and parse tree-annotated dataset based on the intersection of the Penn Treebank and SemCor, and experiment with different approaches to both semantic representation and disambiguation.
    Page 1, “Abstract”
  2. We provide the first definitive results that word sense information can enhance Penn Treebank parser performance, building on earlier results of Bikel (2000) and Xiong et al.
    Page 2, “Introduction”
  3. Traditionally, the two parsers have been trained and evaluated over the WSJ portion of the Penn Treebank (PTB: Marcus et al.
    Page 2, “Background”
  4. We diverge from this norm in focusing exclusively on a sense-annotated subset of the Brown Corpus portion of the Penn Treebank, in order to investigate the upper bound performance of the models given gold-standard sense information.
    Page 2, “Background”
  5. most closely related research is that of Bikel (2000), who merged the Brown portion of the Penn Treebank with SemCor (similarly to our approach in Section 4.1), and used this as the basis for evaluation of a generative bilexical model for joint WSD and parsing.
    Page 3, “Background”
  6. (2005) experimented with first-sense and hypernym features from HowNet and CiLin (both WordNets for Chinese) in a generative parse model applied to the Chinese Penn Treebank.
    Page 3, “Background”
  7. The only publicly-available resource with these two characteristics at the time of this work was the subset of the Brown Corpus that is included in both SemCor (Landes et al., 1998) and the Penn Treebank (PTB). This provided the basis of our dataset.
    Page 4, “Experimental setting”
  8. As far as we know, these are the first results over both WordNet and the Penn Treebank to show that semantic processing helps parsing.
    Page 8, “Conclusions”


significant improvement

Appears in 7 sentences as: significant improvement (3) significant improvements (3) significantly improves (1)
In Improving Parsing and PP Attachment Performance with Sense Information
  1. This paper shows that semantic classes help to obtain significant improvement in both parsing and PP attachment tasks.
    Page 1, “Abstract”
  2. This paper shows that semantic classes help to obtain significant improvements for both PP attachment and parsing.
    Page 2, “Introduction”
  3. The results are notable in demonstrating that very simple preprocessing of the parser input facilitates significant improvements in parser performance.
    Page 2, “Introduction”
  4. The combination of word sense and first-level hypernyms produced a significant improvement over their basic model.
    Page 3, “Background”
  5. The results are similar to 1ST, with significant improvements for verbs.
    Page 7, “Results”
  6. This paper shows that semantic classes achieve significant improvement both on full parsing and PP attachment tasks relative to the baseline parsers.
    Page 8, “Conclusions”
  7. The results are highly significant in demonstrating that a simplistic approach to incorporating lexical semantics into a parser significantly improves parser performance.
    Page 8, “Conclusions”


statistically significant

Appears in 6 sentences as: Statistical significance (1) statistical significance (2) statistically significant (3)
In Improving Parsing and PP Attachment Performance with Sense Information
  1. We use Bikel’s randomized parsing evaluation comparator (with p < 0.05 throughout) to test the statistical significance of the results using word sense information, relative to the respective baseline parser using only lexical features.
    Page 4, “Experimental setting”
  2. Statistical significance was calculated based on
    Page 4, “Experimental setting”
  3. These results are statistically significant in some cases (as indicated by *).
    Page 6, “Results”
  4. As in full-parsing, Bikel outperforms Charniak, but in this case the difference in the baselines is not statistically significant.
    Page 6, “Results”
  5. As was the case for parsing, the performance with 1ST reaches and in many instances surpasses gold-standard levels, achieving statistical significance over the baseline in places.
    Page 6, “Results”
  6. The improvement in PP attachment was larger (20.5% ERR), and also statistically significant.
    Page 7, “Discussion”
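
Bikel's comparator implements a stratified shuffling (randomisation) test over per-sentence scores. The sketch below is a generic paired randomisation test in the same spirit, not his implementation; a faithful version would shuffle per-sentence bracket counts and recompute bracketing F-score rather than summing per-sentence values.

    # Generic paired randomisation test (a sketch of the idea only).
    import random

    def randomisation_test(scores_a, scores_b, trials=10000):
        """p-value for the difference between two systems' total scores."""
        observed = abs(sum(scores_a) - sum(scores_b))
        hits = 0
        for _ in range(trials):
            diff = 0.0
            for a, b in zip(scores_a, scores_b):
                if random.random() < 0.5:  # randomly swap the paired scores
                    a, b = b, a
                diff += a - b
            if abs(diff) >= observed:
                hits += 1
        return (hits + 1) / (trials + 1)

    # significant at the paper's threshold if the returned p-value < 0.05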


gold standard

Appears in 6 sentences as: Gold standard (1) gold standard (5)
In Improving Parsing and PP Attachment Performance with Sense Information
  1. This is needed to derive the fact that there are two possible attachment sites, as well as information about the lexical phrases, which are typically extracted heuristically from gold standard parses.
    Page 2, “Background”
  2. Note that there is no guarantee of agreement in the quadruple membership between the extraction script and the gold standard, as the parser may have produced a parse which is incompatible with either attachment possibility.
    Page 4, “Experimental setting”
  3. A quadruple is deemed correct if: (1) it exists in the gold standard, and (2) the attachment decision is correct.
    Page 4, “Experimental setting”
  4. Conversely, it is deemed incorrect if: (1) it exists in the gold standard, and (2) the attachment decision is incorrect.
    Page 4, “Experimental setting”
  5. Quadruples not found in the gold standard are discarded.
    Page 4, “Experimental setting”
  6. 5.1 Gold standard
    Page 6, “Results”
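
Sentences 3-5 fully specify how extracted quadruples are scored against the gold standard; a direct transcription (data structures and names ours) is:

    # Score PP attachment quadruples against the gold standard, following
    # sentences 3-5 above. Keys are (v, n1, p, n2) tuples, values are the
    # attachment decisions ('v' or 'n1'); all names here are ours.
    def score_quadruples(system, gold):
        correct = incorrect = discarded = 0
        for quad, decision in system.items():
            if quad not in gold:
                discarded += 1   # not found in the gold standard: discarded
            elif decision == gold[quad]:
                correct += 1     # in the gold standard, attachment correct
            else:
                incorrect += 1   # in the gold standard, attachment incorrect
        return correct, incorrect, discarded

    gold = {("ate", "pizza", "with", "fork"): "v"}
    print(score_quadruples({("ate", "pizza", "with", "fork"): "v"}, gold))
    # (1, 0, 0)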


F-score

Appears in 6 sentences as: F-score (7)
In Improving Parsing and PP Attachment Performance with Sense Information
  1. We evaluate the parsers via labelled bracketing recall (R), precision (P) and F-score (F1).
    Page 4, “Experimental setting”
  2. The SFU representation produces the best results for Bikel (F-score 0.010 above baseline), while for Charniak the best performance is obtained with word+SF (F-score 0.007 above baseline).
    Page 6, “Results”
  3. Overall, Bikel obtains a superior F-score in all configurations.
    Page 6, “Results”
  4. Again, the F-score for the semantic representations is better than the baseline in all cases.
    Page 6, “Results”
  5. Bikel outperforms Charniak in terms of F-score in all cases.
    Page 7, “Results”
  6. Table 8 summarises the results, showing that the error reduction rate (ERR) over the parsing F-score is up to 6.9%, which is remarkable given the relatively superficial strategy for incorporating sense information into the parser.
    Page 7, “Discussion”
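
For reference, labelled bracketing F-score is the harmonic mean of precision and recall, and the error reduction rate (ERR) of sentence 6 can be read as the relative reduction in (1 - F). Both that reading and the figures below are our assumptions, not numbers taken from the paper:

    # Labelled bracketing F-score and error reduction rate (ERR).
    # Reading ERR as the relative reduction in (1 - F) is our assumption;
    # the baseline/enriched figures below are purely illustrative.
    def f_score(precision, recall):
        return 2 * precision * recall / (precision + recall)

    def err(f_base, f_new):
        return (f_new - f_base) / (1.0 - f_base)

    f_base, f_new = 0.842, 0.853            # hypothetical parser F-scores
    print(round(err(f_base, f_new), 3))     # 0.07, i.e. roughly a 7% ERR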


parse tree

Appears in 5 sentences as: parse tree (3) parse trees (2)
In Improving Parsing and PP Attachment Performance with Sense Information
  1. Traditionally, parse disambiguation has relied on structural features extracted from syntactic parse trees, and made only limited use of semantic information.
    Page 1, “Introduction”
  2. While a detailed description of the respective parsing models is beyond the scope of this paper, it is worth noting that both parsers induce a context free grammar as well as a generative parsing model from a training set of parse trees, and use a development set to tune internal parameters.
    Page 2, “Background”
  3. One of the main requirements for our dataset is the availability of gold-standard sense and parse tree annotations.
    Page 4, “Experimental setting”
  4. The gold-standard parse tree annotations are required in order to carry out evaluation of parser and PP attachment performance.
    Page 4, “Experimental setting”
  5. Following Atterer and Schütze (2007), we wrote a script that, given a parse tree, identifies instances of PP attachment ambiguity and outputs the (v, n1, p, n2) quadruple involved and the attachment decision.
    Page 4, “Experimental setting”
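
The extraction script of sentence 5 is not reproduced in the paper; the sketch below shows only the simplest verb-attachment pattern it must recognise (a VP of the form V NP PP), with head-finding reduced to taking the rightmost noun. All names are ours, and the noun-attachment case is analogous.

    # Much-simplified quadruple extraction from a parse tree, in the spirit
    # of sentence 5; the authors' actual script is necessarily richer.
    from nltk import Tree

    def head_noun(subtree):
        nouns = [w for w, t in subtree.pos() if t.startswith("NN")]
        return nouns[-1] if nouns else None

    def extract_quadruples(tree):
        """Yield ((v, n1, p, n2), decision) for V NP PP configurations."""
        for vp in tree.subtrees(lambda st: st.label() == "VP"):
            kids = list(vp)
            if (len(kids) >= 3 and kids[0].label().startswith("VB")
                    and kids[1].label() == "NP" and kids[2].label() == "PP"):
                quad = (kids[0].leaves()[0], head_noun(kids[1]),
                        kids[2].leaves()[0], head_noun(kids[2]))
                yield quad, "v"  # PP is a sister of the verb: verb attachment
            # a PP inside the object NP would instead yield decision 'n1'

    t = Tree.fromstring("(S (NP (PRP I)) (VP (VBD ate) (NP (NN pizza))"
                        " (PP (IN with) (NP (DT a) (NN fork)))))")
    print(list(extract_quadruples(t)))
    # [(('ate', 'pizza', 'with', 'fork'), 'v')]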


parsing models

Appears in 5 sentences as: parse model (1) parsing model (1) parsing models (4)
In Improving Parsing and PP Attachment Performance with Sense Information
  1. Given our simple procedure for incorporating lexical semantics into the parsing process, our hope is that this research will open the door to further gains using more sophisticated parsing models and richer semantic options.
    Page 2, “Introduction”
  2. As our baseline parsers, we use two state-of-the-art lexicalised parsing models, namely the Bikel parser (Bikel, 2004) and Charniak parser (Charniak, 2000).
    Page 2, “Background”
  3. While a detailed description of the respective parsing models is beyond the scope of this paper, it is worth noting that both parsers induce a context free grammar as well as a generative parsing model from a training set of parse trees, and use a development set to tune internal parameters.
    Page 2, “Background”
  4. (2005) experimented with first-sense and hypernym features from HowNet and CiLin (both WordNets for Chinese) in a generative parse model applied to the Chinese Penn Treebank.
    Page 3, “Background”
  5. Tighter integration of semantics into the parsing models, possibly in the form of discriminative reranking models (Collins and Koo, 2005; Charniak and Johnson, 2005; McClosky et al., 2006), is a promising way forward in this regard.
    Page 8, “Discussion”


best results

Appears in 4 sentences as: best results (4)
In Improving Parsing and PP Attachment Performance with Sense Information
  1. The SFU representation produces the best results for Bikel (F-score 0.010 above baseline), while for Charniak the best performance is obtained with word+SF (F-score 0.007 above baseline).
    Page 6, “Results”
  2. For both parsers the best results are achieved with SFU, which was also the best configuration for parsing with Bikel.
    Page 6, “Results”
  3. Comparing the semantic representations, the best results are achieved with SFU, as we saw in the gold-standard PP-attachment case.
    Page 6, “Results”
  4. means that the best configuration for PP-attachment does not always produce the best results for parsing
    Page 8, “Discussion”


hypernym

Appears in 4 sentences as: hypernym (3) hypernyms (1)
In Improving Parsing and PP Attachment Performance with Sense Information
  1. (2005) experimented with first-sense and hypernym features from HowNet and CiLin (both WordNets for Chinese) in a generative parse model applied to the Chinese Penn Treebank.
    Page 3, “Background”
  2. The combination of word sense and first-level hypernyms produced a significant improvement over their basic model.
    Page 3, “Background”
  3. In WordNet 2.1, knife and scissors are sister synsets, both of which have TOOL as their 4th hypernym.
    Page 3, “Integrating Semantics into Parsing”
  4. Only by mapping them onto their 1st hypernym or higher would we be able to capture the semantic generalisation alluded to above.
    Page 3, “Integrating Semantics into Parsing”


fine-grained

Appears in 4 sentences as: fine-grained (4)
In Improving Parsing and PP Attachment Performance with Sense Information
  1. The more fine-grained our semantic representation, the higher the average polysemy and the greater the need to distinguish between these senses.
    Page 3, “Integrating Semantics into Parsing”
  2. Disambiguating each word relative to its context of use becomes increasingly difficult for fine-grained representations (Palmer et al., 2006).
    Page 3, “Integrating Semantics into Parsing”
  3. We experiment with both full synsets and SFs as instances of fine-grained and coarse-grained semantic representation, respectively.
    Page 5, “Experimental setting”
  4. We hypothesise that this is due to the avoidance of excessive fragmentation, as occurs with fine-grained senses.
    Page 6, “Results”


sense disambiguation

Appears in 3 sentences as: sense disambiguation (3)
In Improving Parsing and PP Attachment Performance with Sense Information
  1. use of the most frequent sense, and an unsupervised word sense disambiguation (WSD) system.
    Page 2, “Introduction”
  2. This problem of identifying the correct sense of a word in context is known as word sense disambiguation (WSD: Agirre and Edmonds (2006)).
    Page 3, “Integrating Semantics into Parsing”
  3. The results of the previous section show that the improvements in parsing results are small but significant, for all three word sense disambiguation strategies (gold-standard, 1ST and ASR).
    Page 7, “Discussion”
