A Unified Semantic Representation | However, traditional forms of word sense disambiguation are difficult for short texts and single words because little or no contextual information is present to perform the disambiguation task. |
A Unified Semantic Representation | alignment-based sense disambiguation that leverages the content of the paired item in order to disambiguate each element. |
A Unified Semantic Representation | Leveraging the paired item enables our approach to disambiguate where traditional sense disambiguation methods cannot due to insufficient context. |
Experiment 1: Textual Similarity | In addition, the system utilizes techniques such as Explicit Semantic Analysis (Gabrilovich and Markovitch, 2007) and makes use of resources such as Wiktionary and Wikipedia, a lexical substitution system based on supervised word sense disambiguation (Biemann, 2013), and a statistical machine translation system. |
Experiment 2: Word Similarity | Our alignment-based sense disambiguation transforms the task of comparing individual words into that of calculating the similarity of the best-matching sense pair across the two words. |
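The best-matching-sense-pair idea above can be sketched as follows; this is a minimal illustration, assuming each word is represented by a set of sense vectors compared with cosine similarity (the toy vectors are not the authors' actual representation):

```python
# A minimal sketch, assuming each word is represented by a set of sense
# vectors (toy values below) and senses are compared by cosine similarity.
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def word_similarity(senses_a, senses_b):
    """Word similarity as the similarity of the best-matching sense pair."""
    return max(cosine(sa, sb) for sa in senses_a for sb in senses_b)

# Hypothetical sense vectors: "bank" (financial vs. riverside) and "river".
bank = [[1.0, 0.0, 0.2],   # financial-institution sense
        [0.1, 0.9, 0.8]]   # riverside sense
river = [[0.0, 1.0, 0.7]]

best = word_similarity(bank, river)  # driven by the riverside sense of "bank"
```

Taking the maximum over all cross-word sense pairs is what lets the comparison ignore the irrelevant (financial) sense of "bank" here.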
Related Work | However, unlike our approach, their method does not perform sense disambiguation prior to building the representation and therefore potentially suffers from ambiguity. |
Abstract | This paper proposes a novel smoothing model with a combinatorial optimization scheme for all-words word sense disambiguation from untagged corpora. |
Discussion | This means that the ranks of the sense candidates for each word were frequently altered across iterations, which in turn means that new information, unavailable earlier, was progressively delivered to the sense disambiguation of each word. |
Discussion | From these results, we could confirm the expected sense-interdependency effect, in which the sense disambiguation of one word affects that of other words. |
Discussion | In our method, the sense disambiguation of a word is guided by extrapolation (smoothing) from its nearby words. |
Introduction | Word Sense Disambiguation (WSD) is the task of identifying the intended sense of a word based on its context. |
Introduction | (2002) observed, the domain of the text that a word occurs in is a useful signal for performing word sense disambiguation (e.g. |
Introduction | We operate under the framework of phrase sense disambiguation (Carpuat and Wu, 2007), in which we automatically align parallel data in an old domain to generate an initial old-domain sense inventory. |
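The initial old-domain sense inventory described above can be sketched as a simple aggregation over word-aligned phrase pairs; treating each distinct translation as one candidate sense, and the toy French pairs, are assumptions for illustration, not the paper's actual pipeline:

```python
# A minimal sketch, assuming word-aligned (source, target) phrase pairs and
# treating each distinct target translation as one candidate sense.
# The French pairs are toy data, not the paper's corpus.
from collections import defaultdict, Counter

def build_sense_inventory(aligned_pairs):
    """Map each source phrase to counts of its aligned translations;
    the distinct translations form the initial sense inventory."""
    inventory = defaultdict(Counter)
    for source, target in aligned_pairs:
        inventory[source][target] += 1
    return inventory

pairs = [("bank", "banque"), ("bank", "rive"),
         ("bank", "banque"), ("plant", "usine")]
inv = build_sense_inventory(pairs)  # inv["bank"] has two candidate senses
```

The counts give a natural prior over senses, which is what a PSD classifier over the known inventory can later refine with context.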
New Sense Indicators | Towards this end, first, we pose the problem as a phrase sense disambiguation (PSD) problem over the known sense inventory. |
Related Work | While word senses have been studied extensively in lexical semantics, research has focused on word sense disambiguation, the task of disambiguating words in context given a predefined sense inventory (e.g., Agirre and Edmonds (2006)), and word sense induction, the task of learning sense inventories from text (e.g., Agirre and Soroa (2007)). |
Related Work | Chan and Ng (2007) notably show that detecting changes in predominant sense as modeled by domain sense priors can improve sense disambiguation, even after performing adaptation using active learning. |
Introduction | Several NLP tasks, such as word sense disambiguation, word sense induction, and named entity disambiguation, address this ambiguity problem to varying degrees. |
Related Work | the well studied problems of named entity disambiguation (NED) and word sense disambiguation (WSD). |
Related Work | Both named entity and word sense disambiguation are extensively studied, and surveys on each are available (Nadeau and Sekine, 2007; Navigli, 2009). |
Conclusions | Beyond the immediate usability of its output and its effective use for domain Word Sense Disambiguation (Faralli and Navigli, 2012), we wish to show the benefit of GlossBoot in gloss-driven approaches to ontology learning (Navigli et al., 2011; Velardi et al., 2013) and semantic network enrichment (Navigli and Ponzetto, 2012). |
Introduction | Interestingly, electronic glossaries have been shown to be key resources not only for humans, but also in Natural Language Processing (NLP) tasks such as Question Answering (Cui et al., 2007), Word Sense Disambiguation (Duan and Yates, 2010; Faralli and Navigli, 2012) and ontology learning (Navigli et al., 2011; Velardi et al., 2013). |
Related Work | and Curran, 2008; McIntosh and Curran, 2009), learning semantic relations (Pantel and Pennacchiotti, 2006), extracting surface text patterns for open-domain question answering (Ravichandran and Hovy, 2002), semantic tagging (Huang and Riloff, 2010) and unsupervised Word Sense Disambiguation (Yarowsky, 1995). |
Abstract | Current approaches for word sense disambiguation and translation selection typically require lexical resources or large bilingual corpora with rich information fields and annotations, which are often infeasible for under-resourced languages. |
Introduction | Word sense disambiguation (WSD) is the task of assigning sense tags to ambiguous lexical items in a text. |
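Sense tagging as defined above can be sketched with a classic Lesk-style overlap heuristic; the two-sense gloss inventory for "bank" is hypothetical, and gloss overlap is a textbook stand-in rather than the specific method of the cited work:

```python
# A minimal Lesk-style sketch: tag a word with the sense whose gloss shares
# the most words with the surrounding context. The gloss inventory below is
# a hypothetical two-sense example, not a real resource.
def lesk(context_words, glosses):
    """Return the sense id whose gloss overlaps the context most."""
    context = set(context_words)
    return max(glosses, key=lambda s: len(context & set(glosses[s].split())))

GLOSSES = {
    "bank.finance": "an institution that accepts deposits and lends money",
    "bank.river": "the sloping land beside a body of water",
}

sense = lesk("interest on money deposits at the bank".split(), GLOSSES)
```

Here the words "money" and "deposits" pull the decision toward the financial sense; richer contexts and sense inventories refine the same basic idea.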
Introduction | It can also be viewed as a simplified version of the Cross-Lingual Lexical Substitution (Mihalcea et al., 2010) and Cross-Lingual Word Sense Disambiguation (Lefever and Hoste, 2010) tasks, as defined in SemEval-2010. |
Clustering for Sentiment Analysis | by using automatic/manual sense disambiguation techniques. |
Discussions | The sense disambiguation accuracy of the same approach would have been lower in a cross-domain setting. |
Introduction | WordNets are primarily used to address the problem of word sense disambiguation. |