Index of papers in Proc. ACL that mention
  • IME
Pantel, Patrick and Lin, Thomas and Gamon, Michael
Evaluation Methodology
For Model IM, we varied the number of user intents (K) in intervals from 100 to 400 (see Figure 3), under the assumption that multiple intents would exist per entity type.
Experimental Results
Further modeling the user intent in Model IM results in significantly better performance over all models and across all metrics.
Experimental Results
Model IM shows its biggest gains in the first position of its ranking, as evidenced by the Prec@1 metric.
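Prec@1 is simply the fraction of test queries whose top-ranked candidate is correct. A minimal sketch (the data and names below are toy examples, not from the paper):

```python
def precision_at_1(rankings, gold):
    """Fraction of queries whose top-ranked candidate equals the gold answer."""
    hits = sum(1 for q, ranked in rankings.items()
               if ranked and ranked[0] == gold[q])
    return hits / len(rankings)

# Toy example: two of three queries rank the gold answer first.
rankings = {
    "q1": ["a", "b"],
    "q2": ["c", "a"],
    "q3": ["b", "c"],
}
gold = {"q1": "a", "q2": "a", "q3": "b"}
print(precision_at_1(rankings, gold))  # 2/3
```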
Experimental Results
Table 2 reports results for Model IM using K = 200 user intents.
Joint Model of Types and User Intents
3.1 Intent-based Model (IM)
Joint Model of Types and User Intents
In this section we describe our main model, IM, illustrated in Figure 1.
Joint Model of Types and User Intents
Table 1: Model IM: Generative process for entity-bearing queries.
IME is mentioned in 19 sentences in this paper.
Topics mentioned in this paper:
Jia, Zhongye and Zhao, Hai
Abstract
An efficient input method engine (IME), of which pinyin-to-Chinese (PTC) conversion is the core part, is very important for Chinese language processing.
Abstract
Meanwhile, although typos are inevitable during pinyin input, existing IMEs have paid little attention to this major inconvenience.
Abstract
In this paper, motivated by a key equivalence of two decoding algorithms, we propose a joint graph model to globally optimize PTC and typo correction for IME.
Introduction
The daily life of Chinese people heavily depends on the Chinese input method engine (IME), no matter whether one is composing an Email, writing an article, or sending a text message.
Introduction
However, Chinese words cannot be typed into a computer or cellphone through direct one-to-one key-to-letter mapping; they have to go through an IME, since there are thousands of Chinese characters to input but only 26 letter keys on the keyboard.
Introduction
An IME is an essential software interface that maps English letter combinations into Chinese characters.
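At its simplest, the mapping an IME performs can be sketched as a lookup from a pinyin syllable to candidate characters; the table below is a toy example, not a real lexicon, and real IMEs rank candidates with language models:

```python
# Toy pinyin-to-character candidate table (hypothetical entries).
PINYIN_TABLE = {
    "ni": ["你", "尼", "泥"],
    "hao": ["好", "号", "毫"],
}

def candidates(pinyin_syllable):
    """Return candidate Chinese characters for one pinyin syllable."""
    return PINYIN_TABLE.get(pinyin_syllable, [])

print(candidates("ni"))  # ['你', '尼', '泥']
```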
IME is mentioned in 33 sentences in this paper.
Topics mentioned in this paper:
Tsvetkov, Yulia and Boytsov, Leonid and Gershman, Anatole and Nyberg, Eric and Dyer, Chris
Methodology
We define three main feature categories: (1) abstractness and imageability, (2) supersenses, (3) unsupervised vector-space word representations; each category corresponds to a group of features with a common theme and representation.
Methodology
• Abstractness and imageability.
Methodology
Abstractness and imageability were shown to be useful in detection of metaphors (it is easier to invoke mental pictures of concrete and imageable words) (Turney et al., 2011; Broadwell et al., 2013).
Model and Feature Extraction
Abstractness and imageability.
Model and Feature Extraction
The MRC psycholinguistic database is a large dictionary listing linguistic and psycholinguistic attributes obtained experimentally (Wilson, 1988). It includes, among other data, 4,295 words rated by degree of abstractness and 1,156 words rated by degree of imageability.
Model and Feature Extraction
(2013), we use a logistic regression classifier to propagate abstractness and imageability scores from MRC ratings to all words for which we have vector space representations.
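The propagation step can be sketched as a binary logistic regression trained on the rated words and applied to unrated ones. This is a from-scratch sketch on made-up 2-d "word vectors", not the authors' setup; the real MRC ratings and distributional vectors are far larger:

```python
import math

# Hypothetical 2-d word vectors; label 1 = abstract, 0 = concrete,
# standing in for MRC ratings.
rated = {
    "idea":   ([0.9, 0.1], 1),
    "theory": ([0.8, 0.2], 1),
    "table":  ([0.1, 0.9], 0),
    "stone":  ([0.2, 0.8], 0),
}
unrated = {"justice": [0.85, 0.15], "chair": [0.15, 0.85]}

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Plain stochastic gradient descent on the logistic loss.
w, b = [0.0, 0.0], 0.0
for _ in range(2000):
    for vec, y in rated.values():
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, vec)) + b)
        g = p - y  # gradient of the logistic loss w.r.t. the logit
        w = [wi - 0.5 * g * xi for wi, xi in zip(w, vec)]
        b -= 0.5 * g

# Propagate abstractness scores to the unrated words.
scores = {word: sigmoid(sum(wi * xi for wi, xi in zip(w, vec)) + b)
          for word, vec in unrated.items()}
print(scores["justice"] > 0.5, scores["chair"] < 0.5)  # True True
```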
IME is mentioned in 18 sentences in this paper.
Topics mentioned in this paper:
Mehdad, Yashar and Carenini, Giuseppe and Ng, Raymond T.
Experimental Setup
F: umm, I’m afraid apparant non-sequiturs are always a hazard of doing summaries ;-)
Experimental Setup
E: I’m just convulsing my thoughts to the irc log
Experimental Setup
umm, I’m afraid apparant non-sequiturs are always a hazard of doing summaries ;-)
Introduction
James had not ever had use for something like that so I’m not sure where I would graft that in.
Introduction
James said that I’m thinking about moving that to on-activation instead of on-startup anyway as it should still work for a main form - but i still wonder if the on-startup parameter issue should be considered a bug - as it shouldn’t choke.
Phrasal Query Abstraction Framework
- i’m willing to scrap it if there is a better schema hidden in gnue somewhere :)
IME is mentioned in 8 sentences in this paper.
Topics mentioned in this paper:
Li, Jianguo and Brew, Chris
Experiment Setup 4.1 Corpus
For example, Schulte im Walde (2000) uses 153 verbs in 30 classes, and Joanis et al.
Integration of Syntactic and Lexical Information
However, some of the function words, prepositions in particular, are known to carry a great amount of syntactic information that is related to the lexical meanings of verbs (Schulte im Walde, 2003; Brew and Schulte im Walde, 2002; Joanis et al., 2007).
Related Work
It is therefore unsurprising that much work on verb classification has adopted them as features (Schulte im Walde, 2000; Brew and Schulte im Walde, 2002; Korhonen et al., 2003).
Related Work
Trying to overcome the problem of data sparsity, Schulte im Walde (2000) explores the additional use of selectional preference features by augmenting each syntactic slot with the concept to which its head noun belongs in an ontology (e.g.
Related Work
Although the problem of data sparsity is alleviated to a certain extent, these features do not generally improve classification performance (Schulte im Walde, 2000; Joanis, 2002).
IME is mentioned in 5 sentences in this paper.
Topics mentioned in this paper:
Croce, Danilo and Moschitti, Alessandro and Basili, Roberto and Palmer, Martha
Model Analysis and Discussion
(IM(VB(target))(OBJ))
Model Analysis and Discussion
(VC(VB(target))(OBJ)) (VC(VBG(target))(OBJ)) (OPRD(TO)(IM(VB(target))(OBJ))) (PMOD(VBG(target))(OBJ))
Model Analysis and Discussion
(PRP(TO)(IM(VB(target))(OBJ)))
IME is mentioned in 5 sentences in this paper.
Topics mentioned in this paper:
Gyawali, Bikash and Gardent, Claire
Experimental Setup
We compare the results obtained with those obtained by two other systems participating in the KBGen challenge, namely the UDEL system, a symbolic rule based system developed by a group of students at the University of Delaware; and the IMS system, a statistical system using a probabilistic grammar induced from the training data.
Results and Discussion
System All Covered Coverage # Trees
IMS 0.12 0.12 100%
Results and Discussion
While both the IMS and the UDEL systems have full coverage, our BASE system strongly undergenerates, failing to account for 69.5% of the test data.
Results and Discussion
In terms of BLEU score, the best version of our system (AUTEXP) outperforms the probabilistic approach of IMS by a large margin (+0.17) and produces results similar to the fully handcrafted UDEL system (-().
IME is mentioned in 5 sentences in this paper.
Topics mentioned in this paper:
Berant, Jonathan and Dagan, Ido and Goldberger, Jacob
Learning Entailment Graph Edges
Thus, P(F_uv | G) = P(F_uv | I_uv).
Learning Entailment Graph Edges
P(G) = Π_{(u,v) ∈ V×V} P(I_uv).
Learning Entailment Graph Edges
First, Snow et al.’s model attempts to determine the graph that maximizes the likelihood P(F|G) and not the posterior P(G|F). Therefore, their model contains an edge prior P(I_uv) that has to be estimated, whereas in our model it cancels out.
IME is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Poon, Hoifung
Grounded Unsupervised Semantic Parsing
arrival_time).
Grounded Unsupervised Semantic Parsing
departure_time or ticket price fare.
Grounded Unsupervised Semantic Parsing
departure_time, and so the node state P:flight.
IME is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Reichart, Roi and Korhonen, Anna
Introduction
(O’Donovan et al., 2005; Schulte im Walde, 2006; Erk, 2007; Preiss et al., 2007; Van de Cruys, 2009; Reisinger and Mooney, 2011; Sun and Korhonen, 2011; Lippincott et al., 2012).
Introduction
Schulte im Walde et al.
Previous Work
K-means and spectral) algorithms (Schulte im Walde, 2006; Joanis et al., 2008; Sun et al., 2008; Li and Brew, 2008; Korhonen et al., 2008; Sun and Korhonen, 2009; Vlachos et al., 2009; Sun and Korhonen, 2011).
Previous Work
Finally, the model of Schulte im Walde et al.
IME is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Wang, Aobo and Kan, Min-Yen
Methodology
• Character 3-gram: C_k C_{k+1} C_{k+2} (i-3 < k < i+m
Methodology
• I_{m:n} (i-4 < m < n < i+4, 0 < n-m < 5) matches one entry in the Peking University dictionary:
Methodology
• (*) I_{m:n} (i-4 < m < n < i+4, 0 < n-m < 5) matches one entry in the informal word list:
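Substring-in-lexicon features of this kind can be sketched as follows: for each span around position i within the stated window and length bounds, fire a feature if the span is a lexicon entry. The lexicon entries and function names below are hypothetical:

```python
# Hypothetical lexicon standing in for the dictionary / informal word list.
LEXICON = {"新浪", "微博", "你好"}

def span_features(chars, i, window=4, max_len=5):
    """Return (m, n) spans near position i whose substring is in the lexicon."""
    feats = []
    lo, hi = max(0, i - window), min(len(chars), i + window)
    for m in range(lo, hi):
        for n in range(m + 1, min(m + max_len, hi) + 1):
            if "".join(chars[m:n]) in LEXICON:
                feats.append((m, n))
    return feats

chars = list("我爱新浪微博")
print(span_features(chars, 3))  # [(2, 4), (4, 6)]
```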
IME is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Kiela, Douwe and Hill, Felix and Korhonen, Anna and Clark, Stephen
Experimental Approach
Previous NLP-related work uses SIFT (Feng and Lapata, 2010; Bruni et al., 2012) or SURF (Roller and Schulte im Walde, 2013) descriptors for identifying points of interest in an image, quantified by 128-dimensional local descriptors.
Experimental Approach
The USP norms have been used in many previous studies to evaluate semantic representations (Andrews et al., 2009; Feng and Lapata, 2010; Silberer and Lapata, 2012; Roller and Schulte im Walde, 2013).
Introduction
Such models extract information about the perceptible characteristics of words from data collected in property norming experiments (Roller and Schulte im Walde, 2013; Silberer and Lapata, 2012) or directly from ‘raw’ data sources such as images (Feng and Lapata, 2010; Bruni et al., 2012).
Introduction
Multi-modal models outperform language-only models on a range of tasks, including modelling conceptual association and predicting com-positionality (Bruni et al., 2012; Silberer and Lapata, 2012; Roller and Schulte im Walde, 2013).
IME is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Schulte im Walde, Sabine and Hying, Christian and Scheible, Christian and Schmid, Helmut
Introduction
Up to now, such classifications have been used in applications such as word sense disambiguation (Dorr and Jones, 1996; Kohomban and Lee, 2005), machine translation (Prescher et al., 2000; Koehn and Hoang, 2007), document classification (Klavans and Kan, 1998), and in statistical lexical acquisition in general (Rooth et al., 1999; Merlo and Stevenson, 2001; Korhonen, 2002; Schulte im Walde, 2006).
Related Work
Two large-scale approaches of this kind are Schulte im Walde (2006), who used k-means on verb subcategorisation frames and verbal arguments to cluster verbs semantically, and Joanis et al.
Related Work
To the best of our knowledge, Schulte im Walde (2006) is the only hard-clustering approach that previously incorporated selectional preferences as verb features.
IME is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Wang, Lu and Cardie, Claire
Conclusion
SVM-DA: and um Im not sure about the buttons being in the shape of fruit though.
Introduction
A: and um I’m not sure about the buttons being in the shape of fruit though.
Introduction
D: Um like I’m just thinking bright colours.
IME is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Weller, Marion and Fraser, Alexander and Schulte im Walde, Sabine
Conclusion
This work was funded by the DFG Research Project Distributional Approaches to Semantic Relatedness (Marion Weller), the DFG Heisenberg Fellowship SCHU-2580/1-1 (Sabine Schulte im Walde), as well as by the Deutsche Forschungsgemeinschaft grant Models of Morphosyntax for Statistical Machine Translation (Alexander Fraser).
Experiments and evaluation
(2013); the newspaper data (HGC - Huge German Corpus) was parsed with Schmid (2000), and subcategorization information was extracted as described in Schulte im Walde (2002b).
Using subcategorization information
Briscoe and Carroll (1997) for English; Sarkar and Zeman (2000) for Czech; Schulte im Walde (2002a) for German; Messiant (2008) for French.
IME is mentioned in 3 sentences in this paper.
Topics mentioned in this paper:
Kawahara, Daisuke and Peterson, Daniel W. and Palmer, Martha
Introduction
Most of these approaches assume that all target verbs are monosemous (Stevenson and Joanis, 2003; Schulte im Walde, 2006; Joanis et al., 2008; Li and Brew, 2008; Sun et al., 2008; Sun and Korhonen, 2009; Vlachos et al., 2009; Parisien and Stevenson, 2010; Parisien and Stevenson, 2011; Falk et al., 2012; Lippincott et al., 2012; Reichart and Korhonen, 2013; Sun et al., 2013).
Introduction
Moreover, to the best of our knowledge, none of the following approaches attempt to quantitatively evaluate soft clusterings of verb classes induced by polysemy-aware unsupervised approaches (Korhonen et al., 2003; Lapata and Brew, 2004; Li and Brew, 2007; Schulte im Walde et al., 2008).
Related Work
Schulte im Walde et al.
IME is mentioned in 3 sentences in this paper.
Topics mentioned in this paper: