Index of papers in Proc. ACL 2014 that mention
  • syntactic context
Tang, Duyu and Wei, Furu and Yang, Nan and Zhou, Ming and Liu, Ting and Qin, Bing
Abstract
Most existing algorithms for learning continuous word representations typically only model the syntactic context of words but ignore the sentiment of text.
Abstract
This is problematic for sentiment analysis as they usually map words with similar syntactic context but opposite sentiment polarity, such as good and bad, to neighboring word vectors.
Introduction
The most serious problem is that traditional methods typically model the syntactic context of words but ignore the sentiment information of text.
Related Work
Collobert et al. (2011) introduce the C&W model to learn word embedding based on the syntactic contexts of words.
Related Work
The C&W model learns word embedding by modeling syntactic contexts of words but ignoring sentiment information.
Related Work
By contrast, SSWEh and SSWEr learn sentiment-specific word embedding by integrating the sentiment polarity of sentences but leaving out the syntactic contexts of words.
syntactic context is mentioned in 10 sentences in this paper.
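As a minimal sketch of why context-only objectives conflate good and bad, here is a hedged, toy version of the C&W-style ranking loss mentioned in the excerpts above (the vocabulary, dimensions, and linear scorer are invented for illustration, not taken from the paper):

```python
# Toy C&W-style ranking objective (illustrative, not the authors' code):
# score a true window above the same window with a corrupted centre word.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "movie", "was", "good", "bad", "really"]
idx = {w: i for i, w in enumerate(vocab)}
dim = 8
E = rng.normal(scale=0.1, size=(len(vocab), dim))  # word embeddings
w = rng.normal(scale=0.1, size=3 * dim)            # linear scorer over a 3-word window

def score(window):
    return w @ np.concatenate([E[idx[t]] for t in window])

def hinge_loss(window):
    corrupt = list(window)
    corrupt[1] = vocab[rng.integers(len(vocab))]   # swap in a random centre word
    # a full trainer would backpropagate this loss into E and w; omitted here
    return max(0.0, 1.0 - score(window) + score(corrupt))

# the loss never looks at sentiment, so windows around "good" and "bad"
# are treated identically whenever their context words match
print(hinge_loss(("was", "good", "really")))
print(hinge_loss(("was", "bad", "really")))
```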
Hermann, Karl Moritz and Das, Dipanjan and Weston, Jason and Ganchev, Kuzman
Abstract
We present a novel technique for semantic frame identification using distributed representations of predicates and their syntactic context; this technique leverages automatic syntactic parses and a generic set of word embeddings.
Abstract
Given labeled data annotated with frame-semantic parses, we learn a model that projects the set of word representations for the syntactic context around a predicate to a low dimensional representation.
Frame Identification with Embeddings
First, we extract the words in the syntactic context of runs; next, we concatenate their word embeddings as described in §2.2 to create an initial vector space representation.
Frame Identification with Embeddings
Formally, let x represent the actual sentence with a marked predicate, along with the associated syntactic parse tree; let our initial representation of the predicate context be g(x). Suppose that the word embeddings we start with are of dimension n. Then g is a function from a parsed sentence x to ℝ^{nk}, where k is the number of possible syntactic context types.
Overview
We use word embeddings to represent the syntactic context of a particular predicate instance as a vector.
Overview
We could represent the syntactic context of runs as a vector with blocks for all the possible dependents warranted by a syntactic parser; for example, we could assume that positions 0 … n−1 of the vector correspond to the first possible dependent type, and so on.
syntactic context is mentioned in 6 sentences in this paper.
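A hedged sketch of the block representation g(x) described in the excerpts above, assuming invented dependency labels, toy embeddings, and an example sentence of my own: one n-dimensional block per possible syntactic context type, filled with the embedding of the word occupying that slot and zeros elsewhere, giving a vector in ℝ^{nk}:

```python
# Illustrative block representation g(x) in R^{nk} (assumptions, not the
# paper's code): k syntactic context types, each owning an n-dim block.
import numpy as np

n = 4                                          # embedding dimension (toy)
context_types = ["nsubj", "dobj", "prep_in"]   # k = 3 hypothetical slot types
k = len(context_types)
emb = {"he": np.ones(n) * 0.1, "marathon": np.ones(n) * 0.2}

def g(context):
    # context maps a dependency label to the word filling that slot
    blocks = np.zeros((k, n))
    for j, label in enumerate(context_types):
        if label in context:
            blocks[j] = emb[context[label]]
    return blocks.reshape(k * n)               # concatenated vector in R^{nk}

# syntactic context of the predicate "runs" in a sentence like
# "He runs a marathon" (example sentence is mine)
print(g({"nsubj": "he", "dobj": "marathon"}))
```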
Paperno, Denis and Pham, Nghia The and Baroni, Marco
Compositional distributional semantics
Another issue is that the same or similar items that occur in different syntactic contexts are assigned different semantic types with incomparable representations.
Compositional distributional semantics
Besides losing the comparability of the semantic contribution of a word across syntactic contexts, we also worsen the data sparseness issues.
The practical lexical function model
We may still want to represent word meanings in different syntactic contexts differently, but at the same time we need to incorporate a formal connection between those representations, e.g., between the transitive and the intransitive instantiations of the verb to eat.
The practical lexical function model
To determine the number and ordering of matrices representing the word in the current syntactic context, our plf implementation relies on the syntactic type assigned to the word in the categorial grammar parse of the sentence.
The practical lexical function model
Table 4: The verb to eat associated with different sets of matrices in different syntactic contexts.
syntactic context is mentioned in 5 sentences in this paper.
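A toy sketch of the practical lexical function (plf) idea the excerpts describe, under my own simplifying assumptions: a verb keeps one shared vector plus one matrix per syntactic argument slot, so its transitive and intransitive instantiations remain formally connected through the shared parts:

```python
# Toy plf sketch (my assumptions, not the authors' implementation): a
# verb has one shared vector plus one matrix per syntactic argument slot.
import numpy as np

rng = np.random.default_rng(1)
d = 5
v = {w: rng.normal(size=d) for w in ["eat", "dog", "meat"]}
eat_subj = rng.normal(size=(d, d))   # subject-slot matrix, shared by both uses
eat_obj = rng.normal(size=(d, d))    # object-slot matrix, used only transitively

def plf(verb_vec, mats_and_args):
    # composed meaning = verb vector + sum over slots of matrix @ argument
    return verb_vec + sum(M @ v[arg] for M, arg in mats_and_args)

intrans = plf(v["eat"], [(eat_subj, "dog")])                   # "dogs eat"
trans = plf(v["eat"], [(eat_subj, "dog"), (eat_obj, "meat")])  # "dogs eat meat"
# the shared vector and subject matrix are the formal connection between
# the transitive and intransitive instantiations of "eat"
print(np.linalg.norm(trans - intrans))
```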
Parikh, Ankur P. and Cohen, Shay B. and Xing, Eric P.
Abstract
Although x and x′ are not identical, it is likely that Σ_{x′}(2, 3) is similar to Σ_x(1, 2) because the determiner and the noun appear in similar syntactic context.
Abstract
Σ_{x′}(5, 7) also may be somewhat similar, but Σ_{x′}(2, 7) should not be very similar to Σ_x(1, 2) because the noun and the determiner appear in a different syntactic context.
Abstract
The observation that the covariance matrices depend on local syntactic context is the main driving force behind our solution.
syntactic context is mentioned in 4 sentences in this paper.
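To make the Σ_x(i, j) notation above concrete, here is a loose sketch that swaps in an empirical cross-covariance over a toy two-sentence corpus (the paper's model-based definition differs; embeddings and data here are invented):

```python
# Hedged sketch: an empirical cross-covariance between the embeddings of
# the words at positions i and j, as a stand-in for Sigma(i, j).
import numpy as np

rng = np.random.default_rng(2)
n = 3                                    # embedding dimension (toy)
emb = {w: rng.normal(size=n) for w in ["the", "a", "dog", "cat", "barked"]}
corpus = [["the", "dog", "barked"], ["a", "cat", "barked"]]

def sigma(corpus, i, j):
    # centre the position-i and position-j embeddings, then cross-covary
    A = np.array([emb[s[i]] for s in corpus])
    B = np.array([emb[s[j]] for s in corpus])
    A = A - A.mean(axis=0)
    B = B - B.mean(axis=0)
    return A.T @ B / len(corpus)

# positions 0 and 1 form a determiner-noun pair: a local syntactic context
print(sigma(corpus, 0, 1))
```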
Lei, Tao and Xin, Yu and Zhang, Yuan and Barzilay, Regina and Jaakkola, Tommi
Introduction
Our low-dimensional embeddings are tailored to the syntactic context of words (head, modifier).
Results
More interestingly, we can consider the impact of syntactic context on the derived projections.
Results
The two bottom parts of the table demonstrate how the projections change depending on the syntactic context of the word.
syntactic context is mentioned in 3 sentences in this paper.
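A simplified sketch of projections tailored to syntactic context, in the spirit of the excerpts above: the matrices U and V below are hypothetical stand-ins for the paper's low-rank tensor components, projecting a word differently depending on whether it acts as the head or the modifier of an arc:

```python
# Simplified sketch: role-specific low-rank projections (U for heads and
# V for modifiers are my stand-ins, not the paper's full tensor).
import numpy as np

rng = np.random.default_rng(3)
d, r = 6, 2                       # embedding dimension, rank (toy values)
emb = {w: rng.normal(size=d) for w in ["eats", "dog", "quickly"]}
U = rng.normal(size=(r, d))       # projection applied to the head word
V = rng.normal(size=(r, d))       # projection applied to the modifier word

def arc_score(head, mod):
    # low-rank arc score: inner product of the two role-specific projections
    return float((U @ emb[head]) @ (V @ emb[mod]))

# the same word pair scores differently once the roles swap, because the
# projection depends on the word's syntactic context (head vs. modifier)
print(arc_score("eats", "dog"), arc_score("dog", "eats"))
```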