Index of papers in Proc. ACL that mention
  • syntactic context
Jijkoun, Valentin and de Rijke, Maarten and Weerkamp, Wouter
Generating Topic-Specific Lexicons
Extract all syntactic contexts of clue words
Generating Topic-Specific Lexicons
3.1 Step 1: Extracting syntactic contexts
Generating Topic-Specific Lexicons
First, we identify syntactic contexts in which specific clue words can be used to express
Qualitative Analysis of Lexicons
Because our topic-specific lexicons consist of triples (clue word, syntactic context, target), they actually contain more words than topic-independent lexicons of the same size, but topic-specific entries are more selective, which makes the lexicon more focused.
Quantitative Evaluation of Lexicons
D: the number of syntactic contexts per clue
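The triple-based lexicon and the D statistic above can be sketched as follows (a toy illustration; the clue words, dependency-style contexts, and targets are invented, not the authors' data):

```python
from collections import defaultdict

# Toy topic-specific lexicon as (clue word, syntactic context, target) triples.
lexicon = {
    ("good", "amod>", "battery"),
    ("good", "amod>", "screen"),
    ("good", "<nsubj", "camera"),
    ("poor", "amod>", "battery"),
}

def contexts_per_clue(lexicon):
    """D: the average number of distinct syntactic contexts per clue word."""
    contexts = defaultdict(set)
    for clue, context, _target in lexicon:
        contexts[clue].add(context)
    return sum(len(c) for c in contexts.values()) / len(contexts)
```

Here "good" occurs in two distinct contexts and "poor" in one, so D = 1.5.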
syntactic context is mentioned in 12 sentences in this paper.
Topics mentioned in this paper:
Tang, Duyu and Wei, Furu and Yang, Nan and Zhou, Ming and Liu, Ting and Qin, Bing
Abstract
Most existing algorithms for learning continuous word representations typically only model the syntactic context of words but ignore the sentiment of text.
Abstract
This is problematic for sentiment analysis as they usually map words with similar syntactic context but opposite sentiment polarity, such as good and bad, to neighboring word vectors.
Introduction
The most serious problem is that traditional methods typically model the syntactic context of words but ignore the sentiment information of text.
Related Work
(2011) introduce the C&W model to learn word embedding based on the syntactic contexts of words.
Related Work
The C&W model learns word embedding by modeling syntactic contexts of words but ignoring sentiment information.
Related Work
By contrast, SSWEh and SSWET learn sentiment-specific word embedding by integrating the sentiment polarity of sentences but leaving out the syntactic contexts of words.
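The weighted combination described above can be sketched as an interpolation of two hinge losses, one over syntactic context and one over sentiment (a minimal sketch: the scores would come from the neural scorer, and the function name and default alpha are assumptions, not the authors' code):

```python
def sswe_loss(syn_true, syn_corrupt, sent_true, sent_corrupt, alpha=0.5):
    """Interpolate a C&W-style syntactic hinge loss with a sentiment hinge loss.

    alpha = 1 recovers a purely syntactic objective (as in the C&W model);
    alpha = 0 uses only sentiment, as in the SSWEh/SSWEr-style variants.
    """
    loss_syntactic = max(0.0, 1.0 - syn_true + syn_corrupt)
    loss_sentiment = max(0.0, 1.0 - sent_true + sent_corrupt)
    return alpha * loss_syntactic + (1.0 - alpha) * loss_sentiment
```

With alpha strictly between 0 and 1, a pair like "good"/"bad" that shares syntactic context but differs in polarity can no longer reach zero loss at identical vectors.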
syntactic context is mentioned in 10 sentences in this paper.
Topics mentioned in this paper:
Xiong, Deyi and Zhang, Min and Aw, Aiti and Li, Haizhou
Analysis
By allowing appropriate violations to translate non-syntactic phrases according to particular syntactic contexts, our SDB model inherits the strength of the phrase-based approach better than XP+ does.
Introduction
whether the current phrase can be translated as a unit or not within particular syntactic contexts (Fox, 2002), than that of constituent matching/violation.
Introduction
It is able to reward non-syntactic translations by assigning an adequate probability to them if these translations are appropriate to particular syntactic contexts on the source side, rather than always punish them.
The Syntax-Driven Bracketing Model 3.1 The Model
We consider this task as a binary-class classification problem: whether the current source phrase s is bracketable (b) within particular syntactic contexts (τ(s)).
The Syntax-Driven Bracketing Model 3.1 The Model
If two neighboring sub-phrases s1 and s2 are given, we can use more inner syntactic contexts to complete this binary classification task.
The Syntax-Driven Bracketing Model 3.1 The Model
new feature into the log-linear translation model: P_SDB(b|τ(s)). This feature is computed by the SDB model described in equation (3) or equation (4), which estimates a probability that a source span is to be translated as a unit within particular syntactic contexts.
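A minimal sketch of such a bracketing feature, with a logistic model standing in for the paper's classifier (the feature names and weights below are invented for illustration; the real model learns them from data):

```python
import math

# Invented syntactic-context features and weights.
WEIGHTS = {"span-root=VP": 0.8, "left-sibling=NP": 0.4, "crosses-constituent": -1.2}
BIAS = 0.1

def p_bracketable(features):
    """Estimate P_SDB(b = bracketable | syntactic context features tau(s))."""
    score = BIAS + sum(WEIGHTS.get(f, 0.0) for f in features)
    return 1.0 / (1.0 + math.exp(-score))
```

The probability, rather than a hard constituent-match test, is what lets the model reward a non-syntactic phrase when its context supports translating it as a unit.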
syntactic context is mentioned in 6 sentences in this paper.
Topics mentioned in this paper:
Hermann, Karl Moritz and Das, Dipanjan and Weston, Jason and Ganchev, Kuzman
Abstract
We present a novel technique for semantic frame identification using distributed representations of predicates and their syntactic context ; this technique leverages automatic syntactic parses and a generic set of word embeddings.
Abstract
Given labeled data annotated with frame-semantic parses, we learn a model that projects the set of word representations for the syntactic context around a predicate to a low dimensional representation.
Frame Identification with Embeddings
First, we extract the words in the syntactic context of runs; next, we concatenate their word embeddings as described in §2.2 to create an initial vector space representation.
Frame Identification with Embeddings
Formally, let x represent the actual sentence with a marked predicate, along with the associated syntactic parse tree; let our initial representation of the predicate context be g(x). Suppose that the word embeddings we start with are of dimension n. Then g is a function from a parsed sentence x to R^{nk}, where k is the number of possible syntactic context types.
Overview
We use word embeddings to represent the syntactic context of a particular predicate instance as a vector.
Overview
We could represent the syntactic context of runs as a vector with blocks for all the possible dependents warranted by a syntactic parser; for example, we could assume that positions 0 .
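The block layout sketched above, one n-dimensional slot per possible syntactic position giving a vector in R^{nk}, might look like this (the position names and embeddings are toy assumptions, not the paper's parser output):

```python
import numpy as np

def context_representation(context, embeddings, positions, n):
    """Concatenate context-word embeddings into fixed blocks, one per position.

    Positions with no dependent stay zero, so the output always lives in R^{nk}.
    """
    g = np.zeros(n * len(positions))
    for role, word in context.items():
        i = positions[role]
        g[i * n:(i + 1) * n] = embeddings[word]
    return g

# Toy example: the predicate "runs" with a subject and an adverbial modifier.
embeddings = {"he": np.array([1.0, 2.0]), "fast": np.array([3.0, 4.0])}
positions = {"nsubj": 0, "advmod": 1, "dobj": 2}   # k = 3 context types
g = context_representation({"nsubj": "he", "advmod": "fast"}, embeddings, positions, n=2)
```

The learned model then projects this sparse nk-dimensional vector down to the low-dimensional space in which frames are identified.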
syntactic context is mentioned in 6 sentences in this paper.
Topics mentioned in this paper:
Goldberg, Yoav and Tsarfaty, Reut
A Generative PCFG Model
Our use of an unweighted lattice reflects our belief that all the segmentations of the given input sentence are a-priori equally likely; the only reason to prefer one segmentation over another is due to the overall syntactic context, which is modeled via the PCFG derivations.
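Under an unweighted lattice the choice therefore reduces to picking the segmentation whose best PCFG derivation scores highest, roughly (the segmentations and probabilities below are invented toy values, not Treebank-grammar scores):

```python
# Toy best-derivation probabilities for two competing segmentations of one token.
derivation_prob = {
    ("bcl", "hcpr"): 0.0009,
    ("b", "cl", "hcpr"): 0.0031,
}

def best_segmentation(derivation_prob):
    # The lattice is unweighted, so no prior favors any segmentation:
    # only the PCFG derivation probability decides.
    return max(derivation_prob, key=derivation_prob.get)
```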
Discussion and Conclusion
The overall performance of our joint framework demonstrates that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperforms upper bounds proposed by previous joint disambiguation systems and achieves segmentation and parsing results on a par with state-of-the-art standalone applications.
Introduction
Tsarfaty (2006) argues that for Semitic languages determining the correct morphological segmentation is dependent on syntactic context and shows that increasing information sharing between the morphological and the syntactic components leads to improved performance on the joint task.
Model Preliminaries
We suggest that in unlexicalized PCFGs the syntactic context may be explicitly modeled in the derivation probabilities.
Results and Analysis
Yet we note that the better grammars without pruning outperform the poorer grammars using this technique, indicating that the syntactic context aids, to some extent, the disambiguation of unknown tokens.
syntactic context is mentioned in 5 sentences in this paper.
Topics mentioned in this paper:
Paperno, Denis and Pham, Nghia The and Baroni, Marco
Compositional distributional semantics
Another issue is that the same or similar items that occur in different syntactic contexts are assigned different semantic types with incomparable representations.
Compositional distributional semantics
Besides losing the comparability of the semantic contribution of a word across syntactic contexts, we also worsen the data sparseness issues.
The practical lexical function model
We may still want to represent word meanings in different syntactic contexts differently, but at the same time we need to incorporate a formal connection between those representations, e.g., between the transitive and the intransitive instantiations of the verb to eat.
The practical lexical function model
To determine the number and ordering of matrices representing the word in the current syntactic context, our plf implementation relies on the syntactic type assigned to the word in the categorial grammar parse of the sentence.
The practical lexical function model
Table 4: The verb to eat associated with different sets of matrices in different syntactic contexts.
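The idea behind Table 4, one set of argument matrices per syntactic context of the verb, can be sketched as follows (toy dimensions and an additive composition rule are assumed here; the plf model's actual training and composition are more involved):

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 3

# "eat" carries a vector plus one matrix per argument slot; the categorial type
# of the occurrence (intransitive vs. transitive) selects how many matrices apply.
eat_vector = rng.normal(size=dim)
eat_matrices = {
    "intransitive": [],                           # no argument matrices
    "transitive": [rng.normal(size=(dim, dim))],  # one matrix for the object
}

def compose(verb_vector, matrices, arguments):
    """Add each slot matrix applied to its argument vector to the verb vector."""
    out = verb_vector.copy()
    for M, arg in zip(matrices, arguments):
        out = out + M @ arg
    return out
```

Keeping a single underlying vector across types is what preserves the formal connection between the transitive and intransitive instantiations of the verb.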
syntactic context is mentioned in 5 sentences in this paper.
Topics mentioned in this paper:
Parikh, Ankur P. and Cohen, Shay B. and Xing, Eric P.
Abstract
Although x and x′ are not identical, it is likely that Σ_{x′}(2, 3) is similar to Σ_x(1, 2) because the determiner and the noun appear in similar syntactic context.
Abstract
Σ_{x′}(5, 7) also may be somewhat similar, but Σ_{x′}(2, 7) should not be very similar to Σ_x(1, 2) because the noun and the determiner appear in a different syntactic context.
Abstract
The observation that the covariance matrices depend on local syntactic context is the main driving force behind our solution.
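This driving observation, that empirical covariance matrices track local syntactic context, can be illustrated with toy data (the feature vectors and scales below are invented, not the paper's statistics):

```python
import numpy as np

rng = np.random.default_rng(0)
scales = np.array([1.0, 2.0, 3.0, 4.0])

# Feature samples for a determiner-noun pair; a sample from a similar syntactic
# context is a small perturbation, while swapping the roles (noun-determiner)
# reverses the feature layout and thus changes the covariance structure.
det_noun = rng.normal(size=(500, 4)) * scales
det_noun_similar = det_noun + 0.01 * rng.normal(size=(500, 4))
noun_det = det_noun[:, ::-1]

def cov_distance(X, Y):
    """Frobenius distance between empirical covariance matrices."""
    return np.linalg.norm(np.cov(X.T) - np.cov(Y.T))
```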
syntactic context is mentioned in 4 sentences in this paper.
Topics mentioned in this paper:
Lei, Tao and Xin, Yu and Zhang, Yuan and Barzilay, Regina and Jaakkola, Tommi
Introduction
• Our low dimensional embeddings are tailored to the syntactic context of words (head, modifier).
Results
More interestingly, we can consider the impact of syntactic context on the derived projections.
Results
The two bottom parts of the table demonstrate how the projections change depending on the syntactic context of the word.
syntactic context is mentioned in 3 sentences in this paper.
Topics mentioned in this paper: