Index of papers in Proc. ACL 2014 that mention
  • SVD
Bollegala, Danushka and Weir, David and Carroll, John
Distribution Prediction
To reduce the dimensionality of the feature space and create dense representations for words, we perform SVD on F. We use the left singular vectors corresponding to the k largest singular values to compute a rank-k approximation F̂ of F. We perform truncated SVD using SVDLIBC.
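As an illustration of this step, here is a minimal truncated-SVD sketch in Python, using scipy in place of SVDLIBC; the matrix F, its shape, and the value of k are placeholders rather than the paper's actual data.

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

# Placeholder sparse word-by-feature co-occurrence matrix F
# (the paper builds F from real corpus counts).
F = sparse_random(1000, 5000, density=0.01, format="csr", random_state=0)

k = 100  # number of singular vectors to keep (the paper uses k = 1000)

# Truncated SVD: singular vectors for the k largest singular values.
U, s, Vt = svds(F, k=k)

# Dense rank-k word representations from the left singular vectors.
word_vectors = U * s            # shape: (1000, k)

# Rank-k approximation of F (rarely materialized for large matrices).
F_k = (U * s) @ Vt
```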
Experiments and Results
The number of singular vectors k selected in SVD and the number of PLSR dimensions L are set to 1000 and 50, respectively, for the remainder of the experiments described in the paper.
Introduction
…latent feature spaces separately for the source and the target domains using Singular Value Decomposition (SVD).
Introduction
The SVD smoothing in the first step reduces both the data sparseness in distributional representations of individual words and the dimensionality of the feature space, thereby enabling us to efficiently and accurately learn a prediction model using PLSR in the second step.
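The following sketch shows the two-step idea in scikit-learn terms (truncated SVD for smoothing, then PLSR as the prediction model); the matrices, dimensionalities, and variable names are invented for illustration and are not the authors' actual pipeline.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)

# Placeholder distributional matrices for the same words in two domains
# (rows = words, columns = raw co-occurrence features).
X_source = rng.poisson(0.05, size=(2000, 3000)).astype(float)
X_target = rng.poisson(0.05, size=(2000, 3000)).astype(float)

# Step 1: SVD smoothing reduces sparseness and feature-space dimensionality.
Z_source = TruncatedSVD(n_components=200, random_state=0).fit_transform(X_source)
Z_target = TruncatedSVD(n_components=200, random_state=0).fit_transform(X_target)

# Step 2: PLSR with L latent components predicts target-domain vectors
# from source-domain vectors.
pls = PLSRegression(n_components=50)
pls.fit(Z_source, Z_target)

# Predicted target-domain representation for the first source word.
predicted = pls.predict(Z_source[:1])
```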
Experiments and Results
To evaluate the overall effect of the number of singular vectors k used in the SVD step, and the number of PLSR components L used in Algorithm 1, we conduct two experiments.
Experiments and Results
Figure 3: The effect of SVD dimensions (axis label: 2000 SVD dimensions).
Related Work
Linear predictors are then learnt to predict the occurrence of those pivots, and SVD is used to construct a lower dimensional representation in which a binary classifier is trained.
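A rough, self-contained sketch of that recipe (structural-correspondence-style pivot prediction followed by SVD), with invented data, pivot indices, and dimensions; it is meant only to make the steps concrete, not to reproduce the cited method exactly.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Placeholder binary feature matrix over unlabeled instances from both domains.
X = (rng.random((2000, 500)) < 0.05).astype(float)
pivot_idx = np.arange(20)                 # illustrative choice of pivot features
non_pivot = np.setdiff1d(np.arange(X.shape[1]), pivot_idx)

# Learn one linear predictor per pivot (does this pivot occur, given the
# non-pivot features?) and stack the learned weight vectors.
weight_vectors = []
for p in pivot_idx:
    y = (X[:, p] > 0).astype(int)
    clf = LogisticRegression(max_iter=200)
    clf.fit(X[:, non_pivot], y)
    weight_vectors.append(clf.coef_.ravel())
W = np.stack(weight_vectors, axis=1)      # (n_non_pivot_features, n_pivots)

# SVD of the stacked weights gives a lower-dimensional projection.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
theta = U[:, :10]                          # top 10 left singular vectors

# Train the final binary classifier on features augmented with the projection.
y_task = (rng.random(X.shape[0]) < 0.5).astype(int)   # placeholder labels
X_aug = np.hstack([X, X[:, non_pivot] @ theta])
LogisticRegression(max_iter=500).fit(X_aug, y_task)
```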
SVD is mentioned in 9 sentences in this paper.
Silberer, Carina and Lapata, Mirella
Experimental Setup
Specifically, we concatenate the textual and visual vectors and project them onto a lower dimensional latent space using SVD (Golub and Reinsch, 1970).
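A small illustrative sketch of this fusion step (concatenate per-concept textual and visual vectors, then project with SVD); the matrices and target dimensionality are placeholders, and scikit-learn's TruncatedSVD stands in for whatever SVD implementation was actually used.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD

rng = np.random.default_rng(0)

n_concepts = 500
textual = rng.random((n_concepts, 400))   # textual attribute vectors (placeholder)
visual = rng.random((n_concepts, 300))    # visual attribute vectors (placeholder)

# Concatenate the two modalities per concept and project the joint matrix
# onto a lower-dimensional latent space with SVD.
bimodal = np.hstack([textual, visual])    # shape: (500, 700)
latent = TruncatedSVD(n_components=100, random_state=0).fit_transform(bimodal)
```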
Experimental Setup
We furthermore report results obtained with Bruni et al.’s (2014) bimodal distributional model, which employs SVD to integrate co-occurrence-based textual representations with visual representations.
Experimental Setup
McRae       0.71  0.49  0.68   0.58  0.52  0.62
Attributes  0.58  0.61  0.68   0.46  0.56  0.58
SAE         0.65  0.60  0.70   0.52  0.60  0.64
SVD          —     —    0.67    —     —    0.57
kCCA         —     —    0.57    —     —    0.55
Bruni        —     —    0.52    —     —    0.46
RNN-640     0.41   —     —     0.34   —     —
Results
The automatically obtained textual and visual attribute vectors serve as input to SVD, kCCA, and our stacked autoencoder (SAE).
Results
McRae       0.52  0.31  0.42
Attributes  0.35  0.37  0.33
SAE         0.36  0.35  0.43
SVD          —     —    0.39
kCCA         —     —    0.37
Bruni        —     —    0.34
RNN-640     0.32   —     —
Results
We also observe that simply concatenating textual and visual attributes (Attributes, T+V) performs competitively with SVD and better than kCCA.
SVD is mentioned in 8 sentences in this paper.
Fyshe, Alona and Talukdar, Partha P. and Murphy, Brian and Mitchell, Tom M.
Data
SVD was applied to the document and dependency statistics and the top 1000 dimensions of each type were retained.
Experimental Results
The SVD matrix for the original corpus data has correlation 0.4279 with the behavioral data, also below the 95% confidence interval for all JNNSE models.
Experimental Results
[Figure legend entry: SVD (Text)]
Experimental Results
JNNSE(fMRI+Text) data performed on average 6% better than the best NNSE(Text), exceeding even the original SVD corpus representations while maintaining interpretability.
Introduction
Typically, VSMs are created by collecting word usage statistics from large amounts of text data and applying some dimensionality reduction technique like Singular Value Decomposition (SVD).
SVD is mentioned in 7 sentences in this paper.
Lazaridou, Angeliki and Bruni, Elia and Baroni, Marco
Experimental Setup
Singular Value Decomposition (SVD). SVD is the most widely used dimensionality reduction technique in distributional semantics (Turney and Pantel, 2010), and it has recently been exploited to combine visual and linguistic dimensions in the multimodal distributional semantic model of Bruni et al.
Experimental Setup
SVD smoothing is also a way to infer values of unseen dimensions in partially incomplete matrices, a technique that has been applied to the task of inferring word tags of unannotated images (Hare et al., 2008).
Experimental Setup
Assuming that the concept-representing rows of V_s and W_s are ordered in the same way, we apply the (k-truncated) SVD to the concatenated matrix [V_s W_s], such that [V_s W_s] ≈ U_k Σ_k Z_k^T is a k-rank approximation of the original matrix. The projection function is then: […]
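To make the reconstructed formula above concrete, here is a small numpy sketch; since the snippet cuts off before the projection function itself, the zero-padding-plus-smoothing step at the end is an assumption about one plausible way such a truncated SVD can fill in the visual block for an unseen word, not the paper's stated mapping.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder matrices whose rows are the same training concepts in each space.
V_s = rng.random((300, 200))   # linguistic vectors for seen concepts
W_s = rng.random((300, 100))   # visual vectors for the same concepts

# k-truncated SVD of the concatenated matrix [V_s W_s] ~ U_k Sigma_k Z_k^T.
k = 50
C = np.hstack([V_s, W_s])
U, s, Zt = np.linalg.svd(C, full_matrices=False)
U_k, s_k, Zt_k = U[:, :k], s[:k], Zt[:k, :]
C_k = (U_k * s_k) @ Zt_k       # k-rank approximation of the original matrix

# Assumed projection for an unseen word: pad its linguistic vector with zeros
# in the visual block and smooth it through the truncated right singular
# vectors, so values for the missing visual dimensions are inferred.
v_new = rng.random(200)
padded = np.concatenate([v_new, np.zeros(100)])
smoothed = padded @ Zt_k.T @ Zt_k
predicted_visual = smoothed[200:]
```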
Results
Model    k=1   k=2   k=3   k=5   k=10  k=20
Chance    1.1   2.2   3.3   5.5  11.0  22.0
SVD       1.9   5.0   8.1  14.5  29.0  48.6
CCA       3.0   6.9  10.7  17.9  31.7  51.7
lin       2.4   6.4  10.5  18.7  33.0  55.0
NN        3.9   6.6  10.6  21.9  37.9  58.2
Results
For the SVD model, we set the number of dimensions to 300, a common choice in distributional semantics, coherent with the settings we used for the visual and linguistic spaces.
Results
Surprisingly, the very simple lin method outperforms both CCA and SVD.
SVD is mentioned in 7 sentences in this paper.
Baroni, Marco and Dinu, Georgiana and Kruszewski, Germán
Distributional semantic models
However, it is worth pointing out that the evaluated parameter subset encompasses settings (narrow context window, positive PMI, SVD reduction) that have been…
Results
For the count models, PMI is clearly the better weighting scheme, and SVD outperforms NMF as a dimensionality reduction technique.
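An illustrative sketch of the count-model recipe referred to here (positive PMI weighting of a word-by-context count matrix, then SVD reduction); the counts, matrix shape, and number of dimensions are placeholders rather than the paper's configuration.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import svds

rng = np.random.default_rng(0)

# Placeholder word-by-context co-occurrence counts (in practice collected
# from a corpus with a chosen context window).
counts = rng.poisson(0.1, size=(1000, 2000)).astype(float)

# Positive PMI: log(observed / expected) co-occurrence probability, floored at 0.
total = counts.sum()
p_word = counts.sum(axis=1, keepdims=True) / total
p_ctx = counts.sum(axis=0, keepdims=True) / total
p_joint = counts / total
with np.errstate(divide="ignore", invalid="ignore"):
    pmi = np.log(p_joint / (p_word * p_ctx))
ppmi = np.where(np.isfinite(pmi) & (pmi > 0), pmi, 0.0)

# SVD reduction of the PPMI matrix to dense word vectors.
U, s, Vt = svds(csr_matrix(ppmi), k=300)
word_vectors = U * s
```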
Results
PMI  SVD  500  42
PMI  SVD  400  46
PMI  SVD  500  47
PMI  SVD  300  50
PMI  SVD  400  51
PMI  NMF  300  52
PMI  NMF  400  53
PMI  SVD  300  53
SVD is mentioned in 3 sentences in this paper.
Fan, Miao and Zhao, Deli and Zhou, Qiang and Liu, Zhiyuan and Zheng, Thomas Fang and Chang, Edward Y.
Algorithm
We first perform the singular value decomposition (SVD) (Golub and Kahan, 1965) of A, and then cut down each singular value.
Algorithm
Shrinkage step: UΣV^T = SVD(A), Z = U max(Σ - τμI, 0) V^T. end while; end foreach
Algorithm
Shrinkage step: UΣV^T = SVD(A), Z = U max(Σ - τμI, 0) V^T.
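A minimal numpy sketch of this shrinkage step (soft-thresholding the singular values of A); the single threshold argument below stands in for the product of step size and regularization weight in the reconstructed formula above, and the matrix A is a placeholder.

```python
import numpy as np

def svd_shrinkage(A, threshold):
    """Return Z = U max(Sigma - threshold*I, 0) V^T, i.e. soft-threshold
    the singular values of A and rebuild the matrix."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_shrunk = np.maximum(s - threshold, 0.0)   # cut down each singular value
    return (U * s_shrunk) @ Vt

# Illustrative use on a placeholder matrix A.
rng = np.random.default_rng(0)
A = rng.random((50, 80))
Z = svd_shrinkage(A, threshold=1.0)
```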
SVD is mentioned in 3 sentences in this paper.