Index of papers in PLOS Comp. Biol. that mention
  • principal components
Ickwon Choi, Amy W. Chung, Todd J. Suscovich, Supachai Rerks-Ngarm, Punnee Pitisuttithum, Sorachai Nitayaphan, Jaranit Kaewkungwal, Robert J. O'Connell, Donald Francis, Merlin L. Robb, Nelson L. Michael, Jerome H. Kim, Galit Alter, Margaret E. Ackerman, Chris Bailey-Kellogg
Discussion
To account for redundancy, we have used representative, common approaches including feature selection within the learning algorithm (via regularization), feature filtering (via feature clustering), and feature combination (via principal components analysis).
Supervised learning: Classification
Thus we also trained classifiers using the principal components as features.
Supervised learning: Classification
Each method was trained separately for each function with each of three different feature sets: the complete preprocessed set, the filtered set from the feature:feature clustering, and the set of principal components.
Supporting Information
Principal component analysis eigenvalue plot.
Unsupervised learning
As an alternative method to account for the possible redundancy among antibody features, a principal component analysis (PCA) was also performed.
Unsupervised learning
PCA yields a set of principal components (PCs) that represent the main patterns of variability of the antibody features across subjects.
Unsupervised learning
In contrast to the filtered features, the principal components are composites, and by inspecting their composition, we can see the patterns of concerted variation of the underlying antibody features.
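The composite nature of principal components can be illustrated with a small NumPy sketch (synthetic data standing in for the paper's antibody-feature matrix; all names and values are illustrative, not from the study): the largest-magnitude loadings of the leading PC pick out the block of features that vary in concert.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for an antibody-feature matrix: 100 subjects x 6 features.
# Features 0-2 co-vary strongly (one underlying pattern); 3-5 co-vary weakly.
latent = rng.normal(size=(100, 2))
mixing = np.array([[2.0, 2.0, 2.0, 0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]])
X = latent @ mixing + 0.05 * rng.normal(size=(100, 6))

# PCA via eigendecomposition of the covariance of the centered data.
Xc = X - X.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
order = np.argsort(eigvals)[::-1]                  # sort descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

pcs = Xc @ eigvecs          # subject scores on each principal component
loadings = eigvecs          # each column: a composite of the original features

# The largest-magnitude loadings of PC1 identify the concerted block (0, 1, 2).
top = np.argsort(np.abs(loadings[:, 0]))[::-1][:3]
```

Inspecting `loadings` this way is what lets composite features be interpreted in terms of the underlying measurements.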
principal components is mentioned in 16 sentences in this paper.
Naoki Hiratani, Tomoki Fukai
Lateral inhibition enhances minor source detection by STDP
In Eq (2), if lateral inhibition is negligible (i.e., g2X/g1X = 0), all output neurons acquire the principal component of the response probability matrix Q, and the other information is neglected [7,40,41].
Neural Bayesian ICA and blind source separation
Given the response probability matrix Q and correlation matrix C, the weight dynamics approximately follow ΔWX ∝ g1X WX C, so we may expect the synaptic weight vectors to converge to the eigenvectors corresponding to the principal components; however, this was not the case in our simulations, even when we took the non-negativity of synaptic weights into account (see Fig 7B, where we renormalized the principal vectors to the range between 0 and 1).
Neural Bayesian ICA and blind source separation
This result implies that the network can extract independent sources, rather than principal components, from multiple intermixed inputs.
STDP and Bayesian ICA
First, output neurons were able to detect hidden external sources, without capturing principal components (Fig 7B).
STDP and Bayesian ICA
To perform a principal components analysis using neural units, the synaptic weight change needs to follow an update of the form ΔW ∝ y x^T − LT[y y^T] W, where LT[·] denotes the lower-triangular part of a matrix [675].
STDP and Bayesian ICA
This LT transformation protects the principal components from the higher-order components introduced by the lateral modification; in our model, however, because all output neurons receive the same number of inhibitory inputs (Eq (2)), all neurons are decorrelated from one another and develop into independent components.
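The lower-triangular update described here matches the generalized Hebbian (Sanger) rule for neural PCA. As a minimal sketch on synthetic data (all names illustrative), the expected update ΔW = W C − LT[W C W^T] W vanishes exactly when the rows of W are the leading eigenvectors of the input covariance C:

```python
import numpy as np

rng = np.random.default_rng(1)

# A random 5x5 covariance matrix C for the inputs.
A = rng.normal(size=(5, 5))
C = A @ A.T

# Stack the top-3 eigenvectors of C as rows of the weight matrix W.
eigvals, eigvecs = np.linalg.eigh(C)          # ascending eigenvalues
W = eigvecs[:, ::-1][:, :3].T

def expected_sanger_update(W, C):
    # Expected GHA update: dW = W C - LT[W C W^T] W,
    # with LT[.] keeping the lower triangle (diagonal included).
    M = W @ C @ W.T
    return W @ C - np.tril(M) @ W

dW = expected_sanger_update(W, C)             # ~0 at the PCA fixed point
```

The lower-triangular masking is what makes each successive output neuron learn the next principal component rather than all neurons collapsing onto the first one.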
principal components is mentioned in 6 sentences in this paper.
Christiaan A. de Leeuw, Joris M. Mooij, Tom Heskes, Danielle Posthuma
Analysis of CD data—gene analysis
First, the analyses were repeated with 10 principal components computed from the whole data set as covariates to correct for possible stratification.
Gene analysis
The gene analysis in MAGMA is based on a multiple linear principal components regression [18] model, using an F-test to compute the gene p-value.
Gene analysis
This model first projects the SNP matrix for a gene onto its principal components (PCs), pruning away PCs with very small eigenvalues, and then uses those PCs as predictors for the phenotype in the linear regression model.
Gene-set analysis
The gene density is defined as the ratio of effective gene size to the total number of SNPs in the gene, with the effective gene size in turn defined as the number of principal components that remain after pruning.
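A hedged sketch of this kind of principal components regression on synthetic data (variable names and the pruning threshold are illustrative, not MAGMA's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 200, 50                                   # subjects, SNPs in the gene

G = rng.binomial(2, 0.3, size=(n, m)).astype(float)  # toy 0/1/2 genotype matrix
y = 0.5 * G[:, 0] + rng.normal(size=n)               # phenotype with a real signal

# Project the centered SNP matrix onto its principal components and prune
# PCs with very small eigenvalues (relative threshold chosen arbitrarily here).
Gc = G - G.mean(axis=0)
_, s, Vt = np.linalg.svd(Gc, full_matrices=False)
keep = (s ** 2) > 1e-6 * s[0] ** 2
PC = Gc @ Vt[keep].T              # kept PC scores, used as regression predictors
k = PC.shape[1]                   # the "effective gene size"

# F-test of the joint linear fit of the phenotype on the kept PCs.
Xd = np.column_stack([np.ones(n), PC])
beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
resid = y - Xd @ beta
rss = resid @ resid
tss = ((y - y.mean()) ** 2).sum()
F = ((tss - rss) / k) / (rss / (n - k - 1))
# The gene p-value would come from the F(k, n - k - 1) tail,
# e.g. scipy.stats.f.sf(F, k, n - k - 1).
```

Regressing on PCs rather than raw SNPs sidesteps the collinearity of SNPs in linkage disequilibrium, and `k` is exactly the "effective gene size" used in the gene-density definition above.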
principal components is mentioned in 4 sentences in this paper.
Simon Sponberg, Thomas L. Daniel, Adrienne L. Fairhall
Analytical approach & synergies
The second synergy model is entirely data-driven: we extract the first principal component (tPCA—the PCA synergy model) of the two motor variables {tL, tR}.
Dimensionality reduction & feature extraction
We first applied standard principal component analysis (PCA) by computing the eigendecomposition of the covariance matrix of M alone for both the phase and spike-triggered ensembles.
PLS
The scores represent the amount of the leading feature represented in each wingstroke. The motor output "loadings" vector is the first motor feature itself and is analogous to a principal component or eigenvector.
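Extracting the first principal component of two motor variables, as in the tPCA synergy model, can be sketched as follows (synthetic {tL, tR} standing in for the paper's data; names illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy left/right motor variables that largely co-vary (a shared synergy).
shared = rng.normal(size=500)
tL = shared + 0.1 * rng.normal(size=500)
tR = shared + 0.1 * rng.normal(size=500)

T = np.column_stack([tL, tR])
Tc = T - T.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(Tc, rowvar=False))
pc1 = eigvecs[:, np.argmax(eigvals)]   # first principal component (unit 2-vector)
scores = Tc @ pc1                      # per-wingstroke score on the synergy axis
```

Here `pc1` plays the role of the loadings vector and `scores` the per-wingstroke scores described above.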
principal components is mentioned in 3 sentences in this paper.
Jaldert O. Rombouts, Sander M. Bohte, Pieter R. Roelfsema
Saccade/antisaccade task
To gain a deeper understanding of the representation in the association layer that supports the nonlinear mapping from the sensory units to the Q-value units, we performed a principal component analysis (PCA) on the activations of the association units.
Saccade/antisaccade task
2E shows the projection of the activation vectors onto the first two principal components for an example network.
Saccade/antisaccade task
The color of the fixation point and the cue location provide information about the correct action and lead to a ‘split’ in the 2D principal component (PC) space.
principal components is mentioned in 3 sentences in this paper.