Discussion | To account for redundancy, we have used representative, common approaches including feature selection within the learning algorithm (via regularization), feature filtering (via feature clustering), and feature combination (via principal components analysis).
Supervised learning: Classification | Thus we also trained classifiers using the principal components as features. |
Supervised learning: Classification | Each method was trained separately for each function with each of three different feature sets: the complete preprocessed set, the filtered set from the feature-feature clustering, and the set of principal components.
Supporting Information | Principal component analysis eigenvalue plot. |
Unsupervised learning | As an alternative method to account for the possible redundancy among antibody features, a principal component analysis (PCA) was also performed. |
Unsupervised learning | PCA yields a set of principal components (PCs) that represent the main patterns of variability of the antibody features across subjects. |
Unsupervised learning | In contrast to the filtered features, the principal components are composites, and by inspecting their composition, we can see the patterns of concerted variation of the underlying antibody features. |
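The composite nature of the principal components described above can be sketched as follows. This is a generic PCA-by-eigendecomposition illustration, not the study's analysis code; the function name and toy data are hypothetical. Inspecting the loadings of PC1 shows which of the underlying features vary in concert.

```python
import numpy as np

def pca_loadings(X, n_components=2):
    """PCA via eigendecomposition of the feature covariance matrix.

    Returns (loadings, scores): each column of `loadings` is a unit-norm
    principal axis; inspecting its entries shows which original features
    vary together in that component.
    """
    Xc = X - X.mean(axis=0)               # center each feature
    C = np.cov(Xc, rowvar=False)          # feature-by-feature covariance
    eigvals, eigvecs = np.linalg.eigh(C)  # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]     # re-sort descending
    loadings = eigvecs[:, order[:n_components]]
    scores = Xc @ loadings                # subject-by-component scores
    return loadings, scores

# Toy data: three features sharing a common source plus one independent
# feature. PC1's loadings concentrate on the correlated trio, exposing
# their concerted variation; the independent feature loads near zero.
rng = np.random.default_rng(0)
z = rng.normal(size=200)
X = np.column_stack([z,
                     z + 0.1 * rng.normal(size=200),
                     z + 0.1 * rng.normal(size=200),
                     rng.normal(size=200)])
L, S = pca_loadings(X)
```

Here the loadings matrix `L` plays the role of the component "composition": large entries in a column mark the antibody features that move together in that component.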
Lateral inhibition enhances minor source detection by STDP | In Eq (2), if lateral inhibition is negligible (i.e., g2X/g1X = 0), all output neurons acquire the principal component of the response probability matrix Q, and the other information is neglected [7,40,41]. |
Neural Bayesian ICA and blind source separation | The response probability matrix Q and the correlation matrix C govern the learning dynamics; because the synaptic weight dynamics approximately follows $\dot{W}^{X} \propto g_{1}^{X} W^{X} C$, we may expect the synaptic weight vectors to converge to the eigenvectors of C, i.e., the principal components; however, this was not the case in our simulations, even when we took into account the non-negativity of synaptic weights (see Fig 7B, where we renormalized the principal vectors to the range between 0 and 1).
Neural Bayesian ICA and blind source separation | This result implies that the network can extract independent sources, rather than principal components, from multiple intermixed inputs.
STDP and Bayesian ICA | First, output neurons were able to detect hidden external sources, without capturing principal components (Fig 7B). |
STDP and Bayesian ICA | To perform a principal components analysis using neural units, the synaptic weight change needs to follow $\Delta W \propto \mathbf{y}\mathbf{x}^{T} - \mathrm{LT}\!\left[\mathbf{y}\mathbf{y}^{T}\right] W$, where LT[] means the operator that keeps only the lower triangle of a matrix [@675].
STDP and Bayesian ICA | This LT transformation protects the leading principal components from contamination by higher-order components arising through the lateral modification; in our model, however, because all output neurons receive the same number of inhibitory inputs (Eq (2)), all neurons are decorrelated from one another and develop into independent components.
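The lower-triangle PCA learning rule referenced above is Sanger's generalized Hebbian algorithm; the sketch below is a generic illustration of that rule, not the paper's simulation code. The LT[] operator makes neuron i unlearn only the components already captured by neurons 1..i, so successive rows of W converge to successive principal components.

```python
import numpy as np

def sanger_step(W, x, eta=0.02):
    """One step of Sanger's rule (generalized Hebbian algorithm).

    W: (n_out, n_in) weight matrix, x: input vector.
    LT[y y^T] zeroes the upper triangle, so each output row is
    decorrelated only from the rows above it, yielding ordered PCs.
    """
    y = W @ x
    lt = np.tril(np.outer(y, y))          # LT[y y^T]
    return W + eta * (np.outer(y, x) - lt @ W)

# Toy inputs with a shared source: leading principal axis is (1,1)/sqrt(2).
rng = np.random.default_rng(1)
W = rng.normal(scale=0.1, size=(2, 2))
for _ in range(5000):
    z = rng.normal()
    x = np.array([z + 0.3 * rng.normal(), z + 0.3 * rng.normal()])
    W = sanger_step(W, x)
```

After training, the first row of `W` aligns with the leading eigenvector of the input correlation matrix, which is the behavior the lateral decorrelation in the model above avoids.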
Analysis of CD data—gene analysis | First, the analyses were repeated with 10 principal components computed from the whole data set as covariates to correct for possible stratification. |
Gene analysis | The gene analysis in MAGMA is based on a multiple linear principal components regression [18] model, using an F-test to compute the gene p-value. |
Gene analysis | This model first projects the SNP matrix for a gene onto its principal components (PC), pruning away PCs with very small eigenvalues, and then uses those PCs as predictors for the phenotype in the linear regression model.
Gene-set analysis | The gene density is defined as the ratio of effective gene size to the total number of SNPs in the gene, with the effective gene size in turn defined as the number of principal components that remain after pruning.
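As a concrete illustration of the effective gene size and gene density just defined, the sketch below computes both from a toy SNP matrix. The eigenvalue pruning threshold is an assumed illustrative value, not MAGMA's actual criterion, and the function name is hypothetical.

```python
import numpy as np

def gene_density(snp_matrix, eig_threshold=1e-3):
    """Effective gene size = number of PCs of the gene's SNP matrix that
    survive eigenvalue pruning; gene density = effective size / n_SNPs.
    The relative threshold on eigenvalues is an illustrative choice.
    """
    X = snp_matrix - snp_matrix.mean(axis=0)
    eigvals = np.linalg.eigvalsh(np.cov(X, rowvar=False))[::-1]
    kept = int(np.sum(eigvals > eig_threshold * eigvals[0]))
    return kept, kept / snp_matrix.shape[1]

# Gene with 5 SNPs, two of which duplicate others: only 3 independent
# dimensions survive pruning, so the density is 3/5.
rng = np.random.default_rng(2)
G = rng.integers(0, 3, size=(100, 3)).astype(float)  # 3 independent SNPs
snps = np.column_stack([G, G[:, 0], G[:, 1]])        # 2 redundant copies
k, d = gene_density(snps)
```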
Analytical approach & synergies | The second synergy model is entirely data-driven: we extract the first principal component (tPCA—the PCA synergy model) of the two motor variables {tL, tR}. |
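A first-PC synergy of this kind can be extracted with a few lines of linear algebra; the sketch below is a generic first-principal-component extraction under the variable names {tL, tR} from the text, not the authors' analysis code, and the function name is hypothetical.

```python
import numpy as np

def first_pc_synergy(tL, tR):
    """First principal component of the two motor variables {tL, tR}.

    Returns (axis, scores): `axis` is the unit vector in (tL, tR) space
    along which the strokes co-vary most; `scores` is each stroke's
    projection onto that synergy axis.
    """
    X = np.column_stack([tL - tL.mean(), tR - tR.mean()])
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    axis = Vt[0]          # leading right-singular vector = first PC axis
    scores = X @ axis
    return axis, scores

# Strongly coupled left/right timings: the synergy axis is close to
# the diagonal, with nearly equal weight on tL and tR.
rng = np.random.default_rng(3)
tL = rng.normal(size=300)
tR = tL + 0.1 * rng.normal(size=300)
axis, scores = first_pc_synergy(tL, tR)
```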
Dimensionality reduction & feature extraction | We first applied standard principal component analysis (PCA) by computing the eigendecomposition of the covariance matrix of M alone for both the phase and spike-triggered ensembles.
PLS | The scores represent the amount of the leading feature present in each wingstroke. The motor output “loadings” vector (Eq (1)) is the first motor feature itself and is analogous to a principal component or eigenvector.