Index of papers in April 2015 that mention:
  • feature selection
Ickwon Choi, Amy W. Chung, Todd J. Suscovich, Supachai Rerks-Ngarm, Punnee Pitisuttithum, Sorachai Nitayaphan, Jaranit Kaewkungwal, Robert J. O'Connell, Donald Francis, Merlin L. Robb, Nelson L. Michael, Jerome H. Kim, Galit Alter, Margaret E. Ackerman, Chris Bailey-Kellogg
Discussion
To account for redundancy, we have used representative, common approaches including feature selection within the learning algorithm (via regularization), feature filtering (via feature clustering), and feature combination (via principal components analysis).
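As a concrete illustration of the first strategy, here is a minimal sketch of embedded feature selection via L1 regularization; the data are random placeholders, not the paper's measurements.

    # Embedded feature selection: an L1 penalty drives redundant
    # coefficients to exactly zero inside the learning algorithm.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(80, 20))    # placeholder: 4 subclasses x 5 antigens
    y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=80) > 0).astype(int)

    clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
    selected = np.flatnonzero(clf.coef_[0])   # indices of retained features
    print("features retained:", selected)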
Supervised learning: Classification
Furthermore, in order to assess the effect of reducing redundancy and focusing on the most interpretable feature contributions, three different sets of input features were considered: the complete set (20 features: 4 subclasses × 5 antigens), the filtered set with one feature selected from each cluster based on correlation with function (6 features), and the PC features (7 leading PCs), as illustrated in Fig 2.
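A hedged sketch of constructing the third feature set: projecting a placeholder 20-feature matrix onto its 7 leading principal components.

    # Feature combination via PCA: 20 features -> 7 PC features.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X = rng.normal(size=(80, 20))               # placeholder feature matrix
    X_std = StandardScaler().fit_transform(X)   # PCA is scale-sensitive
    X_pc = PCA(n_components=7).fit_transform(X_std)
    print(X_pc.shape)                           # (80, 7)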
Supervised learning: Classification
Though the goal of this study was not to comprehensively and rigorously assess feature selection methods, which would require further subsampling the data, we did investigate the sensitivity of the cluster-based filtering to our use of the features within each cluster that had the highest PCC.
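The cluster-based filtering could look roughly like the sketch below, assuming hierarchical clustering of features and a highest-|PCC| representative per cluster; the cluster count (6) follows the text, everything else is placeholder.

    # Feature filtering: cluster the features, then keep the one feature
    # per cluster with the highest Pearson correlation against function f.
    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.stats import pearsonr

    rng = np.random.default_rng(1)
    X = rng.normal(size=(80, 20))   # placeholder feature matrix
    f = rng.normal(size=80)         # placeholder function readout

    Z = linkage(X.T, method="average", metric="correlation")
    labels = fcluster(Z, t=6, criterion="maxclust")

    keep = []
    for c in np.unique(labels):
        members = np.flatnonzero(labels == c)
        pcc = [abs(pearsonr(X[:, j], f)[0]) for j in members]
        keep.append(members[int(np.argmax(pcc))])  # best-correlated representative
    print("filtered feature set:", sorted(keep))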
Supervised learning: Classification
The PC features selected for the cytokines are more consistent with the other feature sets, with PC2 (IgG2/4 vs. 1/3) modulated by PC6 (V1V2), along with an IgG4.V1V2 down-selection via PC7.
Supervised learning: Regression
While the disappointing performance of the more sophisticated methods could potentially be improved by custom feature selection methods or parameter tuning, our goal here is not to provide such a benchmark but rather to establish the general scheme of predictive modeling of antibody feature:function relationships.
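A minimal sketch of that general scheme, comparing a simple regularized regressor against a more flexible one under identical cross-validation; the models and data here are illustrative, not the paper's benchmark.

    # Compare two regressors under the same 5-fold cross-validation.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)
    X = rng.normal(size=(80, 20))
    y = X[:, 0] - X[:, 5] + rng.normal(scale=0.5, size=80)

    for model in (Ridge(alpha=1.0),
                  RandomForestRegressor(n_estimators=200, random_state=0)):
        r2 = cross_val_score(model, X, y, cv=5, scoring="r2")
        print(type(model).__name__, round(r2.mean(), 2))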
feature selection is mentioned in 5 sentences in this paper.
Naoki Hiratani, Tomoki Fukai
Excitatory and inhibitory STDP cooperatively shape structured lateral connections
From a linear analysis, we can expect that when gY1 is positive, E-to-I connections tend to be feature selective (see Eq (35) in Methods).
Excitatory and inhibitory STDP cooperatively shape structured lateral connections
We can evaluate the feature selectivity of inhibitory neurons with a measure defined over QYA and QYB, the sets of excitatory neurons responding preferentially to sources A and B, respectively.
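The selectivity measure itself did not survive extraction; the sketch below is a hypothetical stand-in built from the same ingredients, scoring an inhibitory neuron by the normalized difference of its summed E-to-I weights from the two preference sets.

    # Hypothetical selectivity index over the sets Q_A and Q_B.
    import numpy as np

    def selectivity(w, Q_A, Q_B):
        """w: E-to-I weight vector onto one inhibitory neuron."""
        a, b = w[Q_A].sum(), w[Q_B].sum()
        return (a - b) / (a + b)   # +1: pure A preference; -1: pure B

    rng = np.random.default_rng(3)
    w = rng.random(100)
    Q_A, Q_B = np.arange(0, 50), np.arange(50, 100)
    print(selectivity(w, Q_A, Q_B))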
Excitatory and inhibitory STDP cooperatively shape structured lateral connections
Indeed, when the LTD time window is narrow, the analytically calculated gY1 tends to take negative values (the green line in Fig 6A), and the E-to-I connections organized in the simulation are not feature selective (the blue points in Fig 6A).
In this approximation, because the corresponding eigenvector develops as exp[gY1 t], when gY1 is positive the E-to-I connections are more likely to be structured in a way that the inhibitory neurons become feature selective.
STDP in E-to-I and I-to-E connections
In our model, although inhibitory neurons do not receive direct projections from the input sources, as excitatory neurons learn a specific input source (Fig 5D, left panel), inhibitory neurons acquire feature selectivity through Hebbian STDP at the synaptic connections from those excitatory neurons (Fig 5D, middle panel).
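A minimal pair-based Hebbian STDP sketch (the standard exponential-window rule, not the paper's full model) showing the kind of update that drives this weight structuring; the amplitudes and time constants are assumed values.

    # Pair-based STDP: potentiate pre-before-post, depress post-before-pre.
    import numpy as np

    A_plus, A_minus = 0.01, 0.012     # LTP/LTD amplitudes (assumed)
    tau_plus, tau_minus = 20.0, 20.0  # window time constants, ms

    def stdp_dw(t_pre, t_post):
        """Weight change for a single pre/post spike pair."""
        dt = t_post - t_pre
        if dt >= 0:                               # pre before post: LTP
            return A_plus * np.exp(-dt / tau_plus)
        return -A_minus * np.exp(dt / tau_minus)  # post before pre: LTD

    w = 0.5
    for t_pre, t_post in [(10.0, 15.0), (40.0, 35.0)]:
        w = np.clip(w + stdp_dw(t_pre, t_post), 0.0, 1.0)
    print(w)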
feature selection is mentioned in 5 sentences in this paper.
Mei Zhan, Matthew M. Crane, Eugeni V. Entchev, Antonio Caballero, Diana Andrea Fernandes de Abreu, QueeLim Ch’ng, Hang Lu
Bright-Field Head Identification
Second, in the feature selection step, distinct mathematical descriptors that may help to describe and distinguish the structure of interest are calculated for each layer of classification.
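A hedged sketch of that descriptor-computation step; the descriptor names below are illustrative, not the paper's actual feature list.

    # Compute simple intensity/shape descriptors for one candidate region.
    import numpy as np

    def region_descriptors(patch):
        """patch: 2-D array of pixel intensities for one candidate region."""
        gy, gx = np.gradient(patch.astype(float))
        return {
            "mean_intensity": patch.mean(),
            "intensity_std": patch.std(),
            "edge_strength": np.hypot(gx, gy).mean(),
            "aspect_ratio": patch.shape[1] / patch.shape[0],
        }

    patch = np.random.default_rng(4).random((32, 48))
    print(region_descriptors(patch))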
Bright-Field Head Identification
However, in addition to informative feature selection and the curation of a representative training set, the performance of SVM classification models is subject to several parameters associated with the model itself and its kernel function [34, 48].
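A minimal sketch of tuning those SVM parameters (the penalty C and the RBF kernel width gamma) by cross-validated grid search, over placeholder data.

    # Grid search over the SVM's C and gamma with 5-fold cross-validation.
    import numpy as np
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    rng = np.random.default_rng(5)
    X = rng.normal(size=(120, 10))
    y = (X[:, 0] * X[:, 1] > 0).astype(int)

    grid = GridSearchCV(
        SVC(kernel="rbf"),
        {"C": [0.1, 1, 10, 100], "gamma": ["scale", 0.01, 0.1, 1.0]},
        cv=5,
    )
    grid.fit(X, y)
    print(grid.best_params_, round(grid.best_score_, 2))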
Discussion
Finally, while use of our framework will require feature selection and training for each particular application, its modular architecture permits aspects of the specific tools we have developed here to be reused.
feature selection is mentioned in 3 sentences in this paper.