Index of papers in March 2015 that mention
  • feature selection
Davide Albanese, Carlotta De Filippo, Duccio Cavalieri, Claudio Donati
Introduction
Comparison of the performance of the method with popular feature selection and classification algorithms shows that our strategy is effective in identifying microbial clades associated with the different sample groups, providing a novel analysis method for targeted metagenomic datasets.
Predictive classification pipeline
We compared the predictive performance of PhyloRelief with the Random Forest classifier (PhyloRelief + RF) to LEfSe + RF, MetaPhyl (without feature selection), and Random Forest used as both classifier and feature selection method (RF + RF).
Predictive classification pipeline
To avoid overfitting and selection bias, the feature selection procedure was included in the cross-validation loop [40,41].
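As an illustration of this principle only (a minimal scikit-learn sketch, not the authors' PhyloRelief pipeline), the selection step can be placed inside a pipeline so that it is refitted on every training fold; the data matrix, labels and parameters below are placeholders:

# Sketch: feature selection kept inside the cross-validation loop so that the
# held-out fold never influences which features are chosen.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((60, 200))            # hypothetical abundance matrix (samples x features)
y = rng.integers(0, 2, size=60)      # hypothetical binary group labels

pipeline = Pipeline([
    ("select", SelectKBest(f_classif, k=20)),          # re-selected on each training fold
    ("classify", RandomForestClassifier(n_estimators=500, random_state=0)),
])
scores = cross_val_score(pipeline, X, y, cv=5)          # the whole pipeline is refitted per fold
print(scores.mean())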
Predictive classification pipeline
In the case of LEfSe + RF, LEfSe was treated as a feature selection method, using the common p-value threshold of 0.05.
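To illustrate only the general idea of such a p-value filter (this sketch is not LEfSe, which combines rank-based tests with an LDA effect-size step), each feature can be scored with a rank test and kept when its p-value falls below the threshold:

# Sketch of a p-value-threshold filter; X is a samples x features array, y the group labels.
import numpy as np
from scipy.stats import kruskal

def pvalue_filter(X, y, alpha=0.05):
    # Return indices of features whose Kruskal-Wallis p-value across groups is below alpha.
    keep = []
    for j in range(X.shape[1]):
        groups = [X[y == label, j] for label in np.unique(y)]
        _, p = kruskal(*groups)
        if p < alpha:
            keep.append(j)
    return keep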
Predictivity of the ranked features in supervised classification problems
The Random Forest (RF) classifier was recently proven to be the most effective in this class of problems [26,27], both for feature selection and classification.
Predictivity of the ranked features in supervised classification problems
We compared the performance of PhyloRelief coupled with the RF classifier to LEfSe [30], an algorithm that uses statistical tests for biomarker discovery, to MetaPhyl, a recent phylogeny-based method for the classification of microbial communities [31], and to Random Forest, used as both classifier and feature selection method.
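A hedged sketch of the "RF + RF" pattern mentioned here, with Random Forest importances driving the selection and a second Random Forest doing the classification (the threshold and parameters are illustrative, not those used in the paper):

from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectFromModel
from sklearn.ensemble import RandomForestClassifier

rf_rf = Pipeline([
    ("select", SelectFromModel(
        RandomForestClassifier(n_estimators=500, random_state=0),
        threshold="median")),                            # keep features above median importance
    ("classify", RandomForestClassifier(n_estimators=500, random_state=1)),
])
# rf_rf can then be evaluated with the same cross-validation scheme sketched above,
# so that the importance-based selection is also refitted on each training fold.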
feature selection is mentioned in 7 sentences in this paper.
Christopher R. S. Banerji, Simone Severini, Carlos Caldas, Andrew E. Teschendorff
Discussion
We thus used feature selection to derive a small set of genes that captures the prognostic power of signalling entropy independently of other clinical variables, representing a more readily applicable quantifier of stemness and intra-tumour heterogeneity.
Discussion
In comparing signalling entropy to signatures such as MammaPrint, it is worth pointing out that a direct comparison is unfair, as signalling entropy does not involve feature selection.
Discussion
Although signalling entropy was not found to outperform existing prognostic markers in lung adenocarcinoma, by using the SE score, derived by signalling-entropy-guided feature selection, it was possible to outperform existing state-of-the-art prognostic factors such as CADM1 expression across independent data sets.
Signalling entropy’s prognostic power in breast cancer can be represented by a small number of genes
By using signalling entropy to refine a set of prognostic genes identified by Cox regression, our approach improves on feature selection based on correlation with outcome [24].
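As a hypothetical illustration of the starting point of such an approach (not the authors' code; the lifelines library, column names and data layout are assumptions), prognostic genes could be ranked by univariate Cox regression p-values before any refinement:

import pandas as pd
from lifelines import CoxPHFitter

def rank_genes_by_cox(expr, time, event):
    # expr: DataFrame of expression values (samples x genes);
    # time, event: survival time and event indicator per sample.
    rows = []
    for gene in expr.columns:
        df = pd.DataFrame({"x": expr[gene].values, "time": time, "event": event})
        cph = CoxPHFitter()
        cph.fit(df, duration_col="time", event_col="event")
        rows.append((gene, cph.summary.loc["x", "p"]))
    return sorted(rows, key=lambda r: r[1])              # most significant genes first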
Signalling entropy’s prognostic power in breast cancer can be represented by a small number of genes
Criticism of feature selection for prognostic classifiers based on gene sets ranked by correlation with outcome has stemmed from the considerable discordance of such features between data sets [47, 48].
feature selection is mentioned in 5 sentences in this paper.
Minseung Kim, Violeta Zorraquino, Ilias Tagkopoulos
Feature selection by mutual information
Mutual information is a stochastic measure of dependence [69] and it has been widely applied in feature selection in order to find an informative subset for model training [70].
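A minimal sketch of mutual-information-based feature selection, assuming scikit-learn's mutual information estimator and placeholder data (this is not the authors' implementation):

import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
X = rng.random((100, 50))             # hypothetical expression matrix (samples x features)
y = rng.integers(0, 2, size=100)      # hypothetical condition labels

mi = mutual_info_classif(X, y, random_state=0)   # MI between each feature and the labels
top = np.argsort(mi)[::-1][:10]                  # indices of the 10 most informative features
print(top)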
Feature selection by mutual information
All the analyses in this study, other than the cross-validation of the model, used the features selected from the complete data.
feature selection is mentioned in 3 sentences in this paper.