Index of papers in April 2015 that mention
  • training set
Mei Zhan, Matthew M. Crane, Eugeni V. Entchev, Antonio Caballero, Diana Andrea Fernandes de Abreu, QueeLim Ch’ng, Hang Lu
Bright-Field Head Identification
However, in addition to informative feature selection and the curation of a representative training set, the performance of SVM classification models is subject to several parameters associated with the model itself and its kernel function [34, 48].
Bright-Field Head Identification
Thus, to ensure good performance of the final SVM model, we first optimize model parameters based on fivefold cross-validation on the training set (Fig 3A and 3B, Materials and Methods).
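As a concrete illustration of this step, here is a minimal sketch of fivefold grid-search tuning of an SVM's C and kernel parameters; the synthetic 14-feature data, parameter ranges, and scoring metric are assumptions for illustration, not taken from the paper's code.

from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Synthetic stand-in for an annotated training set with 14 features.
X_train, y_train = make_classification(n_samples=200, n_features=14, random_state=0)

# Grid of candidate SVM parameters: soft-margin penalty C and RBF kernel width gamma.
param_grid = {"C": [0.1, 1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1, 1]}

# Fivefold cross-validation over the grid; the best parameters would then be
# used to train the final model on the full training set.
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5, scoring="f1")
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)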
Bright-Field Head Identification
To visualize feature and classifier performance, we use Fisher’s linear discriminant analysis to linearly project the 14 layer 1 features of the training set onto two dimensions that show maximum separation between grinder and background particles (Fig 3C).
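For readers unfamiliar with the technique, the sketch below computes the classical Fisher discriminant direction for two classes on synthetic 14-dimensional data. Note that the binary-class criterion yields a single axis, so this is only a simplified stand-in for the two-dimensional projection described in the paper.

import numpy as np

def fisher_direction(X, y):
    # Direction w maximizing between-class separation relative to
    # within-class scatter: w is proportional to Sw^-1 (mu1 - mu0) for binary labels.
    X0, X1 = X[y == 0], X[y == 1]
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)  # within-class scatter
    w = np.linalg.solve(Sw, mu1 - mu0)
    return w / np.linalg.norm(w)

# Toy data: two 14-feature classes with shifted means (e.g. background vs. grinder particles).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (100, 14)), rng.normal(1.0, 1.0, (100, 14))])
y = np.array([0] * 100 + [1] * 100)

projection = X @ fisher_direction(X, y)  # 1-D coordinates for visualization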
Discussion
In both layers of classification, we adopt a supervised learning approach that depends upon human annotation of training sets of data.
Discussion
Overall, our proposed methodology provides a pipeline that streamlines and formalizes the image processing steps after the annotation of a training set.
Identification of Fluorescently Labeled Cells
Using this feature set, we optimize and train a layer 1 SVM classifier using a manually annotated training set (n = 218) (S4A Fig, Materials and Methods) and show that it is sufficient for identifying cellular regions with relatively high sensitivity and specificity (Fig 5D and S4A Fig).
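For clarity on how such metrics are typically reported, the snippet below computes sensitivity and specificity from a confusion matrix; the labels and predictions are toy values, not the paper's data.

import numpy as np
from sklearn.metrics import confusion_matrix

# Toy annotated labels and classifier predictions (1 = cellular region, 0 = background).
y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 0])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)  # true positive rate
specificity = tn / (tn + fp)  # true negative rate
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")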
Identification of Fluorescently Labeled Cells
To construct a new problem-specific layer 2 classifier based on relationships within these tetrad candidates, we optimize and train an SVM model based on a manually annotated training set (n = 324) (S4C Fig).
training set is mentioned in 13 sentences in this paper.
Topics mentioned in this paper:
Sébastien Giguère, François Laviolette, Mario Marchand, Denise Tremblay, Sylvain Moineau, Xinxia Liang, Éric Biron, Jacques Corbeil
Application in combinatorial drug discovery
They are then used as a training set to produce a predictor h_y.
E" o \l
Finally, all predicted antimicrobial peptides are significantly different from those of the training set, sharing only 40% similarity with their most similar peptide in the CAMPs dataset.
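As a rough illustration of this kind of comparison, the snippet below reports a candidate peptide's highest similarity to a set of training peptides using normalized Levenshtein similarity, a stand-in assumption for whatever sequence-similarity measure the authors actually used.

def levenshtein(a, b):
    # Classic dynamic-programming edit distance between two strings.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def max_similarity(candidate, training_peptides):
    # Similarity to the nearest training peptide, scaled to [0, 1].
    return max(1 - levenshtein(candidate, p) / max(len(candidate), len(p))
               for p in training_peptides)

# Toy training set containing peptides quoted in the text.
print(max_similarity("WWKWWKRLRRLFLLV", ["GWRLIKKILRVFKGL", "VEWAK"]))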
Improving the bioactivity of peptides
For the CAMPs dataset, the proposed approach predicted that peptide WWKWWKRLRRLFLLV should have an antibacterial potency of 1.09, a logarithmic improvement of 0.266 over the best peptide in the training set (GWRLIKKILRVFKGL, 0.824), and a substantial improvement over the average potency of that dataset (0.39).
Improving the bioactivity of peptides
On the BPPs dataset, the proposed approach predicted that the pentapeptide IEWAK should have an activity of 2.195, slightly less than the best peptide of the training set (VEWAK, 2.73, predicted as 2.192).
Improving the bioactivity of peptides
Hence, our proposed learning algorithm predicts new peptides having biological activities equivalent to the best of the training set and, in some cases, substantially improved activities.
Introduction
By starting with a training set containing approximately 100 peptides with their corresponding validated bioactivity (binding affinity, IC50, etc.), we expect that a state-of-the-art kernel method will give a bioactivity model that is sufficiently accurate to find new peptides with activities higher than those of the 100 peptides used to learn the model.
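A hedged sketch of this idea, assuming a simple k-mer (spectrum) feature map and kernel ridge regression in place of the specialized string kernel used in the paper; the peptides and bioactivity values below are made up for illustration.

import numpy as np
from sklearn.kernel_ridge import KernelRidge

AA = "ACDEFGHIKLMNPQRSTVWY"
DIMERS = [a + b for a in AA for b in AA]
INDEX = {km: i for i, km in enumerate(DIMERS)}

def kmer_features(seq, k=2):
    # Count occurrences of each k-mer (here dimers) in the peptide sequence.
    v = np.zeros(len(DIMERS))
    for i in range(len(seq) - k + 1):
        v[INDEX[seq[i:i + k]]] += 1
    return v

# Illustrative training set: peptide sequences with made-up bioactivities.
peptides = ["GWRLIKKILRVFKGL", "VEWAK", "IEWAK", "WWKWWKRLRRLFLLV"]
activities = np.array([0.5, 1.2, 1.1, 0.8])

X = np.vstack([kmer_features(p) for p in peptides])
model = KernelRidge(kernel="rbf", alpha=1.0).fit(X, activities)
print(model.predict(X))  # fitted bioactivity predictions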
Introduction
Moreover, the proposed approach can be employed without known ligands for the target protein because it can leverage recent multi-target machine learning predictors [10, 14] where ligands for similar targets can serve as an initial training set.
Simulation of a drug discovery
Correlation coefficient of random predictions on the CAMPs data while varying R, the number of random peptides used as the training set.
The machine learning approach
For the sake of comparison, we would like to highlight that when the weights are uniform (1/m), k = 1, σp = 0, and σc = 0, the predictor h_y(x) in Equation (6) reduces to predicting the probability of sequence x given the position-specific weight matrix (PSWM) obtained from the training set.
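To make this special case concrete, the sketch below builds a position-specific weight matrix from equal-length training peptides and scores a new sequence by its log-probability under that matrix; the sequences are toy examples, not the paper's data.

import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"
IDX = {a: i for i, a in enumerate(AA)}

def pswm(training_peptides, pseudocount=1.0):
    # Per-position amino acid probabilities estimated from the training set.
    length = len(training_peptides[0])
    counts = np.full((length, len(AA)), pseudocount)
    for pep in training_peptides:
        for pos, aa in enumerate(pep):
            counts[pos, IDX[aa]] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def log_prob(sequence, matrix):
    # Log-probability of the sequence under the PSWM (independent positions).
    return sum(np.log(matrix[pos, IDX[aa]]) for pos, aa in enumerate(sequence))

matrix = pswm(["VEWAK", "IEWAK", "LEWAK"])  # toy equal-length training peptides
print(log_prob("AEWAK", matrix))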
training set is mentioned in 9 sentences in this paper.
Topics mentioned in this paper: