Index of papers in PLOS Comp. Biol. that mention
  • mutual information
Ross S. Williamson, Maneesh Sahani, Jonathan W. Pillow
D_KL(p(x | r = 0) || p(x)) is the information (per spike) carried by silences, and
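For context (a standard identity for a binary response, not a formula quoted from the paper), the stimulus-response mutual information decomposes into a spike term and a silence term:

```latex
% Decomposition of I(x; r) for a binary response r in {0, 1},
% where p = P(r = 1) is the spike probability per bin.
I(x; r) = \sum_{r \in \{0,1\}} P(r)\, D_{\mathrm{KL}}\!\big(p(x \mid r)\,\|\,p(x)\big)
        = p\, D_{\mathrm{KL}}\!\big(p(x \mid r{=}1)\,\|\,p(x)\big)
        + (1-p)\, D_{\mathrm{KL}}\!\big(p(x \mid r{=}0)\,\|\,p(x)\big)
```

The first term, divided by the spike probability p, is the usual single-spike information exploited by MID; the second is the silence term the excerpt refers to.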
Thus, for example, if 20% of the bins in a binary spike train contain a spike, the standard MID estimator will necessarily neglect at least 10% of the total mutual information.
Distributional assumptions implicit in MID
MID does not maximize the mutual information between the projected stimulus and the spike response when the distribution of spikes conditioned on stimuli is not Poisson; it is an inconsistent estimator for the relevant stimulus subspace in such cases.
Generalizations
For both models, we obtained an equivalent relationship between log-likelihood and an estimate of mutual information between stimulus and response.
Methods
In the limit of small p (i.e., the Poisson limit), the mutual information is dominated by the divergence between Q0 and Q1 … is obtained when p = …
Models with Bernoulli spiking
To show this, we derive the mutual information between the stimulus and a Bernoulli distributed spike count, and show that this quantity is closely related to the log-likelihood under a linear-nonlinear-Bernoulli encoding model.
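A hedged sketch of the kind of relationship at issue (a generic plug-in identity, not the paper's exact derivation): for projected stimuli x_t = k^T s_t and binary responses r_t, the plug-in estimate of the mutual information is an average log-likelihood ratio, so it differs from the LN-Bernoulli log-likelihood only by a term that does not depend on the projection:

```latex
% Plug-in MI estimate written as a difference of average log-likelihoods
% (empirical expectation over the N observed stimulus-response pairs).
\hat{I}(x; r)
  = \frac{1}{N} \sum_{t=1}^{N} \log \frac{\hat{p}(r_t \mid x_t)}{\hat{p}(r_t)}
  = \frac{1}{N} \sum_{t=1}^{N} \log \hat{p}(r_t \mid x_t)
  \;-\; \frac{1}{N} \sum_{t=1}^{N} \log \hat{p}(r_t)
```

Because the second sum does not depend on the projection k, maximizing this MI estimate over k is equivalent to maximizing the Bernoulli log-likelihood of the corresponding LN model.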
Models with Bernoulli spiking
The mutual information between the projected stimulus
Models with arbitrary spike count distributions
As we will see, maximizing the mutual information based on histogram estimators is once again equivalent to maximizing the likelihood of an LN model with piecewise-constant mappings from the linear stimulus projection to count probabilities.
Minimum information loss for binary spiking
If the binned spike counts r_t measured in response to stimuli s_t are not Poisson distributed, the projection matrix K which maximizes the mutual information between K^T s and r can be found as follows.
Minimum information loss for binary spiking
To ease comparison with the single-spike information, which is measured in bits per spike, we normalize the mutual information by the mean spike count to obtain … the normalized information carried by the j-spike responses.
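Written out (a minimal sketch assuming the usual definitions; the paper's own expression is not reproduced in this excerpt), the normalization is simply:

```latex
% Mutual information per spike: divide by the mean spike count so that the
% quantity is comparable to the single-spike information (bits/spike).
I_{\mathrm{per\,spike}} = \frac{I(x; r)}{\mathbb{E}[r]}
```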
Minimum information loss for binary spiking
We can estimate the mutual information from data using a histogram-based plug-in estimator:
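The estimator itself is not reproduced in this excerpt; a minimal plug-in sketch of the same idea (bin the projected stimulus, tabulate the joint histogram with the spike counts, and apply the discrete MI formula) might look like the following, where the bin count and variable names are illustrative assumptions:

```python
import numpy as np

def plugin_mutual_information(x_proj, spikes, n_bins=25):
    """Histogram-based plug-in estimate of I(x; r) in bits.

    x_proj : projected stimulus values (e.g., k^T s_t for each time bin)
    spikes : binary (or small-integer) spike counts, one per time bin
    """
    x_proj = np.asarray(x_proj, dtype=float)
    spikes = np.asarray(spikes, dtype=int)

    # Discretize the projected stimulus into histogram bins.
    edges = np.histogram_bin_edges(x_proj, bins=n_bins)
    x_bin = np.clip(np.digitize(x_proj, edges[1:-1]), 0, n_bins - 1)

    # Empirical joint distribution over (stimulus bin, spike count).
    counts = np.zeros((n_bins, spikes.max() + 1))
    for xb, r in zip(x_bin, spikes):
        counts[xb, r] += 1
    p_joint = counts / counts.sum()

    # Marginals and the discrete mutual-information formula.
    p_x = p_joint.sum(axis=1, keepdims=True)
    p_r = p_joint.sum(axis=0, keepdims=True)
    nz = p_joint > 0
    return float(np.sum(p_joint[nz] * np.log2(p_joint[nz] / (p_x @ p_r)[nz])))
```

Maximizing such a quantity over the projection, as described above, recovers the most informative stimulus dimensions under the piecewise-constant (histogram) mapping.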
mutual information is mentioned in 14 sentences in this paper.
Topics mentioned in this paper:
William F. Flynn, Max W. Chang, Zhiqiang Tan, Glenn Oliveira, Jinyun Yuan, Jason F. Okulicz, Bruce E. Torbett, Ronald M. Levy
Abstract
To examine covariation of mutations between two different sites using deep sequencing data, we developed an approach to estimate the tight bounds on the two-site bivariate probabilities in each viral sample, and the mutual information between pairs of positions based on all the bounds.
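The paper's bound estimation relies on read-level linkage information that is not shown here; as a rough illustration of the downstream step, the sketch below computes the mutual information implied by a two-site bivariate probability, together with the generic (Fréchet-style) bounds that any bivariate probability must satisfy given its single-site marginals:

```python
import numpy as np

def two_site_mi(p11, p1_dot, p_dot1):
    """MI (bits) between two binary sites given P(mut_i & mut_j) and the marginals."""
    # Fill out the full 2x2 joint table from the bivariate probability and marginals.
    joint = np.array([
        [1.0 - p1_dot - p_dot1 + p11, p_dot1 - p11],
        [p1_dot - p11,                p11],
    ])
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz])))

def frechet_bounds(p1_dot, p_dot1):
    """Generic bounds on the bivariate probability given the marginals
    (the paper's tighter bounds additionally use within-read linkage)."""
    return max(0.0, p1_dot + p_dot1 - 1.0), min(p1_dot, p_dot1)
```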
Correlation analysis in protease using bound estimates captures known pair correlations
Shown is a plot of the precision (also known as the positive predictive value, PPV) for the top 40% of correlated PR-PR pairs ranked by mutual information.
Correlation analysis in protease using bound estimates captures known pair correlations
True positives were determined through a mutual information calculation similar to the calculations in [3].
Covariation of mutations in Gag-protease proteins
In the second step, using these joint probabilities, we assess the correlation implicit in the joint probabilities involving two positions in the viral genome using mutual information (MI).
Covariation of mutations in Gag-protease proteins
We quantify this deviation by the mutual information (MI); pairs with the largest mutual information have the strongest covariation.
Discussion
The strongest 50 correlations as measured by mutual information (MI) from the following regions are shown: Gag-Gag (blue), Gag-PR (red).
Discussion
There exist several methods, some based on mutual information, which have been developed to extract direct structural contacts (typically <8 Å) from multiple sequence alignments [37,59–63]; it is possible to adapt these methods to detect direct structural propensities using covariation extracted from deep sequencing.
Introduction
This then allowed us to utilize mutual information (MI) to calculate the pair correlations between pairs of positions in gag.
Strongest correlations in Gag indicate functional and structural patterns
While Gag is not the primary target of protease inhibitors, we observe that the correlations within Gag proteins, as measured by mutual information, are of similar magnitude to those in protease (Tables 2, 3, 4).
Strongest correlations in Gag indicate functional and structural patterns
We find that the pair of positions with the largest mutual information identified in this study, Gag M228-G248, is within 6 Å in all CA structures.
Validation of bivariate marginal estimation
The mutual information (MI) computed for each pair using the bounding procedure is in good agreement with the MI computed using the known bivariate marginal probabilities, as shown in Fig 4.
mutual information is mentioned in 18 sentences in this paper.
Topics mentioned in this paper:
Minseung Kim, Violeta Zorraquino, Ilias Tagkopoulos
Biomarker discovery through functional and network analysis
The decrease of mutual information in ranked genes follows an inverse logarithmic relationship (Fig.
Biomarker discovery through functional and network analysis
For each classifier, we selected the gene subset that accounts for the top 10% of the mutual information content of all genes, yielding feature sets that range from 49 to 136 genes.
Feature selection by mutual information
Mutual information is a stochastic measure of dependence [69] and it has been widely applied in feature selection in order to find an informative subset for model training [70].
Feature selection by mutual information
In our work, each of the eight models was trained with the top k-ranked genes based on their mutual information (MI) to the label, where MI is measured by
Selection of most informative genes and functional enrichment analysis
The most informative genes are selected by measuring the mutual information (in bits) for each of the characteristic variables and then selecting the top 10% genes based on their information content.
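A hedged sketch of this selection step (not the authors' pipeline: scikit-learn's estimator stands in for whatever MI measure the paper uses, and the cumulative top-10% rule follows the description above):

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def select_top_mi_genes(X, y, fraction=0.10):
    """Rank genes by MI with the class label and keep the genes that together
    account for the top `fraction` of the total mutual information."""
    mi = mutual_info_classif(X, y)       # one MI estimate per gene (nats)
    order = np.argsort(mi)[::-1]         # genes ranked from most to least informative
    cum = np.cumsum(mi[order])
    n_keep = int(np.searchsorted(cum, fraction * mi.sum()) + 1)
    return order[:n_keep], mi
```

Under such a rule, the number of retained genes naturally varies from classifier to classifier, consistent with the 49-136-gene feature sets mentioned above.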
Selection of most informative genes and functional enrichment analysis
In addition to DAVID, we have performed a GSEA analysis [75] where each gene is ranked by its mutual information (S9 Table).
Supporting Information
The intersection of the feature gene set when mutual information (MI) and differential expression (DEG) are used for ranking.
Supporting Information
Ranked list of all genes in the EcoGEC compendium based on their mutual information for the phase, growth and aerobic classifier, before and after iterative learning.
Supporting Information
Ranks and mutual information of the genes selected in each classifier of carbon source and oxygen supply.
mutual information is mentioned in 10 sentences in this paper.
Topics mentioned in this paper:
Naoki Hiratani, Tomoki Fukai
Evaluation of the performance
Mutual information.
Evaluation of the performance
Therefore, mutual information can be defined as
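The excerpt ends where the paper's equation begins; the standard discrete definition it presumably specializes is:

```latex
% Mutual information between discrete variables X and Y (in bits).
I(X; Y) = \sum_{x} \sum_{y} p(x, y)\, \log_2 \frac{p(x, y)}{p(x)\, p(y)}
```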
Lateral inhibition enhances minor source detection by STDP
The same argument holds if mutual information is used for performance evaluation (green lines in Fig 2D and 2E).
Model
Both cross-correlation and mutual information behave as they do in the Poisson model, but the performance is slightly better, possibly because the dynamics are deterministic (Fig 1D and 1E, S1B and S1C Fig); however, membrane potentials show different responses for correlation events (S1D Fig) because output neurons are constantly in high-conductance states, so that correlation events immediately cause spikes.
Supporting Information
(B) Cross-correlation and mutual information calculated for various delays.
mutual information is mentioned in 5 sentences in this paper.
Topics mentioned in this paper:
David Lovell, Vera Pawlowsky-Glahn, Juan José Egozcue, Samuel Marguerat, Jürg Bähler
Caution about correlation
This concern extends to methods based on mutual information (e.g., relevance networks [17]) since, as Fig. 1 shows, the bivariate joint distribution of relative abundances (from which mutual information is estimated) can be quite different from the bivariate joint distribution of the absolute abundances that gave rise to them.
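A toy simulation (illustrative assumptions only, not the paper's analysis) makes the point concrete: components that are statistically independent in absolute abundance acquire an apparent association once the data are closed to relative abundances, so correlation or mutual information computed on the relative data says nothing about the absolute data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Independent absolute abundances for three components (log-normal, arbitrary scales).
absolute = np.exp(rng.normal(loc=[2.0, 2.0, 5.0], scale=[0.3, 0.3, 1.0], size=(10_000, 3)))

# Closure: convert to relative abundances (each row sums to 1).
relative = absolute / absolute.sum(axis=1, keepdims=True)

# Components 0 and 1 are independent in absolute terms, but their relative
# abundances become associated because both share the fluctuating third
# component in the denominator.
print(np.corrcoef(absolute[:, 0], absolute[:, 1])[0, 1])  # near zero
print(np.corrcoef(relative[:, 0], relative[:, 1])[0, 1])  # clearly non-zero
```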
Correlations between relative abundances tell us absolutely nothing
This many-to-one mapping means that other measures of statistical association (e.g., rank correlations or mutual information) will not tell us anything either when applied to purely relative data.
Introduction
Thus, relative data is also problematic for mutual information and other distributional measures of association.
mutual information is mentioned in 4 sentences in this paper.
Topics mentioned in this paper: