Distributional assumptions implicit in MID | Although the MID estimator was originally described in information-theoretic language, we have shown that, when used with plugin estimators for information-theoretic quantities, it is mathematically identical to the maximum likelihood estimator for a linear-nonlinear-Poisson (LNP) encoding model. |
Distributional assumptions implicit in MID | However, conditionally independent spiking is also the fundamental assumption underlying the Poisson model and, as we have shown, the standard MID estimator (based on the KL-divergence between histograms) is mathematically identical to the maximum likelihood estimator for an LNP model with piecewise constant nonlinearity. |
Distributional assumptions implicit in MID | Thus, MID achieves no more and no less than a maximum likelihood estimator for a Poisson response model. |
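The equivalence stated above can be made concrete with a small numerical sketch (toy data; the function and variable names here are illustrative, not from the original analysis). For a fixed filter, the ML nonlinearity of an LNP model with piecewise-constant nonlinearity is the plug-in histogram ratio of spike counts to bin occupancy, so maximizing the Poisson log-likelihood over filters optimizes the same histogram-based objective as MID:

```python
import numpy as np

rng = np.random.default_rng(0)

def lnp_hist_loglik(w, X, y, n_bins=20):
    """Poisson log-likelihood of an LNP model whose nonlinearity is
    piecewise constant over histogram bins of the projected stimulus.
    For a fixed filter w, the ML rate in each bin is spikes/occupancy."""
    z = X @ w
    edges = np.quantile(z, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, z, side="right") - 1, 0, n_bins - 1)
    occ = np.bincount(idx, minlength=n_bins)             # samples per bin
    spk = np.bincount(idx, weights=y, minlength=n_bins)  # spikes per bin
    rate = np.where(occ > 0, spk / np.maximum(occ, 1), 0.0)  # plug-in ML rate
    lam = rate[idx]
    # Poisson log-likelihood, dropping the y!-dependent constant
    return np.sum(y * np.log(np.maximum(lam, 1e-12)) - lam)

# toy data: one relevant direction inside a 5-D white-noise stimulus
X = rng.normal(size=(5000, 5))
w_true = np.array([1.0, 0.5, 0.0, 0.0, 0.0])
y = rng.poisson(np.exp(0.5 * (X @ w_true)).clip(max=10))

# the MID/ML objective prefers the true filter over a random one
ll_true = lnp_hist_loglik(w_true, X, y)
ll_rand = lnp_hist_loglik(rng.normal(size=5), X, y)
```

Because the per-bin ML rates are exactly the plug-in density ratio, comparing `ll_true` and `ll_rand` ranks filters the same way the histogram-based MID objective would.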
Equivalence of MID and maximum-likelihood LNP | The estimated densities are also known as the “plug-in” estimates, and correspond to maximum likelihood estimates for the densities in question. |
Generalizations | From a model-based perspective, the generalizations correspond to maximum likelihood estimators for a linear-nonlinear-Bernoulli (LNB) model (for binary spike counts), and the linear-nonlinear-Count (LNC) model (for arbitrary discrete spike counts). |
Generalizations | We could analogously define arbitrary “LNX” models, where X stands in for any probability distribution over the neural response (analog or discrete), and perform dimensionality reduction by maximizing likelihood for the filter parameters under this model. |
Generalizations | The maximum likelihood estimate is therefore equally a maximal-information estimate. |
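The identity behind this statement can be sketched explicitly (notation ours, assuming histogram plug-in estimates): with projected stimulus z = wᵀx and mean spike count ȳ, substituting the per-bin ML rates back into the Poisson log-likelihood gives, up to terms independent of the filter w,

```latex
% Per-sample LNP log-likelihood at the plug-in (histogram) nonlinearity:
% the filter-dependent part is the mean rate times the plug-in
% single-spike information, so maximizing likelihood over w maximizes
% the KL divergence between spike-triggered and raw projections.
\frac{1}{N}\log P(\mathbf{y}\mid\mathbf{x};w)
  \;=\; \bar{y}\,D_{\mathrm{KL}}\!\left(\hat{p}(z\mid\mathrm{spike})\,\middle\|\,\hat{p}(z)\right) \;+\; \mathrm{const}.
```

The filter that maximizes the likelihood therefore also maximizes the plug-in single-spike information.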
Identifying high-dimensional subspaces | For each neuron, we fit an LNP model using: (1) the information-theoretic spike-triggered average and covariance (iSTAC) estimator [19]; and (2) the maximum likelihood estimator for an LNP model with nonlinearity parametrized by first-order CBFs. |
Identifying high-dimensional subspaces | Fig. 8 compares the performance of these estimators on neural data, and illustrates the ability to tractably recover high-dimensional feature spaces using maximum likelihood methods, provided that the nonlinearity is parametrized appropriately. |
Introduction | We show that this estimator is formally identical to the maximum likelihood (ML) estimator for the parameters of a linear-nonlinear-Poisson (LNP) encoding model. |
Relationship to previous work | Such a nonlinearity readily yields maximum likelihood estimators based on the STA and STC. |
Application to real outbreaks | Our maximum likelihood estimates suggest that the over 20 age group had substantial preexisting immunity against monkeypox and H5N1, and no immunity against H7N9 or MERS-CoV (Fig. |
Application to real outbreaks | Each point shows the joint maximum likelihood estimate of the effective reproduction number if both age groups were equally susceptible, p, and the relative susceptibility of over 20s, S. |
Application to real outbreaks | When censoring was included, our estimate for R increased slightly to 0.77 (0.57–1.03), but our maximum likelihood estimate for S remained the same. |
Estimating transmissibility and pre-existing immunity | We simulated 50 spillover events, and found the maximum likelihood estimate of R0 and S. We repeated this process for 1000 sets of outbreaks, obtaining reliable estimates of both R0 and 8 (Figs. |
Inference | We obtained maximum likelihood estimates for θ = {R0, S} by calculating the two-dimensional likelihood surface and using a simple grid-search algorithm to find the maximum point. |
Inference | Confidence intervals were calculated using profile likelihoods: for each value of R0, we found the maximum likelihood across all possible values of S; the 95% confidence interval was equivalent to the region of parameter space that was within 1.92 log-likelihood points of the maximum-likelihood estimate for both parameters [42]. |
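The grid-search-plus-profile-likelihood procedure described above is generic; a minimal sketch, with a quadratic toy log-likelihood standing in for the outbreak likelihood (all names illustrative):

```python
import numpy as np

def profile_ci(loglik, R0_grid, S_grid):
    """Grid-search MLE of (R0, S) and a 95% profile-likelihood CI for R0.
    `loglik(R0, S)` is any joint log-likelihood function."""
    L = np.array([[loglik(r, s) for s in S_grid] for r in R0_grid])
    i, j = np.unravel_index(np.argmax(L), L.shape)
    mle = (R0_grid[i], S_grid[j])
    prof = L.max(axis=1)            # profile over S for each value of R0
    keep = prof >= L[i, j] - 1.92   # chi-square(1) 95% cutoff, as in [42]
    ci = (R0_grid[keep].min(), R0_grid[keep].max())
    return mle, ci

# toy log-likelihood peaked at (R0, S) = (1.5, 0.4)
ll = lambda r, s: -((r - 1.5) ** 2) / 0.02 - ((s - 0.4) ** 2) / 0.005
R0s = np.linspace(0.5, 3.0, 251)
Ss = np.linspace(0.0, 1.0, 101)
mle, ci = profile_ci(ll, R0s, Ss)
```

Profiling (maximizing over the nuisance parameter at each grid value) is what distinguishes this interval from a simple likelihood slice at the joint MLE.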
Performance metrics | It was not possible to obtain a tractable expression for the maximum likelihood (ML) estimates of p and S, and hence R, using Equation 13. |
Performance metrics | Instead we calculated the ML estimate of the reproduction number, R, using the numerically estimated maximum likelihood values for p and S. |
Supporting Information | Blue line, relative error in maximum likelihood estimate for R in single-type model; red line, |
Supporting Information | We simulated 1000 sets of 50 outbreaks, and found the maximum likelihood estimates (MLEs) for parameters for each set. |
Modeling Relief Consumption Using Heuristics | sT, at the maximum likelihood parameters, θ, of each model. |
Modeling Relief Consumption Using Heuristics | Mean predicted consumption levels simulated from the maximum likelihood parameterizations of each model over each 10 trials of the experiment for each participant are plotted against the same metric derived from the observed data. |
Modeling Relief Consumption Using Heuristics | sT, at the maximum likelihood parameterization, θ, of each model. |
Predicting Consumption from One-Off Choices between Delayed Pains | To estimate the proportion of variance in the observed data accounted for by the models, we found the mean consumption level for each participant across each 10 trials of the experiment, before calculating the same measure by simulating 10000 consumption paths resulting from the maximum likelihood parameterization of the model. |
Relief Consumption Experiment | Model fitting followed a maximum likelihood framework, using the softmax policy to generate the probability of observing each possible (rounded) level of relief consumption, given a particular set of model parameters. |
Relief Consumption Experiment | For each subject 10 iterations of the optimization were performed, and the maximum likelihood estimate across all iterations was selected. |
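A sketch of this fitting scheme, with a single inverse-temperature parameter standing in for the full model parameterization and a simple random-restart local search standing in for whatever optimizer was actually used (all names illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax_nll(beta, values, choices):
    """Negative log-likelihood of discrete choices under a softmax policy
    with inverse temperature beta; values[t] scores each option on trial t."""
    logits = beta * values
    m = logits.max(axis=1, keepdims=True)
    logZ = (m + np.log(np.exp(logits - m).sum(axis=1, keepdims=True))).ravel()
    chosen = logits[np.arange(len(choices)), choices]
    return -(chosen - logZ).sum()

def fit_beta(values, choices, n_restarts=10, n_steps=200):
    """Multi-restart fit: run a crude local search from 10 starting points
    and keep the best, mirroring the 10-iteration scheme described above."""
    best_nll, best_beta = np.inf, None
    for _ in range(n_restarts):
        beta = rng.uniform(0.1, 10.0)
        nll = softmax_nll(beta, values, choices)
        for _ in range(n_steps):
            cand = abs(beta + rng.normal(scale=0.3))
            c_nll = softmax_nll(cand, values, choices)
            if c_nll < nll:
                beta, nll = cand, c_nll
        if nll < best_nll:
            best_nll, best_beta = nll, beta
    return best_beta

# simulate choices from a known beta, then recover it
values = rng.normal(size=(500, 5))
true_beta = 2.0
p = np.exp(true_beta * values)
p /= p.sum(axis=1, keepdims=True)
choices = np.array([rng.choice(5, p=row) for row in p])
beta_hat = fit_beta(values, choices)
```

Restarting from multiple initial points guards against local optima in the likelihood surface, which is the point of selecting the best of 10 iterations.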
Supporting Information | 5, overlaid with consumption simulated (red circles) from the maximum likelihood parameterization, θ, of the Income Maximization model. |
Supporting Information | Whilst the model fitting process takes account of the observed state of capital on each trial, the simulated paths here are sampled anew from the maximum likelihood parameterization without reference to the data. |
Supporting Information | sT, at the maximum likelihood parameterization, θ, of each model overlaid with observed consumption data (white circles). |
Introduction | We test the neutral null hypothesis using a maximum likelihood approach (using an exact expression [26] for the likelihood of a sample from the SNM), where p-values are evaluated by a parametric bootstrap procedure. |
Testing the neutral null model | In order to quantify whether a particular data set is consistent with neutral theory, we adopt a maximum likelihood approach together with a parametric bootstrap, as used by Walker and Cyr [45] and Rosindell and Etienne [63]. |
Testing the neutral null model | We choose the maximized likelihood of the neutral model as our test statistic. |
Testing the neutral null model | For a test data set XT, we find the maximum likelihood parameter estimates (m̂, θ̂), i.e.
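The bootstrap logic can be sketched compactly, with a simple Poisson likelihood standing in for the exact neutral-model sampling likelihood of [26] (model and names illustrative): fit the null model, use its maximized log-likelihood as the test statistic, simulate new data sets at the fitted parameters, refit each, and compare:

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(2)

def max_loglik(x):
    """Maximized Poisson log-likelihood (illustrative stand-in for the
    exact neutral-model sampling likelihood)."""
    lam = x.mean()  # Poisson MLE of the rate
    return float(np.sum(x * np.log(lam) - lam) - sum(lgamma(v + 1) for v in x))

def parametric_bootstrap_p(x, n_boot=200):
    """Test statistic = maximized likelihood under the null model;
    p-value = fraction of null simulations whose refitted statistic
    is as small as (i.e. fits as poorly as) the observed one."""
    t_obs = max_loglik(x)
    lam_hat = x.mean()
    t_boot = [max_loglik(rng.poisson(lam_hat, size=len(x)))
              for _ in range(n_boot)]
    return float(np.mean([t <= t_obs for t in t_boot]))

# data actually drawn from the null model
x_null = rng.poisson(3.0, size=100)
p_null = parametric_bootstrap_p(x_null)
```

Simulating at the fitted parameters (rather than permuting the data) is what makes the bootstrap "parametric": the null distribution of the statistic accounts for the fact that the parameters were themselves estimated.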
Abstract | This is true for all methods that evaluate a phylogenetic network solely on the basis of how well the displayed trees fit the available data, including all methods based on input data consisting of clades, triples, quartets, or trees with any number of taxa, and also sequence-based approaches such as popular formalisations of maximum parsimony and maximum likelihood for networks. |
An Identifiability Problem | The same holds for many, sequence-based, maximum parsimony and maximum likelihood approaches proposed in recent papers. |
An Identifiability Problem | As for maximum likelihood (ML), Nakhleh and collaborators [2, 32, 33, 38] have proposed an elegant framework whereby a phylogenetic network N is described not only by a network topology, but also by edge lengths and inheritance probabilities associated with the reticulations of N. As a result, any tree T displayed by N has edge lengths, allowing the calculation of its likelihood Pr(A | T) with respect to any alignment A, and an associated probability of being observed, Pr(T | N). |
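Within that framework, the resulting network likelihood can be sketched as an inheritance-weighted mixture over displayed trees (a schematic rendering; notation ours, with T(N) denoting the set of trees displayed by N):

```latex
% Likelihood of network N for alignment A, combining the two
% ingredients above: the tree likelihoods Pr(A|T) and the
% inheritance probabilities Pr(T|N) of the displayed trees.
\Pr(A \mid N) \;=\; \sum_{T \in \mathcal{T}(N)} \Pr(T \mid N)\,\Pr(A \mid T).
```

Because the data enter only through the displayed trees, two networks with the same displayed-tree distributions are indistinguishable under this score, which is the identifiability problem at issue.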
Introduction | In the following, we describe how this applies to the two main approaches for explicit network reconstruction: consistency-based approaches (see [28] for a survey), which seek a network consistent with a number of prior evolutionary inferences (typically trees or groupings of taxa), and sequence-based approaches, such as standard formulations of maximum parsimony and maximum likelihood for networks [2, 29–33]. |
Application to pathogen infection experiments | The maximum likelihood attachments of features to the knockdown genes and the null node are shown in S15 Fig and S16 Fig, together with a detailed description of the different feature types. |
NEMix inference | A derivation of the expected hidden log-likelihood and the maximum likelihood estimates is given in ‘Estimating the hidden signal’ of S1 Text. |
Simulation study | For NEMs and sc-NEMs, we used maximum likelihood estimation to infer θ; in the NEMix model it is estimated with an EM algorithm. |