Index of papers in March 2015 that mention
  • neural network
Matteo Mainetti, Giorgio A. Ascoli
Abstract
Using neural network simulations, we demonstrate that ADO is a suitable mechanism for BIG learning.
Author Summary
We introduce and evaluate a new biologically motivated learning rule for neural networks.
Author Summary
Such a basic geometric requirement, which was explicitly recognized in Donald Hebb’s original formulation of synaptic plasticity, is not usually accounted for in neural network learning rules.
Author Summary
Thus, the selectivity of synaptic formation implied by the ADO requirement is shown to provide a fundamental cognitive advantage over classic artificial neural networks.
Introduction
Here we formulate this notion quantitatively with a new neural network learning rule, demonstrating by construction that ADO is a suitable mechanism for BIG learning.
Neural Network Model and the BIG ADO Learning Rule
This work assumes the classic model of neural networks as directed graphs in which nodes represent neurons and each directed edge represents a connection between the axon of the pre-synaptic neuron and the dendrite of the post-synaptic neuron.
Neural Network Model and the BIG ADO Learning Rule
The learning rule introduced in this work implements a form of structural plasticity in neural networks that incorporates the constraint of proximity between pre- and post-synaptic partners, or axonal-dendritic overlap (ADO): if two neurons a and b fire together, a connection from a to b is only formed if the axon of a comes within a threshold distance of a dendrite of b.
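To make the quoted rule concrete, here is a minimal Python sketch of the gating step (our own illustration: the point-cloud representation of arbors, the function names, and the threshold value are assumptions, not the authors' code):

    import math

    # Illustration of the BIG ADO gate: a connection a -> b is created only
    # if a and b fire together AND the axon of a passes within a threshold
    # distance of a dendrite of b.

    ADO_THRESHOLD = 1.0  # assumed proximity threshold (arbitrary units)

    def ado_overlap(axon_pts_a, dendrite_pts_b, threshold=ADO_THRESHOLD):
        """True if any sampled axonal point of neuron a lies within
        `threshold` of any sampled dendritic point of neuron b."""
        return any(math.dist(p, q) <= threshold
                   for p in axon_pts_a for q in dendrite_pts_b)

    def big_ado_step(edges, fired, axons, dendrites):
        """For every co-firing ordered pair (a, b), add edge a -> b only
        when the ADO proximity condition also holds."""
        for a in fired:
            for b in fired:
                if a != b and (a, b) not in edges:
                    if ado_overlap(axons[a], dendrites[b]):
                        edges.add((a, b))
        return edges

Under this reading, Hebbian co-firing is necessary but not sufficient: the geometric overlap test gates which co-firing pairs actually wire together.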
Pre-Training and Testing Design
Specifically, when initially connecting the neural network, we select the pre-training subset of edges non-uniformly from the reality-generating graph, such that distinct groups of nodes are differentially represented.
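A sketch of such non-uniform selection (again our own illustration; the group structure and inclusion probabilities are assumed, not taken from the paper):

    import random

    def pretraining_edges(reality_edges, group_of, inclusion_prob):
        """Sample a pre-training subset of a 'reality-generating' graph.
        reality_edges: iterable of (a, b) pairs; group_of: node -> group;
        inclusion_prob: group -> probability of keeping edges from it."""
        return {(a, b) for (a, b) in reality_edges
                if random.random() < inclusion_prob[group_of[a]]}

Groups with a higher inclusion probability end up over-represented in the pre-training graph, which is the differential representation described above.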
neural network is mentioned in 20 sentences in this paper.
Tamar Friedlander, Avraham E. Mayo, Tsvi Tlusty, Uri Alon
Discussion
Bow-tie structures are also common in multilayered artificial neural networks used for classification and dimensionality reduction problems.
Discussion
While the functional role of bow-ties there parallels that of the biological bow-ties that are the focus of this study, these artificial neural networks are designed a priori to have a bow-tie structure.
Discussion
Multilayered neural networks often use an intermediate (hidden) layer whose number of nodes is smaller than the number of input and output nodes [30,75].
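As a generic illustration of such a bottleneck layout (our toy example, not the authors' model), a forward pass through a narrow hidden layer can be written as:

    import numpy as np

    # A 'bow-tie' layout: a 3-node hidden waist between 20-node fans.
    rng = np.random.default_rng(0)
    n_in, n_hidden, n_out = 20, 3, 20
    W1 = rng.normal(scale=0.1, size=(n_hidden, n_in))
    W2 = rng.normal(scale=0.1, size=(n_out, n_hidden))

    def forward(x):
        """Compress through the narrow waist, then expand back out."""
        h = np.tanh(W1 @ x)     # low-dimensional bottleneck activity
        return np.tanh(W2 @ h)

    y = forward(rng.normal(size=n_in))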
To test this hypothesis we employed a well-studied problem of image analysis using perceptron nonlinear neural networks [65,66].
Introduction
Generically, in fields as diverse as artificial neural networks [30] and the evolution of biological networks, simulations result in highly connected networks with no bow-tie [31–37].
Retina problem
We tested the evolution of bow-tie networks in this nonlinear problem, which resembles standard neural network studies [39,65,84].
neural network is mentioned in 8 sentences in this paper.
Jaldert O. Rombouts, Sander M. Bohte, Pieter R. Roelfsema
Abstract
The resulting learning rule endows neural networks with the capacity to create new working memory representations of task relevant information as persistent activity.
Biological plausibility, biological detail and future work
These connections might further expand the set of tasks that neural networks can master if trained by trial-and-error.
Comparison to previous modeling approaches
Earlier neural network models used “backpropagation-through-time”, but its mechanisms are biologically implausible [77].
Discussion
To the best of our knowledge, AuGMEnT is the first biologically plausible learning scheme that implements SARSA in a multilayer neural network equipped with working memory.
Discussion
These on-policy methods appear to be more stable than off-policy algorithms (such as Q-learning, which considers transitions not experienced by the network) when combined with neural networks (see e.g.
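The on-/off-policy contrast drawn here comes down to which value the temporal-difference update bootstraps from; the tabular sketch below is a generic illustration, not the AuGMEnT rule itself:

    # Q is assumed to be a dict mapping (state, action) pairs to values.

    def sarsa_update(Q, s, a, r, s2, a2, alpha=0.1, gamma=0.9):
        """On-policy: bootstrap from the action a2 the agent actually
        takes next, i.e. a transition the network experiences."""
        Q[s, a] += alpha * (r + gamma * Q[s2, a2] - Q[s, a])

    def q_learning_update(Q, s, a, r, s2, actions, alpha=0.1, gamma=0.9):
        """Off-policy: bootstrap from the greedy action, which the agent
        need not take (the 'transitions not experienced' above)."""
        Q[s, a] += alpha * (r + gamma * max(Q[s2, b] for b in actions)
                            - Q[s, a])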
Vibrotactile discrimination task
Several models addressed how neural network models can store F1 and compare it to F2 [46–48].
neural network is mentioned in 6 sentences in this paper.
Adrien Wohrer, Christian K. Machens
Case 1: all cells recorded
We implemented a recurrent neural network with N = 5000 integrate-and-fire neurons that encodes some input stimulus s in the spiking activity of its neurons, and we built a perceptual readout from that network according to our model, with parameters K* = 80 neurons, w* = 50 ms, tR = 100 ms, and aR = 1 stimulus unit (see Methods for a description of the network, and supporting S1 Text).
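Reading the quoted parameters literally, the readout stage might be sketched as below (our simplification, not the authors' code; the binning and cell-sampling details are assumptions):

    import numpy as np

    def percept(spike_counts, K_star=80, w_star=0.050, t_R=0.100,
                dt=0.001, seed=0):
        """Average the activity of K_star randomly sampled neurons over a
        window of width w_star ending at extraction time t_R.
        spike_counts: array (n_neurons, n_timebins) with bin size dt."""
        rng = np.random.default_rng(seed)
        n_neurons, _ = spike_counts.shape
        cells = rng.choice(n_neurons, size=K_star, replace=False)
        stop = int(t_R / dt)
        start = stop - int(w_star / dt)
        return spike_counts[cells, start:stop].mean()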
Case 3: less than K* cells recorded
Another pathological situation could be a neural network specifically designed to dispatch information non-redundantly across the full population [31, 32], resulting in a few ‘global’ modes of activity with very large SNR, meaning high signal and low noise along those modes.
Sensitivity and CC signals as a function of K
Validation on a simulated neural network
Sensitivity and CC signals as a function of K
The neural network used to test our methods is described in detail in supporting S1 Text (section 3).
Sensitivity and CC signals as a function of K
We implemented and simulated the network using Brian, a spiking neural network simulator in Python [39].
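For reference, a minimal Brian 2 example in the same style (our own toy network, unrelated to the paper's model): leaky integrate-and-fire neurons with sparse random connectivity.

    from brian2 import NeuronGroup, Synapses, run, ms, mV

    eqs = 'dv/dt = -v / (10*ms) : volt'           # leaky membrane
    G = NeuronGroup(100, eqs, threshold='v > 15*mV',
                    reset='v = 0*mV', method='exact')
    S = Synapses(G, G, on_pre='v += 1*mV')        # excitatory kicks
    S.connect(p=0.1)                              # 10% random connectivity
    G.v = 'rand() * 15*mV'                        # random initial voltages
    run(100*ms)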
Supporting Information
Contains additional information about Choice Probabilities (section 1), the influence of parameter w on stimulus sensitivity (section 2), the encoding neural network used for testing the method (section 3), the Bayesian regularization procedure on Fisher’s linear discriminant (section 4), unbiased computation of CC indicators in the presence of measurement noise (section 5), and an extended readout model with variable extraction time tR (section 6).
neural network is mentioned in 6 sentences in this paper.