Index of papers in PLOS Comp. Biol. that mention
  • learning rule
Jaldert O. Rombouts, Sander M. Bohte, Pieter R. Roelfsema
Abstract
The resulting learning rule endows neural networks with the capacity to create new working memory representations of task-relevant information as persistent activity.
Comparison to previous modeling approaches
The PBWM framework uses more than ten modules, each with its own dynamics and learning rules, making formal analysis difficult.
Learning
This temporal difference learning rule is known as SARSA [7,34]:
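For reference, the standard SARSA temporal-difference error and update take the textbook form below (the paper's exact notation may differ):

$$\delta_t = r_t + \gamma\, Q(s_{t+1}, a_{t+1}) - Q(s_t, a_t), \qquad Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha\, \delta_t,$$

where $\alpha$ is the learning rate and $\gamma$ the discount factor.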
Learning
Note that AuGMEnT uses a four-factor learning rule for synapses vij.
Results
We will now present the main theoretical result, which is that the AuGMEnT learning rules minimize the temporal difference errors (Eqn.
Results
We will first establish the equivalence of online gradient descent, defined in Equation (19), and the AuGMEnT learning rule for the synaptic weights w from the regular units: the weight onto the action unit a selected at time step t-1 should change by the gradient step, leaving the other weights (k ≠ a) unchanged.
Results
AuGMEnT provides a biological implementation of the well-known RL method SARSA, although it also goes beyond traditional SARSA [7] by (i) including memory units, (ii) representing the current state of the external world as a vector of activity at the input layer, (iii) providing an association layer that aids in computing Q-values that depend non-linearly on the input, thus offering a biologically plausible equivalent of the error-backpropagation learning rule [8], and (iv) using synaptic tags and traces (Fig.
Role of attentional feedback and neuromodulators in learning
AuGMEnT implements a four-factor learning rule.
Synaptic tags and synaptic traces
The learning rule for the synapses onto the regular (i.e.
Synaptic tags and synaptic traces
We note, however, that the same learning rule is obtained if these synapses also have traces that decay within one time-step.
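As an illustration of this general shape only, not the paper's exact equations, here is a minimal sketch of a tag-and-trace update in which presynaptic activity, postsynaptic activity, and an attentional feedback signal create decaying synaptic tags, and a globally broadcast TD error converts outstanding tags into weight changes (all names and constants here are assumptions):

```python
import numpy as np

def four_factor_update(w, tag, pre, post, fb, delta, beta=0.1, decay=0.8):
    """Illustrative four-factor, tag-based update (a sketch, not the
    paper's exact equations).

    Factors: presynaptic activity `pre`, postsynaptic activity `post`,
    an attentional feedback signal `fb` from the selected action, and a
    globally broadcast TD error `delta`. The first three refresh a
    synaptic tag; `delta` converts tags into weight change.
    """
    tag = decay * tag + np.outer(pre, post * fb)  # tags decay and accumulate
    w = w + beta * delta * tag                    # TD error gates plasticity
    return w, tag
```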
Vibrotactile discrimination task
However, these previous studies did not yet address trial-and-error learning of the vibrotactile discrimination task with a biologically plausible learning rule.
learning rule is mentioned in 14 sentences in this paper.
Matteo Mainetti, Giorgio A. Ascoli
Author Summary
We introduce and evaluate a new biologically motivated learning rule for neural networks.
Author Summary
Such a basic geometric requirement, which was explicitly recognized in Donald Hebb’s original formulation of synaptic plasticity, is not usually accounted for in neural network learning rules.
BIG Learning in Small-World Graphs: Ability to Differentiate Real from Spurious Associations
To validate the above results against broadly applicable cases besides word associations, we tested the BIG ADO learning rule in a general class of random small-world graphs [19] resembling real-world architectures, organizations, and interactions (Fig.
BIG Learning in Watts-Strogatz Networks
To test the BIG ADO learning rule in more broadly applicable cases than noun-adjective associations, we generated small-world graphs adapting the algorithm of Watts and Strogatz [19].
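A quick way to generate such graphs is networkx's stock Watts-Strogatz generator; note the paper adapts the algorithm rather than using it verbatim, and the parameters below are illustrative:

```python
import networkx as nx

# n nodes on a ring, each linked to its k nearest neighbors, with each
# edge rewired with probability p to create small-world shortcuts.
G = nx.watts_strogatz_graph(n=1000, k=10, p=0.1)
```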
Introduction
Here we formulate this notion quantitatively with a new neural network learning rule, demonstrating by construction that ADO is a suitable mechanism for BIG learning.
Neural Network Model and the BIG ADO Learning Rule
The learning rule introduced in this work implements a form of structural plasticity in neural networks that incorporates the constraint of proximity between pre- and post-synaptic partners, or axonal-dendritic overlap (ADO): if two neurons a and b fire together, a connection from a to b is only formed if the axon of a comes within a threshold distance of a dendrite of b.
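A minimal sketch of this proximity-gated wiring rule, under the assumption that coactive pairs and a distance function are supplied by the caller (the helper `axon_dendrite_distance` and the threshold value are hypothetical, not the authors' code):

```python
def big_ado_step(coactive_pairs, axon_dendrite_distance, connections,
                 threshold=1.0):
    """Illustrative BIG ADO-style structural plasticity step.

    A new connection a -> b forms only if (1) neurons a and b fired
    together and (2) an axon of a passes within `threshold` of a
    dendrite of b (axonal-dendritic overlap).
    """
    for a, b in coactive_pairs:                        # Hebbian coactivity
        if axon_dendrite_distance(a, b) <= threshold:  # ADO proximity gate
            connections.add((a, b))
    return connections
```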
Neural Network Model and the BIG ADO Learning Rule
The learning rule described above relates closely to earlier works proposing similar learning mechanisms to explain generalization and grammatical rule extraction.
Pre-Training and Testing Design
This work investigates the computational characteristics of the BIG ADO learning rule starting from well-defined reality-generating graphs (described in the next subsection of these Materials and Methods).
Robustness Analysis and Optimal Conditions
Although the adopted connectionist framework is an oversimplified model of nervous systems, this simplicity also reflects the foundational applicability of the BIG ADO learning rule.
Word Association Graph
The dataset of word associations used in the first test of the BIG ADO learning rule (Fig.
learning rule is mentioned in 16 sentences in this paper.
Naoki Hiratani, Tomoki Fukai
Discussion
Furthermore, we derived an STDP-like online learning rule by considering an approximation of Bayesian ICA with sequence sampling.
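For orientation, a generic pair-based STDP window of the kind such derivations approximate is sketched below; the amplitudes and time constants are illustrative, and this is not the rule derived in the paper:

```python
import numpy as np

def stdp_pair_update(w, dt, a_plus=0.01, a_minus=0.012,
                     tau_plus=20.0, tau_minus=20.0):
    """Generic pair-based STDP; dt = t_post - t_pre in ms.

    Pre-before-post (dt > 0) potentiates; post-before-pre depresses,
    each with an exponentially decaying window.
    """
    if dt > 0:
        w += a_plus * np.exp(-dt / tau_plus)
    else:
        w -= a_minus * np.exp(dt / tau_minus)
    return float(np.clip(w, 0.0, 1.0))  # keep the weight bounded
```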
Excitatory and inhibitory STDP cooperatively shape structured lateral connections
With these learning rules, the lateral connections successfully learn a mutual inhibition structure (Fig 5D); however, this is achievable only when the hidden external structure can be learned from the random lateral connections (magenta lines in Fig 5B and 5C; note that the orange points are hidden by the magenta points because they behave similarly in noisy cases), that is, when crosstalk noise is low or the two sources have similar amplitudes.
Suboptimality of STDP
In addition, local minima are often unavoidable for online learning rules.
In this approximation, the learning rule of the estimated response probability matrix Q obeys [equation not recoverable from the source], where Y is the sampled sequence, and p_ik(Y^{1:k-1}) is the sample-based approximation of p_ik in the previous equation.
learning rule is mentioned in 4 sentences in this paper.
Kai Olav Ellefsen, Jean-Baptiste Mouret, Jeff Clune
Discussion
Additionally, while we focused primarily on evolution specifying modular architectures, those architectures could also emerge via intra-life learning rules that lead to modular neural architectures.
Discussion
More generally, exploring the degree to which evolution encodes learning rules that lead to modular architectures, as opposed to hard coding modular architectures, is an interesting area for future research.
Learning Model
The result is a Hebbian learning rule that is regulated by the inputs from neuromodulatory neurons, allowing the learning rate of specific connections to be increased or decreased in specific circumstances.
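A minimal sketch of a neuromodulated Hebbian update of this kind, using the generic modulated-Hebb form common in this literature (the coefficients and learning rate here are illustrative assumptions, not the paper's values):

```python
def neuromodulated_hebb(w, pre, post, m, eta=0.05,
                        A=1.0, B=0.0, C=0.0, D=0.0):
    """Hebbian update gated by a neuromodulatory signal m.

    m scales (and can invert) the effective learning rate of this
    specific connection; A..D weight the correlation, presynaptic,
    postsynaptic, and constant terms.
    """
    dw = eta * m * (A * pre * post + B * pre + C * post + D)
    return w + dw
```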
learning rule is mentioned in 3 sentences in this paper.