A Bayesian-Hebbian Learning Theory for Spiking Neurons

CBN (Computational Biology and Neurocomputing) seminars

Friday 01 June 2012, 10:00–11:00
RB35

Speaker: Philip Tully (CB/CSC/KTH)

Abstract:

The Bayesian statistical inference paradigm provides an intuitively attractive framework for how the nervous system could represent uncertainty by combining prior knowledge with the information it accumulates during sensory input events. These ideas have received increasing attention from the neuroscience community in light of recent experimental findings suggesting that populations of spiking neurons can encode probability distributions over stimuli during perceptual, motor control, and decision-making tasks. However, competing theoretical views dispute the level of complexity in the neural substrate at which these probabilistic computations should be performed. Furthermore, it remains unclear how more biologically plausible rules such as Hebbian plasticity could be used to learn conditional probability distributions in discrete time.
To address these issues, an artificial neural network (ANN) model of unsupervised Hebbian learning based on graded unit activations, the incremental Bayesian Confidence Propagation Neural Network (BCPNN), is converted to a frequency-based biophysical learning rule implemented at the synaptic level of neural circuitry. In this interpretation, a BCPNN synapse low-pass filters binary spiking events, estimating the confidence of presynaptic spikes (prior distribution) along with the spiking activity of the postsynaptic neuron in the context of that input (posterior distribution), to determine its degree of belief in the events. The terms entering the Bayesian weight update are estimated by a set of differential equations implementing exponentially weighted moving averages. The time constants of the decaying exponentials map in a neurobiologically realistic way onto plasticity-relevant synaptic changes that take place during learning (e.g. the flow of calcium ions through the postsynaptic spine via NMDA receptors).
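The trace cascade described above can be sketched in a few lines. This is a minimal illustration, not the speaker's implementation: the parameter names (tau_z, tau_p, kappa, eps) and the log-ratio form of the weight are assumptions based on the standard log-probability reading of BCPNN, with binary spikes low-pass filtered into fast traces that feed slower moving-average probability estimates.

```python
import numpy as np

# Sketch of spike-based BCPNN trace dynamics (assumed form; parameter
# names and values are illustrative, not taken from the talk).
def bcpnn_traces(pre_spikes, post_spikes, dt=1.0,
                 tau_z=10.0, tau_p=1000.0, kappa=1.0, eps=1e-4):
    z_i = z_j = 0.0            # fast traces: low-pass filtered spikes
    p_i = p_j = p_ij = eps     # slow traces: probability estimates
    w_trace = []
    for s_i, s_j in zip(pre_spikes, post_spikes):
        # Low-pass filter the binary spike events (0/1) with tau_z
        z_i += dt * (s_i - z_i) / tau_z
        z_j += dt * (s_j - z_j) / tau_z
        # Exponentially weighted moving averages estimate the
        # marginal and joint activation probabilities; kappa gates
        # how strongly new evidence is incorporated
        p_i += dt * kappa * (z_i - p_i) / tau_p
        p_j += dt * kappa * (z_j - p_j) / tau_p
        p_ij += dt * kappa * (z_i * z_j - p_ij) / tau_p
        # Bayesian weight: log of joint over product of marginals
        w_trace.append(np.log(p_ij / (p_i * p_j)))
    return w_trace
```

Driving the synapse with correlated pre/post spike trains pushes the weight positive (joint probability exceeds the product of marginals), while anti-correlated trains keep it near or below zero.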
We show how terms associated with the ANN version of the model can represent the behavior of neuron populations under more realistic conditions. For example, the bias term in the artificial context, which previously indicated the level of unit activity, is shown to mimic a neuron's electrical properties as they are modified by prolonged synaptic input, i.e. its intrinsic excitability. Likewise, the description of a kappa term, which functionally served as a global 'print now' signal increasing sensitivity during learning epochs, is refined to express the effects of a neuromodulator (e.g. dopamine) on synaptic plasticity, a phenomenon garnering increasing experimental support. We show that for the classic experimental pre-post pairing scheme, BCPNN synaptic dynamics exhibit a temporally symmetric Hebbian spike timing-dependence that is qualitatively similar to measurements made in hippocampal culture preparations. Despite this seemingly destabilizing learning property, the spiking rule exhibits a unimodal equilibrium weight distribution that is well fitted by a Gaussian, owing to the fixed-point dynamics of BCPNN in its artificial description.
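The roles of the bias and kappa terms can also be illustrated with a toy update step. This is a hypothetical sketch under the same log-probability reading as above (the function name update_bias and all parameters are illustrative): the bias of a postsynaptic unit is the log of its estimated activation probability, so it slowly tracks prolonged input (intrinsic excitability), while kappa gates whether the estimate is updated at all, acting as the 'print now' signal.

```python
import math

# Hypothetical single Euler step for the postsynaptic probability
# trace p_j and the derived bias (intrinsic excitability) term.
# kappa = 0 freezes learning; kappa > 0 enables it.
def update_bias(p_j, z_j, kappa, dt=1.0, tau_p=1000.0):
    p_j += dt * kappa * (z_j - p_j) / tau_p
    return p_j, math.log(p_j)
```

With kappa set to zero the probability trace, and hence the bias, is left unchanged regardless of input; with kappa positive, sustained activity (z_j above p_j) raises the bias toward the unit's new activity level.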