* Due to the weather, the speaker is unable to present on January 29.
The synaptic sampling hypothesis
A salient feature of synapses in the brain is that neurotransmitter release is probabilistic. This seems both wasteful and unnecessarily noisy. Here, however, we argue that there is a normative explanation for it: we suggest that neurons use Bayesian inference to compute a distribution over the optimal synaptic weights, and that they communicate the uncertainty associated with that distribution by sampling from it. Synaptic noise thus plays an important role: it provides the stochasticity necessary for the generation of random samples. This theory reproduces several properties of synaptic weights: a log-normal weight distribution, the linear relationship between the mean and variance of PSPs, and the dependence of the noise level on distance from the cell soma. Furthermore, our inference framework allows us to reproduce Purkinje cell learning rules using only properties of the climbing fiber feedback signal. Finally, we make three testable predictions: synaptic noise should be related to the presynaptic firing rate, with higher firing rates implying lower noise; the plasticity of a synapse (the ease with which long-term potentiation or depression can be induced) should depend on the noise, with larger synaptic noise implying higher plasticity in vivo; and plasticity should scale inversely with the number of active presynaptic neurons.
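The core idea above can be sketched in a few lines of code. This is an illustrative toy, not the speaker's model: we simply assume the posterior over a synaptic log-weight is Gaussian (so weight samples are log-normal) and that each presynaptic spike transmits one sample, so larger posterior uncertainty directly produces noisier PSPs.

```python
# Toy sketch (assumed form, not from the talk): a synapse represents a
# posterior over its log-weight and transmits one sample per spike.
import numpy as np

rng = np.random.default_rng(0)

def sample_psps(mu, sigma, n_spikes):
    """Draw one log-normal weight sample per presynaptic spike,
    mimicking probabilistic neurotransmitter release. mu and sigma
    parametrise the (assumed Gaussian) posterior over log-weight."""
    return rng.lognormal(mean=mu, sigma=sigma, size=n_spikes)

def cv(x):
    """Coefficient of variation: a simple measure of PSP noise."""
    return x.std() / x.mean()

# A low-uncertainty synapse versus a high-uncertainty one.
confident = sample_psps(mu=0.0, sigma=0.1, n_spikes=10_000)
uncertain = sample_psps(mu=0.0, sigma=0.5, n_spikes=10_000)

# Under this sampling scheme, lower posterior uncertainty means
# less trial-to-trial variability in the transmitted PSPs.
print(cv(confident) < cv(uncertain))
```

In this toy, the synapse's noise level is not a nuisance but a readout of its posterior uncertainty, which is the intuition behind the prediction that noisier synapses should also be more plastic.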
Host: Xaq Pitkow