Tuesday, July 17, 2012

Johannes Bill: July 16th

Probabilistic inference and autonomous learning in recurrent networks of spiking neurons 

Numerous findings from cognitive science and neuroscience indicate that mammals learn and maintain an internal model of their environment, and that they employ this model during perception and decision making in a statistically optimal fashion. Indeed, recent experimental studies suggest that the required computational machinery for probabilistic inference and learning can be traced down to the level of individual spiking neurons in recurrent networks. 

At the Institute for Theoretical Computer Science in Graz, we examine (analytically and through computer simulations) how recurrent neural networks can represent complex joint probability distributions in their transient spike patterns, how external input can be integrated by networks into a Bayesian posterior distribution, and how local synaptic learning rules enable spiking neural networks to autonomously optimize their internal model of the observed input statistics. 

In the talk, I will discuss how recurrent spiking networks can sample from graphical models by means of their internal dynamics, and how spike-timing-dependent plasticity rules can implement maximum-likelihood learning of generative models.
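One concrete instance of this idea, as a minimal sketch rather than the networks' actual continuous-time dynamics: a Gibbs sampler over binary units whose update rule mimics a stochastic neuron, with the "membrane potential" given by the local field and the firing probability by its logistic function. This samples from a Boltzmann distribution, one of the simplest graphical models; the weights, biases, and chain lengths below are illustrative choices, not values from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def gibbs_sample(W, b, n_steps=10000, burn_in=1000):
    """Gibbs sampling from a Boltzmann distribution
    p(z) ~ exp(0.5 z'Wz + b'z) over binary states z,
    assuming symmetric W with zero diagonal.
    Each update resembles a stochastic spiking neuron:
    the local field plays the role of a membrane potential,
    and the firing probability is its logistic function."""
    K = len(b)
    z = rng.integers(0, 2, size=K)
    samples = []
    for t in range(n_steps):
        for k in range(K):
            u_k = b[k] + W[k] @ z          # local field of unit k
            z[k] = rng.random() < sigmoid(u_k)
        if t >= burn_in:
            samples.append(z.copy())
    return np.array(samples)

# Toy model: two mutually excitatory units with negative biases
W = np.array([[0.0, 1.5], [1.5, 0.0]])
b = np.array([-1.0, -1.0])
S = gibbs_sample(W, b)
print(S.mean(axis=0))  # empirical marginals p(z_k = 1)
```

In the neural-sampling picture, z_k = 1 corresponds to neuron k having spiked within a short recent window; the sketch abstracts away the spike timing itself.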

Tim Machado: July 9th

The firing patterns of motor neurons represent the product of neural 
computation in the motor system. EMG recordings are often used as a 
proxy for this activity, given the direct relationship between motor 
neuron firing rate and muscle contraction. However, there are a 
variety of motor neuron subtypes with varied synaptic inputs and 
intrinsic properties, suggesting that this relationship is complex. 
Indeed, studies have shown that different compartments of individual 
muscles are activated asynchronously during some motor tasks—implying 
heterogeneity in firing across single motor pools. To measure the 
activity of many identified motor neurons simultaneously, we have 
combined population calcium imaging at cellular resolution with the 
use of a deconvolution algorithm that infers underlying spiking 
patterns from Ca++ transients. Using this approach we set out to 
examine the firing properties of neurons within an individual pool of 
motor neurons, and in particular, to compare the activity of 
individual neurons belonging to synergist (e.g. flexor-flexor) and 
antagonist (flexor-extensor) pools. 

We imaged motor neurons in the spinal cord of neonatal mice that were 
either loaded with synthetic calcium indicator or expressed GCaMP3. To 
identify the muscle targets of the loaded motor neurons we injected 
two fluorophore-conjugated variants of the retrograde tracer cholera 
toxin B into specific antagonist or synergist muscles. To examine the 
correlated firing of motor neurons during network activity in our in 
vitro preparation, a current pulse train was delivered to a sacral 
dorsal root in order to evoke a locomotor-like state. The onset and 
evolution of this rhythmic state was measured with suction electrode 
recordings from multiple ventral roots. To calibrate optical 
measurements, and to determine the upper limit of correlated firing, 
motor neurons were antidromically activated via ventral root 
stimulation. The optical responses to the antidromic train were used 
to directly fit a model to our data that related the fluorescence 
measurements to an approximate spike train. Preliminary observations 
from datasets containing hundreds of identified motor neurons suggest 
heterogeneity in neuronal firing within individual pools, as well as 
alternation in the firing between antagonist pools. In the future, 
this approach will be used to examine the activity patterns of 
molecularly-defined interneuron populations as a function of firing of 
identified motor neurons.
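The deconvolution step in this pipeline can be illustrated with a toy model. This is a sketch only: the parameters are made up, and the naive clipped-inverse solver stands in for the sparse nonnegative deconvolution algorithms actually used, not the authors' method. Calcium is modeled as an AR(1) process driven by spikes; inverting the AR(1) dynamics and clipping negative values gives a crude nonnegative spike estimate:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parameters, chosen for illustration only
T, gamma, sigma = 500, 0.95, 0.2   # timesteps, Ca decay factor, noise sd

# Simulate: sparse spikes drive an AR(1) calcium process,
# observed through additive Gaussian noise
spikes = rng.random(T) < 0.05
calcium = np.zeros(T)
for t in range(1, T):
    calcium[t] = gamma * calcium[t - 1] + spikes[t]
fluor = calcium + sigma * rng.standard_normal(T)

# Naive nonnegative deconvolution: invert the AR(1) dynamics
# (c_t - gamma * c_{t-1} recovers the spike input), then clip
# negative residuals, which can only come from noise
resid = fluor[1:] - gamma * fluor[:-1]
s_hat = np.clip(resid, 0, None)

# The inferred signal should be much larger at true spike times
print(s_hat[spikes[1:]].mean(), s_hat[~spikes[1:]].mean())
```

Real pipelines replace the clipped inverse with a regularized solver (e.g., a nonnegative sparse regression) that is far more robust to noise and to indicator dynamics, but the generative model is the same.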

Kamiar Rad: June 26th

High Dimensional Efficient Population Estimation

Alexandro Ramirez: June 5th

Fast neural encoding model estimation via expected log-likelihoods 

Receptive fields are traditionally measured using the spike-triggered average (STA). Recent work has shown that the STA is a special case of a family of estimators derived from the “expected log-likelihood” of a Poisson model. We generalize these results to the broad class of neuronal response models known as generalized linear models (GLMs). First, we show that, under some simple conditions on the priors and likelihoods involved, expected log-likelihoods can speed up computations involving the GLM log-likelihood (e.g., parameter estimation and marginal likelihood calculations) by orders of magnitude. Second, we perform a risk analysis, using both analytic and numerical methods, and show that the “expected log-likelihood” estimators come with only a small cost in accuracy compared to standard MAP estimates. When full MAP accuracy is desired, we show that running a few preconditioned conjugate-gradient iterations on the GLM log-likelihood, initialized at the expected-log-likelihood estimate, can yield an estimator as accurate as the MAP. We validate our findings using multi-unit primate retinal responses to stimuli with naturalistic correlations.
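The central trick can be sketched for a Poisson GLM with an exponential nonlinearity (a hedged illustration with made-up dimensions and filters, not the paper's code): the data-dependent sum of exp(x_t·w) over all T time bins is replaced by T times its expectation under the stimulus distribution, which for zero-mean white Gaussian stimuli is exp(‖w‖²/2) and costs O(d) rather than O(Td) to evaluate.

```python
import numpy as np

rng = np.random.default_rng(2)
T, d, dt = 20000, 10, 1.0

# Hypothetical setup: white Gaussian stimuli, a random true filter,
# Poisson spike counts through an exponential nonlinearity
X = rng.standard_normal((T, d))
w_true = 0.3 * rng.standard_normal(d)
y = rng.poisson(np.exp(X @ w_true) * dt)

def exact_ll(w):
    # Poisson GLM log-likelihood up to constants:
    # y'Xw - dt * sum_t exp(x_t . w), an O(Td) computation
    u = X @ w
    return y @ u - dt * np.exp(u).sum()

def expected_ll(w):
    # Replace the sum of exp(x_t . w) by T times its expectation
    # under the stimulus distribution: for zero-mean Gaussian x
    # with identity covariance, E[exp(x . w)] = exp(|w|^2 / 2)
    return y @ (X @ w) - dt * T * np.exp(0.5 * w @ w)

# The two objectives agree closely for white Gaussian stimuli
w = 0.2 * rng.standard_normal(d)
print(exact_ll(w), expected_ll(w))
```

For correlated Gaussian stimuli with covariance C, the same expectation is exp(w·Cw/2), so the speedup carries over; the linear term y'Xw reduces to an inner product with the STA, which only needs to be computed once.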