Thursday, January 29, 2015

Uygar Sümbül: Feb 4th

Towards automated segmentation of neurons from Brainbow images

A necessary step toward learning how neural circuits account for
observed behaviors in health and disease is mapping the connectivity
of individual neurons. Stochastic expression of fluorescent proteins
with different colors, where each cell expresses one of many
distinguishable hues – the Brainbow method – has generated striking
images of nervous tissue. However, its use has been limited, in part
because of the various noise sources that corrupt fluorescence
expression within individual neurons. Here, we propose a method for
automating the segmentation of neurons in Brainbow image stacks using
spectral clustering.
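
As a rough illustration of the clustering step, here is a minimal sketch of color-based spectral clustering with scikit-learn. It assumes supervoxels have already been extracted and summarized by mean color vectors; all names, sizes, and the affinity bandwidth are hypothetical and are not the talk's actual pipeline.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Hypothetical input: one mean RGB color vector per supervoxel of a Brainbow
# stack (names and sizes are illustrative only).
rng = np.random.default_rng(0)
colors = rng.random((500, 3))
colors /= np.linalg.norm(colors, axis=1, keepdims=True)  # compare hue, not brightness

# Color-similarity affinity: processes of one neuron share a hue even when
# expression level (vector magnitude) fluctuates along the arbor.
sq_dists = np.sum((colors[:, None, :] - colors[None, :, :]) ** 2, axis=-1)
affinity = np.exp(-sq_dists / 0.1)

labels = SpectralClustering(
    n_clusters=8, affinity="precomputed", assign_labels="kmeans", random_state=0
).fit_predict(affinity)
```

In a real pipeline the affinity would also encode spatial adjacency, so that supervoxels are grouped only when they are both nearby and similarly colored.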

Monday, January 26, 2015

Brian DePasquale: Jan 28

Embedding low-dimensional continuous dynamical systems in recurrently connected spiking neural networks

Despite recent advances in training recurrently connected firing-rate networks, applying supervised learning algorithms to biologically plausible recurrently connected spiking neural networks remains a challenge. Such models, when trained to directly replicate neural data, hold great promise as powerful tools for understanding dynamic computation in biologically realistic neural circuits. In this talk I will discuss our progress in training recurrently connected spiking networks, the application of our training framework to neural population data, and a novel interpretation of continuous neural signals that arises within the context of these models.
     Extending the iterative supervised learning algorithm of Sussillo & Abbott [2009], we have made several critical observations about the conditions necessary for successfully training recurrent spiking networks. Because such networks have impoverished short-term memory, multiple signals that together form a “dynamically complete” basis must be trained simultaneously. I will illustrate this with examples of spiking neural networks replicating the dynamics of autonomous and non-autonomous, linear and nonlinear continuous dynamical systems. Additionally, I will discuss recent efforts to incorporate network optimization constraints so that the learned connectivity matrices obey common properties of biological networks, including sparsity and Dale’s Law. Finally, I will discuss our efforts to fit spiking models to population data from the isolated nervous system of the leech.
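
For readers unfamiliar with the starting point: Sussillo & Abbott's [2009] algorithm is FORCE learning, which trains a linear readout with recursive least squares while the readout's output is fed back into the network. Below is a minimal firing-rate sketch of that baseline; all sizes, constants, and the sine target are illustrative, and the spiking extension discussed in the talk goes well beyond it.

```python
import numpy as np

# Minimal FORCE-learning sketch (Sussillo & Abbott, 2009) on a firing-rate
# network; sizes, constants, and the sine target are illustrative only.
rng = np.random.default_rng(0)
N, dt, tau = 500, 1e-3, 1e-2
J = 1.5 / np.sqrt(N) * rng.standard_normal((N, N))  # strong random recurrence
w_fb = 2 * rng.random(N) - 1                        # fixed feedback weights
w = np.zeros(N)                                     # linear readout (trained)
P = np.eye(N)                                       # RLS inverse-correlation matrix
x = 0.5 * rng.standard_normal(N)

for step in range(5000):
    f = np.sin(2 * np.pi * step * dt)               # target output
    r = np.tanh(x)                                  # firing rates
    z = w @ r                                       # network output
    x += dt / tau * (-x + J @ r + w_fb * z)         # output is fed back in
    # recursive least squares: nudge w to shrink the output error
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)
    P -= np.outer(k, Pr)
    w -= (z - f) * k
```
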
     Once trained, our models can be viewed as a low-dimensional, continuous dynamical system - traditionally modeled with firing-rate networks - embedded in a high-dimensional, spiking dynamical system. In light of this view, I will present a novel interpretation of firing-rate models and of smoothly varying neural signals in general. Traditionally, a continuous neural signal modeled as a “firing-rate unit” has been taken as a simplified representation of a pool of identical but noisy spiking neurons. In our formulation, each continuous signal instead represents an overlapping population of spiking neurons, and is thus more akin to the multiple continuous population trajectories one would uncover from experimental data via dimensionality reduction. Because these continuous signals are constructed from overlapping pools of spiking neurons, our framework requires far fewer spiking neurons to arrive at an equivalent traditional rate description.
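
A toy illustration of the analogy drawn above: continuous population trajectories recovered from spike trains by smoothing and PCA. Everything here is synthetic and hypothetical; it shows the kind of analysis the abstract alludes to, not the speaker's actual data or pipeline.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Synthetic population: every neuron mixes both latent signals, i.e.
# "overlapping pools" rather than one dedicated pool per signal.
rng = np.random.default_rng(1)
n_neurons, n_bins, dt = 200, 1000, 1e-3
t = np.arange(n_bins) * dt
latent = np.stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])  # 2-D signal
mixing = rng.standard_normal((n_neurons, 2))
rates = np.exp(mixing @ latent)               # positive firing rates
spikes = rng.poisson(rates * dt * 50)         # Poisson spike counts per bin

# Smooth the spike trains, then project onto the leading principal components
# to recover the continuous 2-D population trajectory.
smoothed = gaussian_filter1d(spikes.astype(float), sigma=20.0, axis=1)
centered = smoothed - smoothed.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
trajectories = S[:2, None] * Vt[:2]           # leading 2-D trajectory over time
```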

Monday, January 19, 2015

Daniel Soudry: Jan 21st

Daniel Soudry will talk about the following paper:

Title: Fixed-form variational posterior approximation through stochastic linear regression

Authors: Tim Salimans and David A. Knowles

Abstract: We propose a general algorithm for approximating nonstandard Bayesian posterior distributions. The algorithm minimizes the Kullback-Leibler divergence of an approximating distribution to the intractable posterior distribution. Our method can be used to approximate any posterior distribution, provided that it is given in closed form up to the proportionality constant. The approximation can be any distribution in the exponential family or any mixture of such distributions, which means that it can be made arbitrarily precise. Several examples illustrate the speed and accuracy of our approximation method in practice.
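
The heart of the method is a regression identity: for an exponential-family approximation q_eta(theta) = exp(eta^T T(theta)), with T including a constant statistic, the KL-optimal natural parameters satisfy eta = E_q[T T^T]^{-1} E_q[T log p~(theta)], where p~ is the unnormalized posterior; that is, a linear regression of the log posterior onto the sufficient statistics. The algorithm estimates both expectations stochastically from samples of the current q. A minimal 1-D Gaussian sketch follows; the target density and step size are illustrative, not from the paper.

```python
import numpy as np

# Sketch of fixed-form VI via stochastic linear regression, 1-D Gaussian q.
def log_p(theta):
    return -(theta - 2.0) ** 2 / (2 * 0.5)   # unnormalized N(2, 0.5) target

rng = np.random.default_rng(0)
# Sufficient statistics T(theta) = (1, theta, theta^2); the KL optimum solves
# eta = E_q[T T^T]^{-1} E_q[T log_p].
C = np.eye(3)                    # running estimate of E_q[T T^T]
g = np.array([0.0, 0.0, -0.5])   # running E_q[T log_p]; start q at N(0, 1)
eta = np.linalg.solve(C, g)

for it in range(2000):
    s2 = -1.0 / (2 * eta[2])              # current q: variance
    m = eta[1] * s2                       # current q: mean
    theta = rng.normal(m, np.sqrt(s2))    # one sample from q
    T = np.array([1.0, theta, theta ** 2])
    lam = 0.01                            # small step keeps the updates stable
    C = (1 - lam) * C + lam * np.outer(T, T)
    g = (1 - lam) * g + lam * T * log_p(theta)
    eta = np.linalg.solve(C, g)           # the stochastic linear regression

s2 = -1.0 / (2 * eta[2])
print(f"fitted mean {eta[1] * s2:.2f}, variance {s2:.2f}")  # ~2.00, ~0.50
```

Because the target here is itself Gaussian, the regression recovers it essentially exactly; for non-Gaussian posteriors the same loop returns the KL-closest member of the chosen family.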