Thursday, September 30, 2010

Parallel computing: Matlab on the HPC cluster

I've improved the code for parallel computing, which I talked about at the seminar a month ago - it should be really simple to use now :). Also, the problem with atomic operations is now solved as well, at least under Linux. I've written up a description of the code, commented it, and put together two examples: one to be run on a single machine with several copies of Matlab running in parallel, and another for the HPC cluster. Everything can be found here:
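The Matlab code itself isn't reproduced here, but the atomic-operation idea it relies on - letting several workers safely claim jobs from a shared directory without two workers grabbing the same one - can be sketched in a few lines. This is an illustrative Python sketch, not the actual code from the post; the task-file layout is hypothetical, and it assumes a POSIX filesystem, where a rename within the same filesystem is atomic:

```python
import os
import tempfile

def claim_task(task_dir, task_name, worker_id):
    """Try to claim a task by atomically renaming its file.

    On POSIX filesystems, os.rename within one filesystem is atomic,
    so even if several workers race for the same task file, exactly
    one rename succeeds. Returns True if this worker won the task.
    """
    src = os.path.join(task_dir, task_name)
    dst = os.path.join(task_dir, task_name + ".claimed_by_" + worker_id)
    try:
        os.rename(src, dst)
        return True
    except OSError:
        return False  # another worker already claimed it

# demo: two workers race for the same (made-up) task file
task_dir = tempfile.mkdtemp()
open(os.path.join(task_dir, "task_001"), "w").close()
first = claim_task(task_dir, "task_001", "worker_A")
second = claim_task(task_dir, "task_001", "worker_B")
print(first, second)  # exactly one claim succeeds
```

Each parallel Matlab instance would do the analogous rename before starting a job, and skip jobs whose claim fails.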

If you have comments or suggestions - I'll be happy to hear them! I'll also be glad to help resolve problems, if they arise, or to explain the code, if needed. Also, if you start using the code, please let me know - it's always encouraging to know that the work reaches the masses :).

Monday, September 27, 2010

Synapses with short-term plasticity are optimal estimators of presynaptic membrane potentials

Possibly of interest: Synapses with short-term plasticity are optimal estimators of presynaptic membrane potentials

Chaitu Ekanadham : Sept. 29

Recovery of sparse transformation-invariant signals with continuous basis pursuit

We study the problem of signal decomposition where the signal is a noisy superposition of template features. Each template can occur multiple times in the signal, and associated with each instance is an unknown amount of transformation that the template undergoes. The templates and transformation types are assumed to be known, but the number of instances and the associated amounts of transformation must be recovered from the signal. In this setting, current methods construct a dictionary containing several transformed copies of each template and employ approximate methods to solve a sparse linear inverse problem. We propose to use a set of basis functions that can interpolate the template under any small amount of transformation. Both the amplitude of the feature and the amount of transformation are encoded in the basis coefficients, in a way that depends on the interpolation scheme used. We construct a dictionary containing transformed copies of these basis functions, where the copies are spaced as far apart as the interpolation remains accurate. The coefficients are obtained by solving a constrained sparse linear inverse problem where the sparsity penalty is applied across, but not within, these groups. We compare our method with standard basis pursuit on a sparse deconvolution task and find that it outperforms this baseline, yielding sparser solutions while still achieving lower reconstruction error.
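As a toy illustration of the interpolation idea in the abstract (not the authors' code), here is a first-order Taylor version in Python: the template and its derivative form the basis, so a scaled, slightly shifted copy a·f(t−d) ≈ a·f(t) − a·d·f′(t), and both the amplitude and the sub-grid shift can be read off the fitted coefficients. The Gaussian template and the specific numbers are made up for the example:

```python
import numpy as np

# hypothetical template: a unit-width Gaussian bump on a fine grid
t = np.linspace(-5, 5, 501)
def template(shift=0.0):
    return np.exp(-0.5 * (t - shift) ** 2)

f = template()
df = np.gradient(f, t)  # derivative of the template w.r.t. t

# synthesize a signal with amplitude a and a small sub-grid shift d
a_true, d_true = 1.3, 0.15
y = a_true * template(d_true)

# Taylor expansion: a*f(t - d) ≈ a*f(t) - a*d*f'(t),
# so with dictionary columns [f, f'] the coefficients are
# c0 = a and c1 = -a*d
D = np.column_stack([f, df])
c, *_ = np.linalg.lstsq(D, y, rcond=None)

a_hat = c[0]            # amplitude, recovered up to O(d^2) error
d_hat = -c[1] / c[0]    # shift, recovered up to O(d^2) error
print(a_hat, d_hat)
```

The full method uses groups of such interpolating basis functions at coarsely spaced positions, with a sparsity penalty applied across groups; this sketch shows only the coefficient-to-(amplitude, shift) decoding for a single group.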

Monday, September 20, 2010

Eizaburo Doi : Sept. 22

Title: Testing efficient coding for a complete and inhomogeneous neural population

The theory of efficient coding under the linear Gaussian model, originally formulated by Linsker (1989), Atick & Redlich (1990), and van Hateren (1992), is quite well-known.  However, a direct test against physiological data (a complete population of receptive fields) has been hampered over the past twenty years for two reasons: a) no such physiological data has been available, and b) the earlier models were too simplistic to compare with physiological data.

We resolve these two issues and, furthermore, develop two novel methods to assess how well the structure of the retinal transform matches that of the theoretically derived optimal transform.  The main conclusion of this study is that the retinal transform is at least 80% optimal when evaluated with the linear Gaussian model.
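As a toy illustration of the linear Gaussian framework (not the analysis in the talk), the following Python sketch compares the information transmitted by a whitening transform against a plain gain control at matched output power. With fixed total output power and equal noise in every channel, equalizing the output variances - whitening - maximizes the transmitted information. The covariance spectrum and noise level here are made up for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical correlated Gaussian signal: decaying covariance spectrum
n = 8
lam = 1.0 / (1.0 + np.arange(n))              # eigenvalues of the covariance
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
C = Q @ np.diag(lam) @ Q.T                    # signal covariance

sigma2 = 0.1                                  # output channel noise variance

def info(W):
    """Mutual information (nats) of y = W s + noise, with s ~ N(0, C)."""
    Cy = W @ C @ W.T
    _, logdet = np.linalg.slogdet(Cy + sigma2 * np.eye(n))
    return 0.5 * (logdet - n * np.log(sigma2))

def normalize_power(W, p=1.0):
    """Scale W so the total output signal power trace(W C W^T) equals p."""
    return W * np.sqrt(p / np.trace(W @ C @ W.T))

W_gain = normalize_power(np.eye(n))           # no transform, just a gain
evals, evecs = np.linalg.eigh(C)
W_white = normalize_power(np.diag(evals ** -0.5) @ evecs.T)  # whitening

print(info(W_gain), info(W_white))            # whitening transmits more
```

The actual study compares measured retinal receptive fields to the optimal transform derived under this kind of objective; the sketch only demonstrates why a decorrelating transform wins under the power constraint.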

We also clarify the characteristics of the retinal transform that are and are not explained by the proposed model, and discuss the future directions and preliminary results along these lines.

This is joint work with Jeff Gauthier, Greg Field, Alexander Sher, John Shlens, Martin Greschner, Tim Machado, Keith Mathieson, Deborah Gunning, Alan Litke, Liam Paninski, EJ Chichilnisky, and Eero Simoncelli.

Monday, September 13, 2010

Ana Calabrese: Sept. 15

This Wednesday Ana will discuss a recent paper by Tkacik et al. on population coding by noisy spiking neurons, using maximum entropy models.

Here's a copy of the paper:
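For a sense of what fitting such a model involves, here is a minimal Python sketch (not from the paper) of a pairwise maximum entropy fit to a small binary population: gradient ascent on the log-likelihood adjusts the biases h and pairwise couplings J until the model's firing rates and pairwise correlations match the empirical ones, with the partition function computed by brute force over all 2^n states. The data here are random and purely illustrative:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
n = 3
states = np.array(list(product([0, 1], repeat=n)), dtype=float)  # all 2^n patterns

# made-up "recorded" binary spike words; real data would go here
data = rng.integers(0, 2, size=(500, n)).astype(float)
mean_emp = data.mean(axis=0)            # empirical firing rates
corr_emp = (data.T @ data) / len(data)  # empirical pairwise moments

def model_stats(h, J):
    """Means and pairwise moments of P(x) ∝ exp(h·x + sum_{i<j} J_ij x_i x_j)."""
    E = states @ h + np.einsum('si,ij,sj->s', states, J, states)
    p = np.exp(E - E.max())
    p /= p.sum()
    mean = p @ states
    corr = states.T @ (states * p[:, None])
    return mean, corr

h = np.zeros(n)
J = np.zeros((n, n))                    # kept strictly upper-triangular
for _ in range(5000):                   # gradient ascent on the log-likelihood
    m, c = model_stats(h, J)
    h += 0.2 * (mean_emp - m)
    J += 0.2 * np.triu(corr_emp - c, 1)  # couplings only for pairs i < j

m, c = model_stats(h, J)
print(np.abs(m - mean_emp).max())        # moments matched to high precision
```

For realistic population sizes the exhaustive sum over states is intractable and sampling or approximate methods are used instead; the sketch only shows the moment-matching logic.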