Sunday, June 12, 2011

Alex Ramirez: June 21st

Alex will be presenting a short version of this paper. In it, the authors consider loss functions, arising in many estimation problems, that obey certain smoothness and convexity requirements, and they prove a global, geometric (i.e., fast) rate of convergence under Nesterov's gradient descent method, up to the level of statistical precision.
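For readers unfamiliar with the optimizer in question, here is a minimal sketch of Nesterov's accelerated gradient method on a toy quadratic loss. The loss, step size, and momentum schedule below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def nesterov_descent(grad, x0, step, n_iter=200):
    """Minimize a smooth convex function via Nesterov's accelerated method."""
    x = x0.copy()
    y = x0.copy()          # "look-ahead" point
    t = 1.0                # momentum schedule parameter
    for _ in range(n_iter):
        x_next = y - step * grad(y)                      # gradient step from y
        t_next = (1 + np.sqrt(1 + 4 * t**2)) / 2
        y = x_next + ((t - 1) / t_next) * (x_next - x)   # momentum extrapolation
        x, t = x_next, t_next
    return x

# Toy example (assumed, for illustration): f(x) = 0.5 x^T A x - b^T x
A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, 2.0])
grad = lambda x: A @ x - b
x_star = nesterov_descent(grad, np.zeros(2), step=1.0 / np.linalg.norm(A, 2))
# x_star approaches the minimizer A^{-1} b at a geometric rate
```

The geometric ("linear") rate the paper establishes means the optimization error shrinks by a constant factor per iteration until it reaches the statistical precision floor.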

There will be no meeting on June 14th.

Monday, June 6, 2011

Kamiar Rahnama Rad: June 7th

Information Rates and Optimal Decoding in Large Populations

Many fundamental questions in theoretical neuroscience involve optimal decoding and the computation of Shannon information rates in populations of spiking neurons. In this paper, we apply methods from the asymptotic theory of statistical inference to obtain a clearer analytical understanding of these quantities. We find that for large neural populations carrying a finite total amount of information, the full spiking population response is asymptotically as informative as a single observation from a Gaussian process whose mean and covariance can be characterized explicitly in terms of network and single neuron properties. The Gaussian form of this asymptotic sufficient statistic allows us in certain cases to perform optimal Bayesian decoding by simple linear transformations, and to obtain closed-form expressions of the Shannon information carried by the network. One technical advantage of the theory is that it may be applied easily even to non-Poisson point process network models; for example, we find that under some conditions, neural populations with strong history-dependent (non-Poisson) effects carry exactly the same information as do simpler equivalent populations of non-interacting Poisson neurons with matched firing rates. We argue that our findings help to clarify some results from the recent literature on neural decoding and neuroprosthetic design.
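To illustrate the "optimal Bayesian decoding by simple linear transformations" point: once the effective observation is Gaussian, say y = x + eps with stimulus prior x ~ N(0, C_x) and noise eps ~ N(0, C_n), the posterior mean is a single matrix multiplication. The covariances below are assumed toy values, not quantities from the paper.

```python
import numpy as np

def linear_bayes_decoder(C_x, C_n):
    """Return the matrix W such that E[x | y] = W y for the Gaussian model
    y = x + eps, x ~ N(0, C_x), eps ~ N(0, C_n)."""
    return C_x @ np.linalg.inv(C_x + C_n)

rng = np.random.default_rng(0)
C_x = np.diag([2.0, 0.5])   # stimulus (prior) covariance -- assumed values
C_n = np.diag([1.0, 1.0])   # effective Gaussian noise covariance -- assumed
W = linear_bayes_decoder(C_x, C_n)

x = rng.multivariate_normal(np.zeros(2), C_x)   # true stimulus
y = x + rng.multivariate_normal(np.zeros(2), C_n)
x_hat = W @ y               # optimal (posterior-mean) decode, purely linear
```

In the same Gaussian setting, the Shannon information also has a closed form, 0.5 * log det(I + C_x C_n^{-1}), which is the kind of explicit expression the asymptotic Gaussian statistic makes available.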