Title: Bayesian learning methods for neural coding
Abstract: A primary goal in systems neuroscience is to understand how neural spike responses encode information about the external world. A popular approach to this problem is to build an explicit probabilistic model that characterizes the encoding relationship in terms of a cascade of stages: (1) linear dimensionality reduction of a high-dimensional stimulus space using a bank of filters or receptive fields (RFs); (2) a nonlinear function from filter outputs to spike rate; and (3) a stochastic spiking process with recurrent feedback. These models have described single- and multi-neuron spike responses in a wide variety of brain areas.
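As a rough illustration of this kind of cascade (a generic sketch, not the specific model analyzed in the talk), the following Python snippet simulates a generalized-linear-model-style cascade: a linear stimulus filter, an exponential nonlinearity, and Poisson spiking with a spike-history filter supplying the recurrent feedback. The filter shapes, bin size, and baseline rate are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

T, D, H = 5000, 25, 10              # time bins, stimulus dimensions, history bins (assumed sizes)
dt = 0.01                           # bin width in seconds

k = 0.5 * np.exp(-np.arange(D) / 5.0)        # assumed stimulus filter (the "receptive field")
h = -np.exp(-np.arange(1, H + 1) / 3.0)      # assumed spike-history filter (refractory-like feedback)
b = np.log(10.0)                             # baseline log firing rate (~10 spikes/s)

X = rng.standard_normal((T, D))     # white-noise stimulus, one row per time bin
spikes = np.zeros(T)

for t in range(T):
    drive = b + X[t] @ k                           # (1) linear filtering of the stimulus
    past = spikes[max(0, t - H):t][::-1]           # most recent spikes first
    drive += past @ h[:len(past)]                  # recurrent feedback from past spikes
    rate = np.exp(drive)                           # (2) nonlinearity: exponential
    spikes[t] = rng.poisson(rate * dt)             # (3) stochastic (Poisson) spiking

print("mean firing rate (spikes/s):", spikes.sum() / (T * dt))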
In this talk, I will present my Ph.D. work on developing Bayesian methods to efficiently estimate the linear and nonlinear stages of the cascade encoding model. First, I will describe a novel Bayesian receptive field estimator based on a hierarchical prior that flexibly incorporates knowledge about the shapes of neural receptive fields. This estimator achieves error rates several times lower than existing methods and can be applied to a variety of other neural inference problems, such as extracting structure from fMRI data. I will also present a novel low-rank description of the high-dimensional receptive field, combined with a hierarchical prior, for more efficient receptive field estimation. Second, I will describe new models of neural nonlinearities based on Gaussian processes (GPs), together with Bayesian active learning algorithms that rapidly estimate these nonlinearities in "closed-loop" neurophysiology experiments. These approaches significantly improve the efficiency of neurophysiology experiments, where data are often limited by the difficulty of maintaining stable recordings from a neuron or neural population.
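To make the receptive field estimation idea concrete, here is a deliberately simplified Python sketch: a linear-Gaussian encoding model with an isotropic Gaussian (ridge) prior whose precision is chosen by maximizing the marginal likelihood (empirical Bayes). The estimator described in the talk replaces this generic prior with structured hierarchical priors that capture RF shape, so this only illustrates the evidence-optimization machinery; the simulated data and hyperparameter range are assumptions.

import numpy as np
from scipy.optimize import minimize_scalar

def map_rf_and_evidence(X, y, log_alpha, sigma2=1.0):
    """Posterior mean RF and negative log marginal likelihood for a ridge prior alpha*I."""
    alpha = np.exp(log_alpha)
    T, D = X.shape
    A = X.T @ X / sigma2 + alpha * np.eye(D)       # posterior precision
    L = np.linalg.cholesky(A)
    mu = np.linalg.solve(A, X.T @ y / sigma2)      # posterior mean (MAP estimate of the RF)
    # log evidence of the linear-Gaussian model, up to terms constant in alpha
    log_ev = (0.5 * D * np.log(alpha)
              - np.sum(np.log(np.diag(L)))
              - 0.5 * (y @ y / sigma2 - mu @ A @ mu))
    return mu, -log_ev

# simulated data with an assumed smooth ground-truth RF
rng = np.random.default_rng(1)
D, T = 40, 800
k_true = np.exp(-0.5 * ((np.arange(D) - 20) / 4.0) ** 2)
X = rng.standard_normal((T, D))
y = X @ k_true + rng.standard_normal(T)

# empirical Bayes: pick the prior precision that maximizes the marginal likelihood
res = minimize_scalar(lambda la: map_rf_and_evidence(X, y, la)[1],
                      bounds=(-5.0, 10.0), method="bounded")
k_hat, _ = map_rf_and_evidence(X, y, res.x)
print("relative error of RF estimate:", np.linalg.norm(k_hat - k_true) / np.linalg.norm(k_true))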
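Likewise, a minimal sketch of the closed-loop active-learning idea, under simplifying assumptions: fit Gaussian-process regression to noisy measurements of a one-dimensional nonlinearity and, on each trial, query the stimulus where the posterior variance is largest (uncertainty sampling). The algorithms in the talk may use a different utility function and a spiking observation model; the "true" nonlinearity, kernel, and noise level below are purely illustrative.

import numpy as np

rng = np.random.default_rng(2)

def rbf_kernel(a, b, ell=1.0, sf=1.0):
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=0.1):
    K = rbf_kernel(x_train, x_train) + noise**2 * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_test)
    Kss = rbf_kernel(x_test, x_test)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks.T @ alpha
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
    return mean, np.diag(cov)

def f(x):                                  # assumed "true" nonlinearity (softplus-like)
    return np.log1p(np.exp(2.0 * x))

grid = np.linspace(-3, 3, 200)             # candidate stimulus values
x_obs = np.array([0.0])
y_obs = f(x_obs) + 0.1 * rng.standard_normal(1)

for trial in range(20):
    mean, var = gp_posterior(x_obs, y_obs, grid)
    x_next = grid[np.argmax(var)]                          # query the most uncertain stimulus
    y_next = f(np.array([x_next])) + 0.1 * rng.standard_normal(1)
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, y_next)

mean, var = gp_posterior(x_obs, y_obs, grid)
print("max posterior std after 20 trials:", np.sqrt(var.max()))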