Title: Bayesian Learning Methods for Neural Coding
Abstract: A primary goal in systems neuroscience is to understand how neural spike responses encode information about the external world. A popular approach to this problem is to build an explicit probabilistic model that characterizes the encoding relationship in terms of a cascade of stages: (1) linear dimensionality reduction of a high-dimensional stimulus space using a bank of filters or receptive fields (RFs); (2) a nonlinear function from filter outputs to spike rate; and (3) a stochastic spiking process with recurrent feedback. These models have described single- and multi-neuron spike responses in a wide variety of brain areas.
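As a rough illustration of this cascade, here is a minimal simulation of a linear-nonlinear-Poisson model in Python. The toy receptive field, the softplus nonlinearity, and the omission of the recurrent (spike-history) feedback stage are all simplifying assumptions for the sketch, not the speaker's model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1: linear dimensionality reduction of the stimulus.
# Toy 1-D receptive field: a difference of Gaussians.
D = 40                                  # stimulus dimension
t = np.arange(D)
rf = np.exp(-(t - 20) ** 2 / 18) - 0.6 * np.exp(-(t - 20) ** 2 / 60)

stimulus = rng.normal(size=(5000, D))   # white-noise stimulus frames
drive = stimulus @ rf                   # filter output per frame

# Stage 2: pointwise nonlinearity mapping filter output to spike rate.
rate = np.log1p(np.exp(2.0 * drive))    # softplus keeps rates nonnegative

# Stage 3: stochastic spiking (Poisson; recurrent feedback omitted here).
spikes = rng.poisson(rate)

print("mean spike count per frame:", spikes.mean())
```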
In this talk, I will present my Ph.D. work on developing Bayesian methods to efficiently estimate the linear and nonlinear stages of the cascade encoding model. First, I will describe a novel Bayesian receptive field estimator based on a hierarchical prior that flexibly incorporates knowledge about the shapes of neural receptive fields. This estimator achieves error rates several times lower than existing methods, and can be applied to a variety of other neural inference problems, such as extracting structure from fMRI data. I will also present a novel low-rank description of the high-dimensional receptive field, combined with a hierarchical prior, for more efficient receptive field estimation. Second, I will describe new models of neural nonlinearities based on Gaussian processes (GPs), together with Bayesian active learning algorithms for estimating these nonlinearities rapidly in "closed-loop" neurophysiology experiments. These approaches significantly improve the efficiency of neurophysiology experiments, where data are often limited by the difficulty of maintaining stable recordings from a neuron or neural population.
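For intuition about Bayesian receptive field estimation, here is a minimal sketch of the MAP estimate under a fixed zero-mean Gaussian prior (i.e., ridge regression with a Gaussian likelihood). This is a deliberately simplified stand-in: the hierarchical prior described in the talk would instead learn structure in the prior covariance from the data, and the function name and arguments are illustrative assumptions:

```python
import numpy as np

def map_rf_estimate(X, y, prior_var=1.0, noise_var=1.0):
    """MAP receptive-field estimate under a zero-mean Gaussian prior.

    X : (n_samples, n_pixels) stimulus matrix
    y : (n_samples,) spike counts, treated with a Gaussian likelihood
    """
    D = X.shape[1]
    # Posterior mean of a linear-Gaussian model. A hierarchical prior
    # would replace np.eye(D) / prior_var with a learned inverse prior
    # covariance encoding, e.g., smoothness or localized support.
    A = X.T @ X / noise_var + np.eye(D) / prior_var
    return np.linalg.solve(A, X.T @ y / noise_var)
```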
We meet on Wednesdays at 1pm, in the 10th floor conference room of the Statistics Department, 1255 Amsterdam Ave, New York, NY.
Saturday, October 19, 2013
Prof. Tian Zheng: Oct 16th
Title: Latent Space Model for Aggregated Relational Data
Abstract: Aggregated Relational Data (ARD) are indirect network data collected with survey questions of the form "how many X's do you know?" ARD are most often used to estimate the sizes of populations that are difficult to count directly, and they allow researchers to choose specific subpopulations of interest without sampling or surveying members of those subpopulations directly. What has been under-utilized is the indirect information about social structure that ARD capture. In this talk, I present a latent space model and a Bayesian computation framework for inferring social structure from ARD collected in non-network samples, the variation of social structure across subnetworks, and the relations between (hard-to-reach) subpopulations.
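To make the data format concrete, here is a small generative sketch of ARD counts under a simple latent space formulation, where a respondent's expected count for each subpopulation decays with latent distance. The two-dimensional latent space, the exponential decay, and all parameter values are illustrative assumptions, not the model presented in the talk:

```python
import numpy as np

rng = np.random.default_rng(1)

n_resp, n_groups = 200, 5
# Latent positions for respondents and subpopulation centers in R^2.
z_resp = rng.normal(size=(n_resp, 2))
z_grp = rng.normal(size=(n_groups, 2))

degree = rng.lognormal(mean=5.0, sigma=0.5, size=n_resp)  # network sizes
prevalence = rng.dirichlet(np.ones(n_groups)) * 0.05      # group shares

# Expected "how many X's do you know?" count decays with latent distance.
dist = np.linalg.norm(z_resp[:, None, :] - z_grp[None, :, :], axis=-1)
rate = degree[:, None] * prevalence[None, :] * np.exp(-dist)

ard = rng.poisson(rate)   # (n_resp, n_groups) matrix of survey responses
print(ard.shape, ard.mean())
```

Inference would run in the opposite direction: given the observed count matrix, recover the latent positions, degrees, and prevalences.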
Sunday, October 6, 2013
Prof. Rahul Mazumder: Oct 9th
Title: Low-rank Matrix Regularization: Statistical Models and Large Scale Algorithms
Abstract: Low-rank matrix regularization is an important area of research in statistics and machine learning with a wide range of applications: the task is to estimate a matrix X under a low-rank constraint and possibly additional affine (or more general convex) constraints on X. In practice, the matrix dimensions frequently range from hundreds of thousands to a million, leading to severe computational challenges. In this talk, I will describe computationally tractable models and scalable convex-optimization-based algorithms for a class of low-rank regularized problems. Exploiting problem-specific statistical insights and problem structure, and using novel tools for large-scale SVD computation, play important roles in this task. I will also describe how we can develop a unified, tractable convex optimization framework for general exponential family models, incorporating meta-features on the rows/columns.
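As a rough sketch of one well-known algorithm in this family, here is nuclear-norm-regularized matrix completion via iterative SVD soft-thresholding, in the spirit of Soft-Impute. The function names are illustrative, and the dense SVD is a simplifying assumption: at the scales mentioned in the abstract, one would substitute specialized low-rank SVD solvers.

```python
import numpy as np

def soft_threshold_svd(Z, lam):
    """Proximal step for the nuclear norm: soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    s = np.maximum(s - lam, 0.0)
    return (U * s) @ Vt

def soft_impute(M, mask, lam, n_iters=100):
    """Nuclear-norm-regularized matrix completion (illustrative sketch).

    M    : observed matrix (values where mask is False are ignored)
    mask : boolean array, True at observed entries
    lam  : regularization strength (larger -> lower-rank solution)
    """
    X = np.zeros_like(M, dtype=float)
    for _ in range(n_iters):
        # Fill unobserved entries with the current estimate, then shrink.
        Z = np.where(mask, M, X)
        X = soft_threshold_svd(Z, lam)
    return X
```

Each iteration only needs the SVD of a matrix that is sparse plus low-rank, which is what makes large-scale SVD tools relevant here.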