I'll be presenting work in progress, in collaboration with Bijan Pesaran's group (Yan Wong, Mariana Vigeral, David Putrino) and Josh Merel, on building a high degree-of-freedom brain-machine interface. I'll focus on the Bayesian paradigm for decoding and on two practical problems in pushing that paradigm beyond the commonly used Kalman filtering approach: building better likelihoods, and building better priors. The first amounts to fitting tuning curves for individual neurons. Other groups have shown a nonlinear dependence of firing rate on hand position in 3D space; here I will show some preliminary results on fitting tuning curves over large numbers of joint angles. The second amounts to building better generative models of reach-and-grasp motions. As a first step in that direction, I've looked at PCA and ICA for reducing the dimension of reach-and-grasp signals.
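As a rough illustration of that last step, here is a minimal Python sketch (using scikit-learn) of applying PCA and ICA to joint-angle trajectories. The array joint_angles and its dimensions are hypothetical stand-ins for the recorded reach-and-grasp data, not the actual dataset or pipeline.

    import numpy as np
    from sklearn.decomposition import PCA, FastICA

    rng = np.random.default_rng(0)
    # Hypothetical stand-in for recorded joint-angle trajectories:
    # 1000 time steps x 25 joint angles.
    joint_angles = rng.standard_normal((1000, 25))

    # PCA: project onto the directions of largest variance.
    pca = PCA(n_components=5)
    pc_scores = pca.fit_transform(joint_angles)
    print("variance explained:", pca.explained_variance_ratio_.sum())

    # ICA: find components that are statistically independent,
    # rather than merely uncorrelated as in PCA.
    ica = FastICA(n_components=5, random_state=0, max_iter=1000)
    ic_scores = ica.fit_transform(joint_angles)

On real reach-and-grasp data, the hope is that a handful of such components (postural "synergies") capture most of the variation across the many joint angles, giving a low-dimensional space in which to build generative priors.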
We meet on Wednesdays at 1pm, in the 10th floor conference room of the Statistics Department, 1255 Amsterdam Ave, New York, NY.
Tuesday, March 27, 2012
David Pfau: March 27th (at 5PM)
Monday, March 19, 2012
Gustavo Lacerda: March 20th
Title: Spatial regularization
Consider modeling each neuron as a 2-parameter logistic model (spiking probability as a function of stimulus intensity), and suppose we perform independent experiments on each neuron. Now imagine that the data isn't very informative, so we need to regularize our estimates. We can regularize spatially by adding a quadratic penalty on the difference between estimates for nearby neurons. Now suppose that there are *two* types of neurons, and that we only want to shrink together neurons of the same type: we don't want our estimates to be influenced by "false neighbors", i.e. neurons that are spatially close but of a different type. We discuss how to optimize this model. Finally, we explore the idea of the Fused Group Lasso.
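To make the setup concrete, here is a minimal sketch of the penalized fit described above, written in Python with NumPy/SciPy. The variable names (stimuli, spikes, neighbor_pairs, lam) and the toy data are hypothetical illustrations, not the speaker's actual model or code.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import expit  # logistic sigmoid

    def penalized_nll(theta_flat, stimuli, spikes, neighbor_pairs, lam):
        # Independent 2-parameter logistic likelihoods, one per neuron,
        # plus a quadratic penalty on parameter differences of nearby neurons.
        n_neurons = spikes.shape[0]
        theta = theta_flat.reshape(n_neurons, 2)          # rows: (intercept, slope)
        logits = theta[:, [0]] + theta[:, [1]] * stimuli  # broadcast over trials
        p = np.clip(expit(logits), 1e-9, 1 - 1e-9)
        nll = -np.sum(spikes * np.log(p) + (1 - spikes) * np.log(1 - p))
        i, j = neighbor_pairs.T
        penalty = lam * np.sum((theta[i] - theta[j]) ** 2)  # spatial shrinkage
        return nll + penalty

    # Toy usage: 4 neurons on a line, each pair of neighbors shrunk together.
    rng = np.random.default_rng(0)
    stimuli = rng.uniform(0, 1, size=(1, 50))      # shared stimulus intensities
    spikes = rng.binomial(1, 0.5, size=(4, 50))    # spike indicator per neuron/trial
    neighbor_pairs = np.array([[0, 1], [1, 2], [2, 3]])
    fit = minimize(penalized_nll, np.zeros(8),
                   args=(stimuli, spikes, neighbor_pairs, 1.0))
    print(fit.x.reshape(4, 2))

Replacing the squared penalty on each difference with an unsquared group norm, lam * sum of ||theta_i - theta_j||_2 over neighbor pairs, gives the Fused Group Lasso variant mentioned at the end, which can shrink the parameter vectors of neighboring neurons to be exactly equal rather than merely similar.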