Title: spatial regularization
Consider modeling each neuron with a 2-parameter logistic model (spiking probability as a function of stimulus intensity), and suppose we perform an independent experiment on each neuron. Now imagine that the data aren't very informative, so we need to regularize our estimates. We can regularize spatially by adding a quadratic penalty on the difference between estimates for nearby neurons. Now, suppose that there are *two* types of neurons, and that you only want to shrink together neurons of the same type. We don't want our estimates to be influenced by "false neighbors", i.e. neurons that are spatially close but of a different type. We discuss how to optimize this model. Finally, we explore the idea of the Fused Group Lasso.
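To make the first idea concrete, here is a minimal sketch (not the post's actual code) of spatially regularized logistic fits: each neuron gets its own intercept and slope, and a quadratic penalty shrinks the parameters of adjacent neurons toward each other. The data-generating setup, penalty weight, and plain gradient descent are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical setup: N neurons along a line, each with its own logistic
# tuning curve p(spike | x) = sigmoid(a_i + b_i * x).
N, T = 10, 20                        # neurons, trials per neuron
true_a = np.linspace(-1.0, 1.0, N)   # smoothly varying intercepts
true_b = np.full(N, 2.0)             # shared slope
x = rng.uniform(-2, 2, size=(N, T))  # stimulus intensities
y = rng.binomial(1, sigmoid(true_a[:, None] + true_b[:, None] * x))

# First-difference matrix and graph Laplacian for the spatial penalty
D = np.diff(np.eye(N), axis=0)
L = D.T @ D

def grad(theta, lam):
    """Gradient of the penalized negative log-likelihood."""
    a, b = theta[:N], theta[N:]
    eta = a[:, None] + b[:, None] * x
    resid = sigmoid(eta) - y          # d(NLL)/d(eta), per trial
    ga = resid.sum(axis=1) + 2 * lam * (L @ a)
    gb = (resid * x).sum(axis=1) + 2 * lam * (L @ b)
    return np.concatenate([ga, gb])

def fit(lam, steps=3000, lr=0.01):
    """Plain gradient descent on the penalized objective (for illustration)."""
    theta = np.zeros(2 * N)
    for _ in range(steps):
        theta -= lr * grad(theta, lam)
    return theta

theta_mle = fit(lam=0.0)   # unregularized, per-neuron fits
theta_reg = fit(lam=5.0)   # spatially regularized fits

# The penalty should make neighboring intercept estimates vary less
rough_mle = np.sum(np.diff(theta_mle[:N]) ** 2)
rough_reg = np.sum(np.diff(theta_reg[:N]) ** 2)
print(rough_reg < rough_mle)
```

Replacing each squared difference `||theta_i - theta_j||^2` with the unsquared ℓ2 norm `||theta_i - theta_j||` is what yields a fused-group-lasso-style penalty: rather than merely smoothing, it can set neighboring differences exactly to zero, fusing same-type neurons together.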