Showing posts with label GLM.

Wednesday, September 19, 2012

Eftychios P.: July 24th


Robust 1-bit compressed sensing and sparse logistic regression: A convex programming approach

This paper develops theoretical results regarding noisy 1-bit compressed sensing and sparse binomial regression. We show that a single convex program gives an accurate estimate of the signal, or coefficient vector, for both of these models. We demonstrate that an s-sparse signal in R^n can be accurately estimated from m = O(s log(n/s)) single-bit measurements using a simple convex program. This remains true even if each measurement bit is flipped with probability nearly 1/2. Worst-case (adversarial) noise can also be accounted for, and uniform results that hold for all sparse inputs are derived as well. In the terminology of sparse logistic regression, we show that O(s log(n/s)) Bernoulli trials are sufficient to estimate a coefficient vector in R^n which is approximately s-sparse. Moreover, the same convex program works for virtually all generalized linear models, in which the link function may be unknown. To our knowledge, these are the first results that tie the theory of sparse logistic regression to 1-bit compressed sensing. Our results apply to general signal structures beyond sparsity; one only needs to know the size of the set K in which the signals reside. This size is given by the mean width of K, a computable quantity whose square serves as a robust extension of the dimension.
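For readers who want to play with the idea, here is a minimal sketch (my own illustration, not code from the paper) of the kind of convex program described: maximize the correlation between the sign measurements and the linear measurements of a candidate signal, subject to an l1/l2 constraint set. The problem sizes, the 10% bit-flip rate, and the use of cvxpy are all assumptions made for the example.

```python
import numpy as np
import cvxpy as cp

# Toy setup: an s-sparse unit-norm signal x_true in R^n, observed through
# m one-bit measurements y_i = sign(<a_i, x_true>), with some bits flipped.
rng = np.random.default_rng(0)
n, s, m = 200, 5, 500
x_true = np.zeros(n)
x_true[:s] = rng.standard_normal(s)
x_true /= np.linalg.norm(x_true)
A = rng.standard_normal((m, n))
y = np.sign(A @ x_true)
flip = rng.random(m) < 0.10            # flip ~10% of the measurement bits
y[flip] *= -1

# Convex program (one common formulation of this estimator):
#   maximize  sum_i y_i <a_i, x>   subject to  ||x||_1 <= sqrt(s), ||x||_2 <= 1
x = cp.Variable(n)
objective = cp.Maximize(cp.sum(cp.multiply(y, A @ x)))
constraints = [cp.norm1(x) <= np.sqrt(s), cp.norm2(x) <= 1]
cp.Problem(objective, constraints).solve()

x_hat = x.value / np.linalg.norm(x.value)
print("correlation with truth:", float(x_hat @ x_true))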

Tuesday, July 17, 2012

Alexandro Ramirez: June 5th

Title: Fast neural encoding model estimation via expected log-likelihoods 


Abstract 
Receptive fields are traditionally measured using the spike-triggered average (STA). Recent work has shown that the STA is a special case of a family of estimators derived from the "expected log-likelihood" of a Poisson model. We generalize these results to the broad class of neuronal response models known as generalized linear models (GLMs). We show that, under some simple conditions on the priors and likelihoods involved, expected log-likelihoods can speed up computations involving the GLM log-likelihood (e.g., parameter estimation and marginal likelihood calculations) by orders of magnitude. Second, we perform a risk analysis, using both analytic and numerical methods, and show that the "expected log-likelihood" estimators come with only a small cost in accuracy compared to standard MAP estimates. When MAP accuracy is desired, we show that running a few pre-conditioned conjugate gradient iterations on the GLM log-likelihood, initialized at the "expected log-likelihood" estimate, can lead to an estimator that is as accurate as the MAP. We use multi-unit primate retinal responses to stimuli with naturalistic correlations to validate our findings.
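As a rough illustration of the idea (my own sketch, not the paper's code): for a Poisson GLM with an exponential nonlinearity and Gaussian stimuli, the costly sum of exp(w·x_t) over time bins in the log-likelihood can be replaced by its expectation under the stimulus distribution, which has a closed form and depends on the data only through the spike-triggered sum. The simulation sizes below are arbitrary.

```python
import numpy as np

# Sketch of the "expected log-likelihood" trick for a Poisson GLM with
# exponential nonlinearity and Gaussian stimuli x_t ~ N(0, C).
# Exact log-likelihood (up to constants): sum_t [ r_t (w @ x_t) - dt exp(w @ x_t) ]
# Expected version: replace sum_t exp(w @ x_t) by T * E[exp(w @ x)]
#                 = T * exp(0.5 * w @ C @ w).
rng = np.random.default_rng(1)
T, n, dt = 50_000, 40, 0.01
C = np.eye(n)                               # white Gaussian stimulus covariance
X = rng.standard_normal((T, n))             # stimuli drawn from N(0, C), C = I
w_true = 0.2 * rng.standard_normal(n)
r = rng.poisson(dt * np.exp(X @ w_true))    # simulated spike counts

sta_sum = r @ X                             # spike-triggered sum

def exact_ll(w):
    return r @ (X @ w) - dt * np.exp(X @ w).sum()

def expected_ll(w):
    return sta_sum @ w - dt * T * np.exp(0.5 * w @ C @ w)

w = 0.1 * rng.standard_normal(n)
print(exact_ll(w), expected_ll(w))          # the two should agree closely for large T
```

The expected version never touches the full stimulus matrix once the spike-triggered sum is computed, which is where the order-of-magnitude speedups come from.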

Monday, December 19, 2011

David Pfau: Dec. 20th

David will be giving a fly-by view of a number of cool papers from NIPS.

First is Empirical Models of Spiking in Neural Populations by Macke, Büsing, Cunningham, Yu, Shenoy and Sahani, where they evaluate the relative merits of GLMs with pairwise coupling and state space models on multielectrode recordings from motor cortex.

Next, Quasi-Newton Methods for Markov Chain Monte Carlo by Zhang and Sutton looks at how to use approximate second-order methods like L-BFGS for MCMC while still preserving detailed balance.

Then, Demixed Principal Component Analysis is an extension of PCA that demixes the dependence of different latent dimensions on different observed parameters; it is used to analyze neural data from PFC.

Finally, Learning to Learn with Compound Hierarchical-Deep Models combines a deep neural network for learning visual features with a hierarchical nonparametric Bayesian model for learning object categories to make one cool-looking demo.

Wednesday, December 7, 2011

Ari Pakman: Dec. 13th

"Rescaling, thinning or complementing? On goodness-of-fit procedures for point process models and Generalized Linear Models" by Gerhard and Gerstner (NIPS 2010).

The abstract reads:

"Generalized Linear Models (GLMs) are an increasingly popular framework for modeling neural spike trains. They have been linked to the theory of stochastic point processes and researchers have used this relation to assess goodness-of-fit using methods from point-process theory, e.g. the time-rescaling theorem. However, high neural firing rates or coarse discretization lead to a breakdown of the assumptions necessary for this connection. Here, we show how goodness-of-fit tests from point-process theory can still be applied to GLMs by constructing equivalent surrogate point processes out of time-series observations. Furthermore, two additional tests based on thinning and complementing point processes are introduced. They augment the instruments available for checking model adequacy of point processes as well as discretized models."

Friday, July 23, 2010

Kolia Sadeghi : July 28

I will present work done with Liam, Jeff Gauthier and others in EJ Chichilnisky's lab on locating retinal cones from multiple ganglion cell recordings. We write down a single hierarchical model where ganglion cell responses are modeled as independent GLMs with space-time-color separable filters and no spike history. Assuming the stimulus is Gaussian ensures that the ganglion cell spike-triggered averages are sufficient statistics. The spatial component is then assumed to be a weighted sum of non-overlapping and appropriately placed archetypal cone receptive fields. With a benign approximation, we can integrate out the weights and focus on doing MCMC in the space of cone locations and colors only. As it turns out, this likelihood landscape has many nasty local maxima; we use parallel tempering and a few techniques specific to this problem to ensure ergodicity of the Markov chain.

A Google Scholar search for parallel tempering (also known as replica exchange, or exchange Monte Carlo) will bring up many papers on this simple technique. Here is a review:
Parallel tempering: Theory, applications, and new perspectives
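To make the technique concrete, here is a minimal sketch of parallel tempering on a toy bimodal target (my own illustration, not the cone-finding sampler): several chains run at different inverse temperatures, and adjacent chains occasionally propose to swap states, which lets the cold chain escape local modes while preserving detailed balance. The temperature ladder, proposal scale, and toy density are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(3)

def log_target(x):
    # Bimodal 1-D density with two well-separated modes at +/- 4
    return np.logaddexp(-0.5 * (x - 4.0) ** 2, -0.5 * (x + 4.0) ** 2)

betas = np.array([1.0, 0.5, 0.25, 0.1])     # inverse temperatures, cold to hot
x = np.zeros(len(betas))                    # one chain per temperature
samples = []

for it in range(20_000):
    # Metropolis update within each tempered chain (target ∝ pi(x)^beta)
    for k, beta in enumerate(betas):
        prop = x[k] + rng.normal(scale=1.0)
        if np.log(rng.random()) < beta * (log_target(prop) - log_target(x[k])):
            x[k] = prop
    # Propose swapping a random pair of adjacent temperatures
    k = rng.integers(len(betas) - 1)
    log_ratio = (betas[k] - betas[k + 1]) * (log_target(x[k + 1]) - log_target(x[k]))
    if np.log(rng.random()) < log_ratio:
        x[k], x[k + 1] = x[k + 1], x[k]
    samples.append(x[0])                    # keep only the beta = 1 (cold) chain

samples = np.array(samples)
print("fraction of cold-chain samples near each mode:",
      np.mean(samples > 0), np.mean(samples < 0))
```

Without the swap moves, the cold chain would typically get stuck in whichever mode it started near; with them, both modes are visited in roughly equal proportion.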