Tuesday, October 23, 2012

Giovanni Motta: Oct. 24th

Fitting evolutionary factor models to multivariate EEG data

Current approaches to fitting stationary (dynamic) factor models to multivariate time series are based on principal components analysis of the covariance (spectral) matrix. These approaches assume that the underlying process is temporally stationary, which appears restrictive because, over long time periods, the parameters are highly unlikely to remain constant. Our alternative approach is to model the time-varying covariances (auto-covariances) via nonparametric estimation, which imposes very little structure on the moments of the underlying process. Because of identification issues, only a subset of the model parameters is allowed to be time-varying. More precisely, we consider two specifications: in the first, the latent factors are stationary while the loadings are time-varying; in the second, the latent factors admit a dynamic representation with time-varying autoregressive coefficients while the loadings are constant over time. Estimation of the model parameters is accomplished by applying evolutionary principal components and local polynomials. We illustrate our approach through applications to multichannel EEG data.
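
To make the evolutionary-principal-components step concrete, here is a minimal numerical sketch (in Python/NumPy) of the general idea, not the paper's estimator: at each time point, form a kernel-smoothed local covariance and take its leading eigenvectors as time-varying loadings. The function name, the Gaussian kernel, and the bandwidth value are illustrative assumptions; the actual procedure additionally handles the identification constraints and uses local polynomials for the autocovariances.

import numpy as np

def evolutionary_pca(X, n_factors, bandwidth):
    """Time-varying loadings from a kernel-smoothed local covariance.

    X: (T, N) array -- N channels observed at T time points.
    bandwidth: kernel bandwidth on rescaled time in [0, 1].
    Returns an array of shape (T, N, n_factors) of local loadings.
    """
    T, N = X.shape
    times = np.arange(T) / T                     # rescaled time in [0, 1]
    loadings = np.empty((T, N, n_factors))
    for t in range(T):
        # Gaussian kernel weights centered at rescaled time t/T
        w = np.exp(-0.5 * ((times - times[t]) / bandwidth) ** 2)
        w /= w.sum()
        # Locally weighted mean and covariance at time t
        mu = w @ X
        Xc = X - mu
        S_t = (Xc * w[:, None]).T @ Xc
        # Leading eigenvectors of the local covariance = local loadings
        _, vecs = np.linalg.eigh(S_t)            # eigenvalues ascending
        loadings[t] = vecs[:, ::-1][:, :n_factors]
    return loadings

# Illustrative call: an 8-channel series with two local factors
X = np.random.default_rng(0).normal(size=(500, 8))
L = evolutionary_pca(X, n_factors=2, bandwidth=0.1)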

Ari Pakman: Oct. 10th

Ari will present the paper "The Horseshoe Estimator for Sparse Signals" by Carvalho, Polson, and Scott.

Abstract:
This paper proposes a new approach to sparse-signal detection called the horseshoe estimator. We show that the horseshoe is a close cousin of the lasso in that it arises from the same class of multivariate scale mixtures of normals, but that it is almost universally superior to the double-exponential prior at handling sparsity. A theoretical framework is proposed for understanding why the horseshoe is a better default “sparsity” estimator than those that arise from powered-exponential priors. Comprehensive numerical evidence is presented to show that the difference in performance can often be large. Most importantly, we show that the horseshoe estimator corresponds quite closely to the answers one would get if one pursued a full Bayesian model-averaging approach using a “two-groups” model: a point mass at zero for noise, and a continuous density for signals. Surprisingly, this correspondence holds both for the estimator itself and for the classification rule induced by a simple threshold applied to the estimator. We show how the resulting thresholded horseshoe can also be viewed as a novel Bayes multiple-testing procedure.
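
For context on the scale-mixture representation the abstract mentions: in the normal-means problem y_i ~ N(beta_i, 1), the horseshoe prior is beta_i | lambda_i ~ N(0, lambda_i^2 * tau^2) with lambda_i ~ C+(0, 1), a half-Cauchy. With tau = 1, the shrinkage weight kappa_i = 1 / (1 + lambda_i^2) in the posterior mean (1 - kappa_i) * y_i follows a Beta(1/2, 1/2) density, whose U shape, with mass piled near 0 (leave signals alone) and near 1 (shrink noise to zero), gives the estimator its name. A minimal NumPy sketch checking this numerically:

import numpy as np

rng = np.random.default_rng(0)

# Horseshoe scale mixture: beta_i | lambda_i ~ N(0, lambda_i^2),
# with local scales lambda_i ~ C+(0, 1) and global scale tau fixed at 1.
n = 100_000
lam = np.abs(rng.standard_cauchy(n))     # half-Cauchy draws
beta = rng.normal(0.0, lam)              # draws from the horseshoe prior

# Shrinkage weight in the posterior mean (1 - kappa_i) * y_i for unit noise:
kappa = 1.0 / (1.0 + lam ** 2)

# kappa should look Beta(1/2, 1/2): U-shaped, high near both 0 and 1.
hist, _ = np.histogram(kappa, bins=10, range=(0.0, 1.0), density=True)
print(np.round(hist, 2))                 # largest values in the end bins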

Alex Ramirez: Oct. 3rd

Alex will give a brief introduction to the method of generalized cross-validation.

I'll give an introductory review of a method for model selection known as generalized cross-validation (GCV) (see linked paper). When dealing with linear predictor models, GCV provides a computationally convenient approximation to "leave-one-out" cross-validation. I'll discuss this connection between cross-validation and GCV in more detail, and then discuss attempts made in the literature at extending the idea behind GCV to more general models in the exponential family.
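
To make the connection concrete: for a linear smoother y_hat = H(lambda) y, the leave-one-out residual is (y_i - y_hat_i) / (1 - h_ii), and GCV simply replaces each leverage h_ii by the average tr(H)/n, giving GCV(lambda) = (1/n) * ||y - y_hat||^2 / (1 - tr(H)/n)^2. Here is a minimal NumPy sketch for ridge regression; the data and the lambda grid are illustrative.

import numpy as np

def gcv_score(X, y, lam):
    """GCV score for ridge regression with penalty lam.

    GCV(lam) = (1/n) * ||y - y_hat||^2 / (1 - tr(H)/n)^2,
    where H = X (X'X + lam I)^{-1} X' is the hat matrix.
    """
    n, p = X.shape
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
    resid = y - H @ y
    edf = np.trace(H)                  # effective degrees of freedom
    return (resid @ resid / n) / (1.0 - edf / n) ** 2

# Choose lambda by minimizing GCV over a grid.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
y = X @ rng.normal(size=10) + rng.normal(size=200)
grid = np.logspace(-3, 3, 25)
best = min(grid, key=lambda lam: gcv_score(X, y, lam))
print(f"lambda chosen by GCV: {best:.4g}")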

Josh Merel: Sep. 26th

Josh will present two recent papers on unsupervised learning:
http://web.eecs.umich.edu/~honglak/cacm2011-researchHighlights-convDBN.pdf
http://research.google.com/pubs/pub38115.html