Tuesday, February 21, 2012

Kamiar Rahnama Rad: Feb. 21

The following two questions will be discussed:
1. How does embedding low-dimensional structures in high-dimensional spaces significantly decrease the learning complexity? I will consider the simplest such model: a linear transformation with additive noise.
2. Modern datasets are accumulated (and in some cases even stored) in a distributed or decentralized manner. Can distributed algorithms be designed to fit a global model over such datasets while retaining the performance of centralized estimators?
The talk will be based on the following two papers: 
http://www.columbia.edu/~kr2248/papers/ieee-sparse.pdf  
http://www.columbia.edu/~kr2248/papers/CDC2010-1.pdf
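
As a point of reference for question 1 (not taken from either paper), here is a toy sketch of the model: a sparse signal in a high-dimensional space observed through a random linear transformation with additive noise, recovered with the lasso via iterative soft-thresholding. The dimensions, noise level, and regularization weight are made-up illustrative values.

# Toy sketch: sparse recovery from noisy linear measurements y = A x + w,
# solved with the lasso via iterative soft-thresholding (ISTA).
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 100, 400, 10                       # measurements, ambient dimension, sparsity
A = rng.standard_normal((n, p)) / np.sqrt(n)
x_true = np.zeros(p)
x_true[rng.choice(p, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true + 0.05 * rng.standard_normal(n)

lam = 0.02                                   # l1 penalty weight (illustrative)
step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1 / Lipschitz constant of the gradient
x = np.zeros(p)
for _ in range(500):
    grad = A.T @ (A @ x - y)                 # gradient of 0.5 * ||y - A x||^2
    z = x - step * grad
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold

recovered = set(np.flatnonzero(np.abs(x) > 1e-3))
print("support recovered:", recovered == set(np.flatnonzero(x_true)))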

Monday, February 13, 2012

Bryan Conroy: Feb. 14th

Bryan Conroy will talk about a fast method for computing many related l2-regularized logistic regression problems, and about possible extensions to other GLMs and to l1 regularizers.
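
For orientation only (this is a single-problem baseline, not the fast method for many related problems that Bryan will present), here is a minimal sketch of one l2-regularized logistic regression fit by Newton's method; the data and penalty value are made up.

# Baseline: a single l2-regularized logistic regression fit by Newton's method.
import numpy as np

def fit_l2_logistic(X, y, lam, n_iter=25):
    # Minimize sum_i log(1 + exp(-y_i x_i^T w)) + 0.5 * lam * ||w||^2, with y in {-1, +1}.
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        m = y * (X @ w)                       # margins
        p = 1.0 / (1.0 + np.exp(m))           # sigmoid(-m)
        grad = -X.T @ (y * p) + lam * w
        s = p * (1.0 - p)                     # per-sample Hessian weights
        H = X.T @ (X * s[:, None]) + lam * np.eye(d)
        w -= np.linalg.solve(H, grad)         # Newton step
    return w

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
w_true = rng.standard_normal(5)
y = np.sign(X @ w_true + 0.1 * rng.standard_normal(200))
print(np.round(fit_l2_logistic(X, y, lam=1.0), 2))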

Friday, February 3, 2012

Eftychios P.: Jan. 31st and Feb. 7th

I am planning to lead a very informal discussion on some neat techniques for convex and semidefinite relaxation that can be used to transform intractable optimization problems into approximate but convex ones. I'll also discuss a few applications to statistical neuroscience that we are currently pursuing.
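
As a warm-up illustration of the idea (a textbook case, not one of the statistical neuroscience applications mentioned above), here is the Goemans-Williamson semidefinite relaxation of max-cut: an intractable combinatorial problem replaced by a convex SDP plus randomized rounding. The sketch assumes cvxpy is installed, and the small random graph is made up.

# Semidefinite relaxation example: max-cut via the Goemans-Williamson SDP.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
n = 6
W = rng.integers(0, 2, size=(n, n)).astype(float)
W = np.triu(W, 1); W = W + W.T                # symmetric 0/1 edge weights

# Original problem: maximize (1/4) sum_ij W_ij (1 - x_i x_j) over x in {-1, +1}^n.
# Relaxation: lift X = x x^T to any PSD matrix with unit diagonal.
X = cp.Variable((n, n), symmetric=True)
prob = cp.Problem(cp.Maximize(0.25 * cp.sum(cp.multiply(W, 1 - X))),
                  [X >> 0, cp.diag(X) == 1])
prob.solve()

# Random-hyperplane rounding: factor X and take signs of random projections.
vals, vecs = np.linalg.eigh(X.value)
V = vecs * np.sqrt(np.clip(vals, 0, None))    # rows v_i satisfy V @ V.T ~= X
x = np.sign(V @ rng.standard_normal(n))
cut = 0.25 * np.sum(W * (1 - np.outer(x, x)))
print("SDP upper bound:", round(prob.value, 3), " rounded cut value:", cut)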

Some background material (although I'm not planning to go over any of these in detail) includes:

http://www.se.cuhk.edu.hk/~manchoso/papers/sdrapp-SPM.pdf
http://arxiv.org/abs/1012.0621
http://www-stat.stanford.edu/~candes/papers/PhaseRetrieval.pdf
http://users.cms.caltech.edu/~jtropp/papers/MT11-Two-Proposals-EJS.pdf