Tuesday, February 21, 2012

Kamiar Rahnama Rad: Feb. 21

The following two questions will be discussed:

1. How does embedding low-dimensional structures in high-dimensional spaces significantly decrease the learning complexity? I will consider the simplest such model: a linear transformation with additive noise (a toy sketch follows below).

2. Modern datasets are accumulated (and in some cases even stored) in a distributed or decentralized manner. Can distributed algorithms be designed to fit a global model over such datasets while retaining the performance of centralized estimators? (A sketch of this idea appears after the paper links.)
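As a rough illustration of question 1 (my own toy example, not taken from the talk or the papers): the model is y = A x + w with a sparse signal x. Even when the ambient dimension p far exceeds the number of observations n, the low-dimensional (sparse) structure makes estimation tractable. The dimensions, noise scale, and LASSO penalty below are arbitrary choices.

# A minimal sketch of a sparse signal observed through a linear
# transformation plus noise, y = A x + w, recovered with an
# off-the-shelf LASSO solver. All parameter values are illustrative.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

n, p, k = 100, 400, 5          # observations, ambient dimension, sparsity
x_true = np.zeros(p)
x_true[rng.choice(p, k, replace=False)] = rng.normal(size=k)

A = rng.normal(size=(n, p)) / np.sqrt(n)    # random measurement matrix
y = A @ x_true + 0.01 * rng.normal(size=n)  # linear transform + additive noise

# Despite p >> n, the sparse structure makes recovery feasible.
x_hat = Lasso(alpha=0.01, max_iter=10000).fit(A, y).coef_
print("true support recovered:",
      set(np.flatnonzero(x_true)) <= set(np.flatnonzero(x_hat)))
print("relative error:",
      np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))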
The talk will be based on the following two papers: 
http://www.columbia.edu/~kr2248/papers/ieee-sparse.pdf  
http://www.columbia.edu/~kr2248/papers/CDC2010-1.pdf
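And as a rough sketch of question 2 (again my own toy example, not the algorithm from the papers): if each of m nodes shares only its local least-squares sufficient statistics, the aggregated estimate coincides with the centralized estimator, with no raw data exchanged. The node count, local sample sizes, and noise level are arbitrary.

# A minimal sketch of distributed estimation: m nodes each hold a slice of
# the data and communicate only local summaries (A_i^T A_i and A_i^T y_i).
# Aggregating these reproduces the centralized least-squares estimate exactly.
import numpy as np

rng = np.random.default_rng(1)
m, n_per_node, p = 10, 50, 20
x_true = rng.normal(size=p)

# Local datasets, one per node.
local_data = []
for _ in range(m):
    A_i = rng.normal(size=(n_per_node, p))
    y_i = A_i @ x_true + 0.1 * rng.normal(size=n_per_node)
    local_data.append((A_i, y_i))

# Each node sends only a p x p and a p x 1 summary, never its raw data.
G = sum(A_i.T @ A_i for A_i, _ in local_data)
b = sum(A_i.T @ y_i for A_i, y_i in local_data)
x_distributed = np.linalg.solve(G, b)

# Centralized estimator on the pooled data, for comparison.
A_all = np.vstack([A_i for A_i, _ in local_data])
y_all = np.concatenate([y_i for _, y_i in local_data])
x_centralized, *_ = np.linalg.lstsq(A_all, y_all, rcond=None)

print("max difference:", np.max(np.abs(x_distributed - x_centralized)))
# The difference is at the level of floating-point error.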
