This week, I'll be giving a fly-by overview of a string of recent papers on exact sparse signal recovery that do better than LASSO by solving a sequence of reweighted L1- or L2-penalized problems. Here is the basic narrative:
LASSO uses a penalty weighted by the same lambda for all coefficients. What happens if you assign a different lambda to each coefficient, and update these lambdas iteratively? Candès, Wakin, and Boyd do this in "Enhancing sparsity by reweighted L1 minimization".
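Here is a minimal Python sketch of their scheme. The weight update w_i = 1/(|x_i| + eps) is the one proposed in the paper; casting each weighted problem as a linear program via scipy.optimize.linprog, and my choices of eps and iteration count, are illustrative assumptions rather than their tuned setup.

```python
import numpy as np
from scipy.optimize import linprog

def reweighted_l1(A, b, n_iters=5, eps=0.1):
    """Reweighted L1 for exact recovery: min sum_i w_i |x_i|  s.t.  Ax = b."""
    m, n = A.shape
    w = np.ones(n)                              # first pass is plain L1
    for _ in range(n_iters):
        # LP in z = [x; u]: minimize w^T u  s.t.  |x_i| <= u_i  and  Ax = b
        c = np.concatenate([np.zeros(n), w])
        I = np.eye(n)
        A_ub = np.block([[I, -I], [-I, -I]])    # x - u <= 0,  -x - u <= 0
        A_eq = np.hstack([A, np.zeros((m, n))])
        res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * n),
                      A_eq=A_eq, b_eq=b,
                      bounds=[(None, None)] * n + [(0, None)] * n)
        x = res.x[:n]
        # the paper's update: small coefficients get large penalties next round
        w = 1.0 / (np.abs(x) + eps)
    return x

# toy demo: recover a 10-sparse vector from 80 random measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 256))
x0 = np.zeros(256)
x0[rng.choice(256, 10, replace=False)] = rng.standard_normal(10)
x_hat = reweighted_l1(A, A @ x0)
print(np.max(np.abs(x_hat - x0)))   # should be tiny
```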
You can obtain sparsity by iterative reweighting even for L2-penalized problems: if some of the lambdas become infinite, the corresponding coefficients become exactly zero. Chartrand and Yin find a particularly good L2 reweighting scheme in "Iteratively reweighted algorithms for compressive sensing".
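Below is a rough sketch of that iteratively reweighted least squares (IRLS) loop. The weights (x_i^2 + eps)^(p/2 - 1) and the idea of shrinking eps as the iterates stabilize follow the paper; the specific constants (initial eps, shrink factor, thresholds) here are illustrative guesses, not their exact values.

```python
import numpy as np

def irls_lp(A, b, p=0.5, n_iters=50):
    """IRLS sketch for  min ||x||_p^p  s.t.  Ax = b,  0 < p <= 1.

    Each iteration solves the weighted L2 problem
        min_x  sum_i w_i x_i^2   s.t.  Ax = b,
    which has the closed form x = Q A^T (A Q A^T)^{-1} b with Q = diag(1/w).
    """
    x = np.linalg.lstsq(A, b, rcond=None)[0]    # least-squares initialization
    eps = 1.0
    for _ in range(n_iters):
        q = (x**2 + eps) ** (1 - p / 2)         # 1/w_i, from w_i = (x_i^2+eps)^(p/2-1)
        AQ = A * q                              # A @ diag(q)
        x_new = q * (A.T @ np.linalg.solve(AQ @ A.T, b))
        # epsilon continuation: shrink eps once the iterate has settled;
        # this gradual sharpening is the part Chartrand and Yin found crucial
        if np.linalg.norm(x_new - x) < np.sqrt(eps) / 100:
            eps /= 10
        x = x_new
    return x
```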
All of the above methods reweight each lambda based only on the value of its corresponding coefficient: they are separable. In "Iterative reweighted l1 and l2 methods for finding sparse solutions", Wipf and Nagarajan consider non-separable reweighting schemes that come out of Sparse Bayesian Learning (SBL), which you might also know by the name of Relevance Vector Machine or Automatic Relevance Determination.
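To see where the non-separability comes from, here is a bare-bones EM-style SBL sketch. This is the classic RVM/ARD update, not the paper's reweighted l1/l2 variants, and lam and the iteration count are illustrative; the point is that each gamma_i is updated using the posterior covariance, which couples all coefficients through A.

```python
import numpy as np

def sbl_em(A, b, lam=1e-3, n_iters=100):
    """EM-style Sparse Bayesian Learning sketch for b = Ax + noise.

    gamma_i is the prior variance of coefficient x_i; lam plays the role
    of the noise variance. Coefficients whose gamma_i shrinks to zero are
    pruned to exactly zero, which is how SBL produces sparsity.
    """
    m, n = A.shape
    gamma = np.ones(n)
    for _ in range(n_iters):
        # posterior of x given b is N(mu, Sigma) with
        #   K = lam*I + A diag(gamma) A^T
        #   mu = diag(gamma) A^T K^{-1} b
        #   Sigma_ii = gamma_i - gamma_i^2 (A^T K^{-1} A)_{ii}
        K = lam * np.eye(m) + (A * gamma) @ A.T
        KinvA = np.linalg.solve(K, A)
        mu = gamma * (A.T @ np.linalg.solve(K, b))
        Sigma_diag = gamma - gamma**2 * np.sum(A * KinvA, axis=0)
        # EM update: gamma_i needs Sigma_ii, which depends on ALL the
        # gammas through K -- this is the non-separable coupling
        gamma = mu**2 + Sigma_diag
    return mu
```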