This Tuesday, 2011/05/10, I will discuss methods for accelerating the convergence of algorithms that converge only linearly near the fixed point, such as EM, which are notoriously slow in that regime. Two approaches are possible: modify the iterative algorithm itself (PX-EM, via parameter expansion; ECM, via conditional maximization over each parameter while keeping the others fixed; etc.), or use the recent history of the iterates to extrapolate them closer to the fixed point (in which case you keep all your machinery intact and merely plug in an auxiliary function that extrapolates from the already computed iteration steps). I will talk about the second class of accelerators.
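To make the extrapolation idea concrete before the talk, here is a minimal sketch of the classic Aitken delta-squared scheme wrapped around an arbitrary fixed-point map. This is only an illustration of the general idea, not the method I will present; the tolerances and the componentwise fallback are placeholders of my choosing.

```python
import numpy as np

def aitken_accelerate(F, x0, tol=1e-10, max_iter=500):
    """Accelerate a linearly convergent fixed-point iteration x <- F(x)
    with componentwise Aitken delta-squared extrapolation."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x1 = F(x)
        x2 = F(x1)
        d1 = x1 - x
        d2 = x2 - x1
        denom = d2 - d1                 # second difference
        # Extrapolate where the denominator is safe; otherwise keep x2.
        safe = np.abs(denom) > 1e-14
        x_new = np.where(safe, x - d1**2 / np.where(safe, denom, 1.0), x2)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Toy usage: the contraction x <- cos(x) converges linearly on its own,
# but the extrapolated sequence reaches the fixed point in a few sweeps.
root = aitken_accelerate(np.cos, np.array([1.0]))
```

The point of the wrapper structure is exactly what the talk is about: the underlying iteration F is treated as a black box, and the accelerator only consumes the iterates it produces.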
I will start with the method I derived myself, which is visual, yet powerful at the same time. I will then focus on two papers which seem to be becoming the gold standard among acceleration techniques: Varadhan, R. & Roland, C., "Simple and Globally Convergent Methods for Accelerating the Convergence of Any EM Algorithm" (dx.doi.org/10.1111/j.1467-9469.2007.00585.x), from 2008, and Zhou, H., Alexander, D. & Lange, K., "A quasi-Newton acceleration for high-dimensional optimization algorithms" (dx.doi.org/10.1007/s11222-009-9166-3), from 2011. I have only just found out about the second paper, and it seems to overlap heavily with the method I derived. I hope we will clear this question up!
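For a taste of the first paper, here is my own back-of-the-envelope sketch of one squared extrapolation step in the spirit of Varadhan & Roland, using their SqS3-type step length alpha = -||r|| / ||v||. Consult the paper for the safeguards (step-length control, monotonicity checks) that make the scheme globally convergent; I omit them here.

```python
import numpy as np

def squarem_step(F, x0):
    """One squared-extrapolation step in the spirit of Varadhan & Roland (2008),
    with an SqS3-type step length alpha = -||r|| / ||v||."""
    x1 = F(x0)
    x2 = F(x1)
    r = x1 - x0                      # first residual
    v = (x2 - x1) - r                # change in residuals (curvature term)
    if np.linalg.norm(v) < 1e-14:    # residuals no longer changing: done
        return x2
    alpha = -np.linalg.norm(r) / np.linalg.norm(v)
    x_acc = x0 - 2.0 * alpha * r + alpha**2 * v
    return F(x_acc)                  # stabilizing map evaluation
```

A nice sanity check: alpha = -1 recovers the plain double iterate x2, so with a suitably safeguarded step length the scheme should never do worse than two ordinary steps of F.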