Friday, July 23, 2010

Deterministic particle filtering

No resampling, rejection sampling, or importance sampling is used: particles are propagated through time by numerically integrating an ODE. The method is very similar in spirit to Jascha Sohl-Dickstein, Peter Battaglino and Mike DeWeese's minimum probability flow learning, but applied to nonlinear filtering.
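To make the idea concrete, here is a minimal sketch of the measurement update as a deterministic flow, in the one case where the flow is known in closed form: a scalar linear-Gaussian model (prior N(m0, p), measurement z = x + v with v ~ N(0, r)). The flow coefficients below are the standard "exact" Daum-Huang expressions for this case; the Euler integrator, function name, and parameter choices are my own illustrative assumptions, not the authors' code.

```python
import numpy as np

def daum_huang_flow(particles, m0, p, r, z, n_steps=1000):
    """Move prior particles toward the posterior by integrating the
    log-homotopy flow dx/dlam = A(lam)*x + b(lam) over lam in [0, 1].
    Scalar linear-Gaussian case with measurement matrix H = 1."""
    x = particles.astype(float).copy()
    dlam = 1.0 / n_steps
    lam = 0.0
    for _ in range(n_steps):
        # Exact-flow coefficients for prior N(m0, p), noise variance r.
        A = -0.5 * p / (lam * p + r)
        b = (1 + 2 * lam * A) * ((1 + lam * A) * p * z / r + A * m0)
        x += dlam * (A * x + b)  # forward Euler step; no weights, no resampling
        lam += dlam
    return x

rng = np.random.default_rng(0)
m0, p = 0.0, 1.0              # prior mean and variance
r, z = 1.0, 2.0               # measurement noise variance and observation
prior = rng.normal(m0, np.sqrt(p), size=20000)
post = daum_huang_flow(prior, m0, p, r, z)

# The Kalman posterior here is N(1.0, 0.5); the flowed particle cloud
# should match it closely.
print(post.mean(), post.var())
```

Each particle moves deterministically; the cloud as a whole is transported from the prior to the posterior, which is why no resampling step is needed. For genuinely nonlinear models the coefficients are no longer available in closed form, which is where the gradient-approximation questions in the companion papers come in.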

The authors report orders-of-magnitude speedups for higher-dimensional state spaces, where rejection sampling would be a problem.

Particle flow for nonlinear filters with log-homotopy by Fred Daum & Jim Huang

There are a couple of companion papers to this one:
Nonlinear filters with particle flow induced by log-homotopy
Seventeen dubious methods to approximate the gradient for nonlinear filters with particle flow

As you may see, the authors have a very peculiar writing style.

However, one very recent paper by Lingji Chen and Raman Mehra points out some flaws in the approach:
A study of nonlinear filters with particle flow induced by log-homotopy
(but see the group meeting announcement above for Fred Daum and Jim Huang's recent answer to this).
