Thursday, July 29, 2010
Deterministic particle filtering, again
I will attempt to take the latest paper on the deterministic particle flow filter discussed in a previous blog post and strip it down to the essentials. The authors claim to present a more general, stable, and improved version of their previous deterministic particle flow filter. The paper is rife with ideas and peculiarly written; for a gentler introduction, please refer to the papers linked in the previous blog post. Here is the paper:
Exact particle flow for nonlinear filters by Fred Daum, Jim Huang and Arjang Noushin,
Numerical experiments for nonlinear filters with exact particle flow induced by log-homotopy (companion paper)
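For a sense of what the flow looks like: in the linear-Gaussian measurement case the exact flow is affine in the state, dx/dlambda = A(lambda) x + b(lambda). Below is a minimal NumPy sketch (my own toy illustration, not the authors' code) that integrates this flow with Euler steps; nonlinear problems require local linearization, which is not shown, and all the toy values are made up.

    import numpy as np

    def exact_flow_step(x, xbar, P, H, R, z, lam, dlam):
        """One Euler step of the affine exact flow dx/dlam = A(lam) x + b(lam)
        for a Gaussian prior N(xbar, P) and linear measurement z = H x + noise, noise ~ N(0, R)."""
        S = lam * H @ P @ H.T + R
        A = -0.5 * P @ H.T @ np.linalg.solve(S, H)
        I = np.eye(P.shape[0])
        b = (I + 2.0 * lam * A) @ ((I + lam * A) @ P @ H.T @ np.linalg.solve(R, z) + A @ xbar)
        return x + dlam * (x @ A.T + b)

    # toy setup: 2-d state, scalar measurement
    rng = np.random.default_rng(0)
    xbar, P = np.zeros(2), np.eye(2)
    H = np.array([[1.0, 0.5]])
    R = np.array([[0.1]])
    z = np.array([1.0])

    x = rng.multivariate_normal(xbar, P, size=1000)  # particles sampled from the prior
    n = 100
    for k in range(n):  # move the particles from the prior (lam=0) to the posterior (lam=1)
        x = exact_flow_step(x, xbar, P, H, R, z, k / n, 1.0 / n)
    # x now holds (approximately) samples from the posterior p(x | z)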
We meet on Wednesdays at 1pm, in the 10th floor conference room of the Statistics Department, 1255 Amsterdam Ave, New York, NY.
Tuesday, July 27, 2010
A Computationally Efficient Method for Nonparametric Modeling of Neural Spiking Activity with Point Processes
this would be interesting to go over at some point...
A Computationally Efficient Method for Nonparametric Modeling of Neural Spiking Activity with Point Processes
by Todd Coleman and Sridevi Sarma
Friday, July 23, 2010
Deterministic particle filtering
No resampling, rejection, or importance sampling is used: particles are propagated through time by numerically integrating an ODE. The method is very similar in spirit to Jascha Sohl-Dickstein, Peter Battaglino and Mike DeWeese's Minimum probability flow learning, but applied to nonlinear filtering.
The authors report orders-of-magnitude speedups in higher-dimensional state spaces, where rejection sampling would be a problem.
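For orientation, the ODE comes from a log-homotopy that deforms the prior into the posterior: writing g for the prior density, h for the likelihood, and K(lambda) for the normalizing constant, the particles are moved so that their density tracks

$$\log p(x, \lambda) = \log g(x) + \lambda \log h(x) - \log K(\lambda), \qquad \lambda \in [0, 1],$$

so that at lambda = 0 the particles represent the prior, and at lambda = 1 the posterior.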
Particle flow for nonlinear filters with log-homotopy by Fred Daum & Jim Huang
There are a couple of companion papers to this one:
Nonlinear filters with particle flow induced by log-homotopy
Seventeen dubious methods to approximate the gradient for nonlinear filters with particle flow
As you can see, the authors have a very peculiar writing style.
However, one very recent paper by Lingji Chen and Raman Mehra points out some flaws in the approach:
A study of nonlinear filters with particle flow induced by log-homotopy
(but see the group meeting announcement above for Fred Daum and Jim Huang's recent answer to this).
Kolia Sadeghi : July 28
I will present work done with Liam, Jeff Gauthier and others in EJ Chichilnisky's lab on locating retinal cones from multiple ganglion cell recordings. We write down a single hierarchical model in which ganglion cell responses are modeled as independent GLMs with space-time-color separable filters and no spike history. Assuming the stimulus is Gaussian ensures that the ganglion cell spike-triggered averages are sufficient statistics. The spatial component is then assumed to be a weighted sum of non-overlapping, appropriately placed archetypical cone receptive fields. With a benign approximation, we can integrate out the weights and focus on doing MCMC in the space of cone locations and colors only. As it turns out, this likelihood landscape has many nasty local maxima; we use parallel tempering and a few techniques specific to this problem to ensure ergodicity of the Markov chain.
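Since the argument leans on spike-triggered averages being sufficient statistics under a Gaussian stimulus, here is a minimal sketch of computing an STA; the array names and shapes are made up for illustration and this is not the lab's actual pipeline.

    import numpy as np

    def spike_triggered_average(stimulus, spikes, n_lags):
        """Average the n_lags stimulus frames preceding each spike.

        stimulus : (T, d) array, one flattened (pixels x colors) frame per time bin
        spikes   : (T,) array of spike counts per time bin
        """
        T, d = stimulus.shape
        sta = np.zeros((n_lags, d))
        total = 0
        for t in range(n_lags, T):
            if spikes[t]:
                sta += spikes[t] * stimulus[t - n_lags:t]  # window ending just before the spike
                total += spikes[t]
        return sta / max(total, 1)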
A Google Scholar search on parallel tempering (also known as replica exchange, or simply exchange Monte Carlo) will bring up many papers on this simple technique. Here is a review:
Parallel tempering: Theory, applications, and new perspectives
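For reference, the technique fits in a few lines. Here is a minimal sketch on a toy bimodal target; the target, temperature ladder, and proposal scale are all placeholders, not the settings used for the cone problem.

    import numpy as np

    rng = np.random.default_rng(1)

    def log_p(x):
        # toy target: an even mixture of two well-separated Gaussians
        return np.logaddexp(-0.5 * (x - 4.0) ** 2, -0.5 * (x + 4.0) ** 2)

    betas = np.array([1.0, 0.5, 0.25, 0.1])  # inverse temperatures; beta = 1 is the target
    x = rng.normal(size=betas.size)          # one chain per temperature
    samples = []

    for sweep in range(10000):
        # ordinary Metropolis update within each tempered chain
        prop = x + rng.normal(scale=1.0, size=x.size)
        accept = np.log(rng.uniform(size=x.size)) < betas * (log_p(prop) - log_p(x))
        x = np.where(accept, prop, x)
        # exchange move: propose swapping the states of an adjacent pair of temperatures
        i = rng.integers(betas.size - 1)
        if np.log(rng.uniform()) < (betas[i] - betas[i + 1]) * (log_p(x[i + 1]) - log_p(x[i])):
            x[i], x[i + 1] = x[i + 1], x[i]
        samples.append(x[0])  # the beta = 1 chain samples the target

The hot chains mix freely across the modes, and the exchange moves let those crossings propagate down to the beta = 1 chain, which is what restores ergodicity on multimodal landscapes.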
Labels:
cones,
ergodicity,
ganglion cells,
GLM,
group meeting,
hierarchical model,
macaque,
MCMC,
parallel tempering,
retina
Thursday, July 22, 2010
Some interesting papers from AISTATS 2010
Here are a few potentially interesting papers from AISTATS this year, in no particular order. All PDFs are available from http://jmlr.csail.mit.edu/proceedings/papers/v9/:
by Botond Cseke, Tom Heskes
by Lauren Hannah, David Blei, Warren Powell
by Jun Li, Dacheng Tao
by Mark Schmidt, Kevin Murphy
by Sajid Siddiqi, Byron Boots, Geoffrey Gordon
by Aarti Singh, Robert Nowak, Robert Calderbank
by Nikolai Slavov
by Bharath Sriperumbudur, Kenji Fukumizu, Gert Lanckriet
by Ryan Turner, Marc Deisenroth, Carl Rasmussen
by James Martens, Ilya Sutskever
by Jimmy Olsson, Jonas Strojby
by Steve Hanneke, Liu Yang
Labels:
AISTATS,
dirichlet process,
HMM,
latent gaussian,
MRF,
particle filtering,
RKHS,
sparse networks
Jittering spike trains carefully
Presented in lab meeting by Alex Ramirez on July 14th, 2010.
by Matthew Harrison and Stuart Geman
In it, the authors describe an algorithm that takes a spike train and jitters the spike times to create a new spike train that is maximally random while preserving the firing rate and recent spike history of the original train.
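As a point of comparison, the most naive version of the idea (jitter each spike uniformly within a fixed window, which preserves a coarse firing rate but ignores the spike-history constraint that the paper's algorithm enforces) is just a few lines:

    import numpy as np

    rng = np.random.default_rng(2)

    def window_jitter(spike_times, window=20.0):
        """Redraw each spike uniformly within its window of width `window` (same time units).

        Spike counts per window (a coarse firing rate) are preserved exactly;
        the recent spike-history structure of the original train is not.
        """
        spike_times = np.asarray(spike_times, dtype=float)
        bins = np.floor(spike_times / window)  # which window each spike lands in
        return np.sort((bins + rng.uniform(size=bins.size)) * window)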
Sunday, July 18, 2010
Logan Grosenick : 3pm July 21
Logan Grosenick: Center for Mind, Brain, and Computation & Department of Bioengineering, Stanford University.

TITLE: Fast classification, regression, and multivariate methods for sparse but structured data with applications to whole-brain fMRI and volumetric calcium imaging

ABSTRACT: Modern neuroimaging methods allow the rapid collection of large (> 100,000 voxel) volumetric time-series. Consequently there has been a growing interest in applying supervised (classification, regression) and unsupervised (factor analytic) machine learning methods to uncover interesting patterns in these rich data. However, as classically formulated, such approaches are difficult to interpret when fit to correlated, multivariate data in the presence of noise. In such cases, these models may suffer from coefficient instability and sensitivity to outliers, and typically return dense rather than parsimonious solutions. Furthermore, on large data they can take an unreasonably long time to compute.

I will discuss ongoing research in the area of sparse but structured methods for classification, regression, and factor analysis that aim to produce interpretable solutions and to incorporate realistic physical priors in the face of large, spatially and temporally correlated data. Two examples--whole-brain classification of spatiotemporal fMRI data and nonnegative sparse PCA applied to 3D calcium imaging--will be presented.
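As a trivial baseline for the first example, an l1-penalized classifier already gives sparse voxel weights. The sketch below uses plain lasso-style sparsity on fake data, without the structured spatial priors that are the point of the talk:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(3)
    X = rng.normal(size=(200, 10000))   # fake data: 200 scans x 10,000 voxels
    y = rng.integers(0, 2, size=200)    # fake binary condition labels

    # the l1 penalty drives most voxel weights to exactly zero
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    clf.fit(X, y)
    print("nonzero voxel weights:", np.count_nonzero(clf.coef_))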
Labels:
calcium imaging,
classification,
fMRI,
group meeting,
regression,
sparse PCA,
supervised,
unsupervised
Saturday, July 10, 2010
Alex Ramirez : July 14
I'll be presenting a paper from Matt Harrison. In it, the authors describe an algorithm that takes a spike train and jitters the spike times to create a new spike train that is maximally random while preserving the firing rate and recent spike history of the original train.
Thursday, July 1, 2010
Carl Smith : July 8
We have developed a simple model neuron for inference on noisy spike trains. In particular, we intend to use this model for computationally tractable quantification of information loss due to spike-time jitter. I will introduce the model, and in particular its favorable scaling properties. I'll show some results from inference on synthetic data. Lastly, I'll describe an efficient scheme we devised for inference with a particular class of priors on the stimulus space, which could be interesting outside the context of this model.