David Pfau: May 15th
Title: Robust Learning of Low-Dimensional Dynamics from Large Neural Ensembles
Abstract: Progress in neural recording technology has made it possible to record spikes from ever larger populations of neurons. To cope with this deluge, a common strategy is to reduce the dimensionality of the data, most often by principal component analysis (PCA). In recent years a number of extensions to PCA have been introduced in the neuroscience literature, including jPCA and demixed principal component analysis (dPCA). A downside of these methods is that they treat neither the discrete nature of spike data nor the positivity of firing rates in a statistically principled way. In fact, it is common practice to smooth the data substantially or to average over many trials, losing information about fine temporal structure and inter-trial variability.
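The smooth-then-PCA pipeline criticized above can be sketched in a few lines. Everything here is illustrative: the dimensions, the Gaussian smoothing kernel, and the synthetic spike counts are all made up for the example, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: Poisson spike counts from 100 neurons over 500 time
# bins, driven by a 3-dimensional latent trajectory (purely illustrative).
T, N, d = 500, 100, 3
latent = np.cumsum(rng.standard_normal((T, d)), axis=0)  # random-walk latents
loading = 0.1 * rng.standard_normal((d, N))
rates = np.exp(latent @ loading - 2.0)                   # positive firing rates
spikes = rng.poisson(rates)                              # discrete spike counts

# The common practice the abstract criticizes: smooth heavily, then run PCA,
# which discards fine temporal structure in the spike trains.
kernel = np.exp(-0.5 * (np.arange(-25, 26) / 10.0) ** 2)
kernel /= kernel.sum()
smoothed = np.apply_along_axis(
    lambda x: np.convolve(x, kernel, mode="same"), 0, spikes)

# PCA via SVD of the mean-centered, smoothed data.
centered = smoothed - smoothed.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
pcs = centered @ Vt[:d].T  # projection onto the top d principal components
```

Note that the Gaussian observation model implicit in PCA ignores both the Poisson variability and the positivity constraint that generated `spikes`.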
A more principled approach is to fit a state space model directly to the spike data, where the latent state is low dimensional. Such models can account for the discreteness of spikes by using point-process models for the observations, and can incorporate temporal dependencies into the latent state model. State space models can also include complex interactions such as switching linear dynamics and direct coupling between neurons. These methods have drawbacks of their own: they are typically fit by approximate EM or other procedures prone to local minima; the number of latent dimensions must be chosen ahead of time (though nonparametric Bayesian models could avoid this issue); and a class of possible dynamics must be chosen before dimensionality reduction is performed.
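As a concrete instance of such a model, the generative process of a Poisson linear dynamical system (one standard point-process state space model) can be sketched as follows. The dynamics matrix, loading matrix, and dimensions below are arbitrary choices for illustration, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative Poisson linear dynamical system: a low-dimensional linear
# latent state drives Poisson spike counts through an exponential link.
T, N, d = 200, 50, 2
A = np.array([[0.99, -0.10],
              [0.10,  0.99]])               # slowly rotating latent dynamics
C = 0.3 * rng.standard_normal((N, d))       # loading from latents to neurons
b = -1.5 * np.ones(N)                       # baseline log firing rates

x = np.zeros((T, d))                        # latent trajectory
y = np.zeros((T, N), dtype=int)             # observed spike counts
for t in range(1, T):
    # Latent state update with small Gaussian innovations.
    x[t] = A @ x[t - 1] + 0.1 * rng.standard_normal(d)
for t in range(T):
    # Point-process observations: counts are Poisson with positive rates.
    y[t] = rng.poisson(np.exp(C @ x[t] + b))
```

Fitting A, C, and b from y alone is the hard part: the likelihood is non-convex in the parameters, which is why EM-style fits can get stuck in local minima.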
We attempt to combine the computational tractability of PCA and related methods with the statistical richness of state space models. Our approach is convex, building on recent advances in system identification using nuclear norm minimization, a relaxation of matrix rank minimization. Our contribution is threefold: 1) low-dimensional subspaces can be accurately recovered, even when the dynamics are unknown and nonstationary; 2) spectral methods can faithfully recover the parameters of state space models when applied to data projected into the recovered subspace; and 3) low-dimensional common inputs can be separated from sparse local interactions, suggesting that these techniques could be useful for inferring synaptic connectivity.
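The key building block of nuclear norm minimization is singular value soft-thresholding, the proximal operator of the nuclear norm. The toy below recovers a low-rank matrix from a noisy observation; the matrix sizes, noise level, and threshold are arbitrary stand-ins chosen for the example, and this is a generic sketch of the relaxation rather than the paper's actual estimator.

```python
import numpy as np

def svd_soft_threshold(Y, tau):
    """Proximal operator of the nuclear norm: shrink singular values by tau,
    zeroing out the small ones. This is the convex surrogate for reducing
    matrix rank that nuclear norm minimization is built on."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(2)

# Noisy observation of a rank-2 matrix (a toy stand-in for a matrix of
# log firing rates driven by a 2-dimensional latent state).
L = rng.standard_normal((80, 2)) @ rng.standard_normal((2, 60))
Y = L + 0.1 * rng.standard_normal((80, 60))

# Soft-thresholding kills the small noise singular values while leaving
# the two large signal singular values essentially intact.
X = svd_soft_threshold(Y, tau=2.0)
rank_hat = np.linalg.matrix_rank(X)
```

Because the nuclear norm is convex, estimators built from this operator avoid the local-minima problems of approximate EM, and the recovered rank (here, the number of surviving singular values) does not need to be fixed in advance.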