Friday, May 24, 2013

Daryl Hochman: May 29th

Title: Optical Imaging Data Acquired From the Human Brain

Abstract: The amount of light absorbed and scattered by brain tissue is altered by neuronal activity. Imaging of “intrinsic optical signals” (ImIOS) is the technique of mapping these dynamic optical changes with high spatial and temporal resolution. ImIOS of the exposed brains of awake patients, performed during their neurosurgical treatment for intractable epilepsy, has unique advantages for studying certain aspects of the human brain. There are at least two reasons to develop better methods for the analysis and visualization of ImIOS data. First, ImIOS can be used to investigate basic biological questions concerning the regulation of blood flow in the human brain during normal and epileptic activity. Second, optical imaging has the potential to be a practical clinical tool for localizing functional and epileptic brain regions in the operating room. My talk will focus on explaining the types of questions that can be investigated with optical imaging of the human brain, and on illustrating the spatial and temporal features of these data that could benefit from better methods for visualization and analysis.

A couple of relevant references:
1) https://www.ncbi.nlm.nih.gov/pubmed/21640137
2) https://www.ncbi.nlm.nih.gov/pubmed/1495561


Wednesday, May 15, 2013


Lars Buesing: May 22nd

Title: Dynamical System Models for Characterizing Multi-Electrode Recordings of Cortical Population Activity

Abstract: Multi-electrode techniques now make it possible to record from up to hundreds of cortical neurons simultaneously, and thus open the door to unprecedented insights into cortical neural population activity and the associated computations. However, in order to exploit this potential we need computationally tractable statistical methods that are able to see beyond the noise and variability of individual neurons to the structured activity that underlies reliable population computation. Such methods will very likely depend on analyzing the activity of the ensemble as a whole, rather than on simple single-neuron or pairwise analyses. In this talk I will argue that Dynamical System models, and more specifically Linear Dynamical Systems with Poisson observations (PLDS), meet these desiderata, while at the same time providing a parsimonious, statistically accurate description of the data. I will present a fast, robust algorithm for fitting PLDS models, which is based on spectral subspace methods. This algorithm substantially improves over standard approximate Expectation-Maximization for PLDS models in terms of both computational efficiency and the quality of the estimated parameters, hence greatly facilitating the application of these models to real multi-electrode recordings. Finally, I will show how Dynamical System models can be used to characterize fundamental dynamical properties of multi-electrode recordings from motor areas of awake, behaving macaque monkeys. This analysis reveals that different epochs of task-relevant behavior manifest themselves in different dynamics of the recorded neural population.
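
As a point of reference, here is a minimal generative sketch of a PLDS model of the kind described above: linear Gaussian latent dynamics driving Poisson spike counts through a log-linear link. All dimensions, parameter values, and variable names below are illustrative assumptions, not details from the talk.

    # Minimal generative sketch of a Poisson Linear Dynamical System (PLDS).
    # Dimensions and parameter values are illustrative assumptions only.
    import numpy as np

    rng = np.random.default_rng(0)
    d, n, T = 3, 50, 200                    # latent dimension, neurons, time bins

    A = 0.95 * np.linalg.qr(rng.standard_normal((d, d)))[0]   # stable latent dynamics
    Q = 0.1 * np.eye(d)                                        # innovation covariance
    C = rng.standard_normal((n, d)) / np.sqrt(d)               # loading matrix
    b = np.log(0.02) * np.ones(n)                              # baseline log-rates per bin

    x = np.zeros((T, d))
    y = np.zeros((T, n), dtype=int)
    for t in range(T):
        prev = x[t - 1] if t > 0 else np.zeros(d)
        x[t] = A @ prev + rng.multivariate_normal(np.zeros(d), Q)   # latent state update
        y[t] = rng.poisson(np.exp(C @ x[t] + b))                    # Poisson counts, log link

    print(y.sum(axis=0)[:10])               # total spike counts of the first 10 neurons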

Saturday, May 11, 2013


David Pfau: May 15th


Title: Robust Learning of Low-Dimensional Dynamics from Large Neural Ensembles

Abstract:  Progress in neural recording technology has made it possible to record spikes from ever larger populations of neurons. To cope with this deluge, a common strategy is to reduce the dimensionality of the data, most commonly by principal component analysis (PCA). In recent years a number of extensions to PCA have been introduced in the neuroscience literature, including jPCA and demixed principal component analysis (dPCA). A downside of these methods is that they do not treat either the discrete nature of spike data or the positivity of firing rates in a statistically principled way. In fact it is common practice to smooth the data substantially or average over many trials, losing information about fine temporal structure and inter-trial variability.
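
For concreteness, the smoothing-plus-PCA pipeline alluded to above might look roughly like the sketch below; the toy data, smoothing width, and number of components retained are assumptions made only for illustration.

    # Sketch of the common smoothing + PCA pipeline for binned spike counts.
    # 'counts' is a (time bins x neurons) array; here it is simulated toy data.
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    rng = np.random.default_rng(1)
    counts = rng.poisson(0.1, size=(1000, 80))                 # 1000 bins, 80 neurons

    smoothed = gaussian_filter1d(counts.astype(float), sigma=5, axis=0)  # temporal smoothing
    centered = smoothed - smoothed.mean(axis=0)                # subtract each neuron's mean

    # PCA via SVD: rows of Vt are principal directions across neurons
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    latents = centered @ Vt[:3].T                              # project onto top 3 PCs
    print(latents.shape)                                       # (1000, 3)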

A more principled approach is to fit a state space model directly from spike data, where the latent state is low dimensional. Such models can account for the discreteness of spikes by using point-process models for the observations, and can incorporate temporal dependencies into the latent state model. State space models can include complex interactions such as switching linear dynamics and direct coupling between neurons. These methods have drawbacks too: they are typically fit by approximate EM or other methods that are prone to local minima, the number of latent dimensions must be chosen ahead of time (though nonparametric Bayesian models could avoid this issue), and a certain class of possible dynamics must be chosen before doing dimensionality reduction.

We attempt to combine the computational tractability of PCA and related methods with the statistical richness of state space models. Our approach is convex and based on recent advances in system identification using nuclear norm minimization, a relaxation of matrix rank minimization. Our contribution is threefold. 1) Low-dimensional subspaces can be accurately recovered, even when the dynamics are unknown and nonstationary. 2) Spectral methods can faithfully recover the parameters of state space models when applied to data projected into the recovered subspace. 3) Low-dimensional common inputs can be separated from sparse local interactions, suggesting that these techniques could be useful for inferring synaptic connectivity.
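
One way a convex program of this flavor might be written down is sketched below, using a generic convex solver: a Poisson log-likelihood with a nuclear-norm penalty that encourages the matrix of log-rates to be approximately low rank. The exact objective, penalty placement, and parameter values here are assumptions for illustration, not necessarily the formulation presented in the talk.

    # Illustrative convex program: Poisson negative log-likelihood plus a
    # nuclear-norm penalty on the matrix of log-rates (a convex surrogate for rank).
    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(2)
    T, n = 60, 15
    y = rng.poisson(1.0, size=(T, n))        # toy spike counts

    Theta = cp.Variable((T, n))              # log-rate matrix (time bins x neurons)
    lam = 5.0                                # penalty weight (assumed value)
    nll = cp.sum(cp.exp(Theta) - cp.multiply(y, Theta))   # Poisson NLL up to constants
    cp.Problem(cp.Minimize(nll + lam * cp.normNuc(Theta))).solve()

    # Leading singular values of the recovered log-rate matrix
    print(np.round(np.linalg.svd(Theta.value, compute_uv=False)[:5], 3))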

Thursday, May 2, 2013

Suraj Keshri: May 8th


Title: Inferring Neural Connectivity

Abstract: Advances in large-scale multineuronal recordings have made it possible to study the simultaneous activity of complete ensembles of neurons. These techniques in principle provide the opportunity to discern the architecture of neuronal networks. However, current technologies can sample only a small fraction of the underlying circuitry, so unmeasured neurons probably have a large collective impact on network dynamics and coding properties. For example, it is well understood that common input plays an essential role in the interpretation of pairwise cross-correlograms. Inferring the correct connectivity and computations in the circuit therefore requires modelling tools that account for unrecorded neurons. We develop a model for fast inference of neural connectivity under the constraint that we only observe a subset of neurons in the population at a time.
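
To illustrate the common-input point above, the toy simulation below drives two unconnected model neurons with the same unobserved input; their cross-correlogram then peaks at zero lag even though there is no direct connection. All parameter values are assumptions chosen for illustration.

    # Two unconnected Poisson neurons sharing an unobserved common input.
    # Their cross-correlogram peaks at zero lag, mimicking a direct connection.
    import numpy as np

    rng = np.random.default_rng(3)
    T = 20000                                # time bins
    common = rng.standard_normal(T)          # shared, unrecorded input
    s1 = rng.poisson(np.exp(-3.0 + common))  # log-linear rates, no coupling between neurons
    s2 = rng.poisson(np.exp(-3.0 + common))

    lags = np.arange(-20, 21)
    xcorr = [np.mean(s1[20:-20] * np.roll(s2, -l)[20:-20]) for l in lags]
    print("cross-correlogram peak at lag:", lags[int(np.argmax(xcorr))])   # ~0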