Tuesday, February 18, 2014

Ran He: February 26th

Title: Estimation of Exponential Random Graph Models for Large Social Networks via Graph Limits

Analyzing and modeling network data have become increasingly important in a wide range of scientific fields. Among popular models, exponential random graph models (ERGMs) have been developed to study these complex networks. For large networks, however, maximum likelihood estimation (MLE) of the parameters of these models can be very difficult because of the intractable normalizing constant. Alternative strategies based on Markov chain Monte Carlo (MCMC) draw samples to approximate the likelihood, which is then maximized to obtain the MLE, but these strategies can converge poorly due to model degeneracy. Chatterjee and Diaconis (2013) proposed a new theoretical framework for estimating ERGM parameters that approximates the normalizing constant using an emerging tool from graph theory -- graph limits.

In this presentation, I will give a brief introduction to graph limit theory as well as Chatterjee and Diaconis's theoretical framework. I will also present our own work: a complete computational procedure built upon their results, with practical innovations. More specifically, we evaluate the likelihood via a simple-function approximation of the corresponding ERGM's graph limit and iteratively maximize the likelihood to obtain the MLE. We also propose a new matching method for finding a starting point for the iterative algorithm. Through simulation studies and real-data analyses of two large social networks, we show that the new method outperforms MCMC-based methods, especially when the network is large.
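To give a flavor of the graph-limit approach, here is a toy sketch (my own illustration, not the speaker's actual procedure) for the edge-triangle ERGM. If the optimizing graph limit is approximated by the simplest possible step function -- a constant graphon u -- the Chatterjee-Diaconis variational formula for the log-normalizing constant reduces to a one-dimensional maximization over u, which a grid search can solve:

```python
import numpy as np

def log_normalizer_onestep(beta_edge, beta_tri, grid=10001):
    """Approximate the scaled ERGM log-normalizing constant for the
    edge-triangle model, assuming the optimal graph limit is a
    *constant* graphon u (a one-block step function).

    Under that assumption the Chatterjee-Diaconis variational formula
    becomes:  sup_u  beta1*u + beta2*u**3 - I(u)/2,
    where u and u**3 are the edge and triangle homomorphism densities
    of a constant graphon and I(u) = u*log(u) + (1-u)*log(1-u).
    """
    u = np.linspace(1e-9, 1 - 1e-9, grid)
    entropy = u * np.log(u) + (1 - u) * np.log(1 - u)
    obj = beta_edge * u + beta_tri * u**3 - 0.5 * entropy
    k = np.argmax(obj)
    return obj[k], u[k]

# With both parameters zero the model is Erdos-Renyi with density 1/2,
# and the optimizer recovers u = 0.5.
psi, u_star = log_normalizer_onestep(0.0, 0.0)
```

The actual procedure described in the talk optimizes over richer step functions (graphons with several blocks); the one-block case above only shows why the graph-limit approximation turns an intractable sum over graphs into a low-dimensional optimization.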

Monday, February 10, 2014

David Pfau: February 19th

Title: Learning Dynamics and Identifying Neurons in Large Neural Populations

We are entering an age in which scientists routinely record from thousands of neurons in a single experiment. Analyzing these data presents a challenge both for scaling existing algorithms and for designing new ones suited to the increased complexity. I will discuss two projects aimed at addressing these problems. First, I will discuss joint work with Eftychios Pnevmatikakis on learning low-dimensional dynamical systems with GLM outputs. Our approach combines a nuclear-norm regularizer on the dimension of the state space with a generalized linear model output, which makes it possible to recover neural trajectories directly from unsmoothed spike trains, even in the presence of strong rectifying nonlinearities. Second, I will discuss joint work with Misha Ahrens and Jeremy Freeman on automatically identifying regions of interest (ROIs) in whole-brain calcium recordings. We have developed an ROI-detection pipeline that scales to the very large datasets made possible by light-sheet microscopy and runs on a single GPU-enabled desktop. We automatically extract more than 2,000 ROIs from whole-brain spontaneous activity in the larval zebrafish, which is to our knowledge the largest number of ROIs extracted from a single calcium imaging experiment by a fully automated, activity-based method. Applying our nuclear-norm dimensionality-reduction technique to the extracted firing rates, we find patterns of activity that reflect population-level activity more accurately than PCA does.
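Nuclear-norm regularization, mentioned in the first project, is commonly handled with proximal methods, whose core step is soft-thresholding of singular values. The sketch below (a generic illustration with names of my own choosing, not the speakers' code) shows that step applied to a synthetic noisy low-rank "firing rate" matrix:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of tau * ||.||_*.
    Shrinks every singular value of M toward zero by tau, zeroing the small
    ones, which encourages a low-rank solution."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Synthetic example: a rank-1 "signal" observed through additive noise.
rng = np.random.default_rng(0)
L = np.outer(rng.standard_normal(50), rng.standard_normal(30))  # rank-1 signal
X = L + 0.1 * rng.standard_normal((50, 30))                      # noisy observation
L_hat = svt(X, tau=2.0)  # low-rank estimate of the signal
```

In a full algorithm this thresholding step would be iterated inside a gradient or splitting scheme (e.g., proximal gradient descent) together with the GLM likelihood term described in the abstract.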

Monday, February 3, 2014

Michael Long: February 12th

Title: Understanding how motor sequences are represented in the brain: The search for a chronotopic map

Jeff Seely: February 5th

Title: State-space models for cortical-muscle transformations