Wednesday, December 10, 2014

Moritz Deger: Dec 17th

Dynamics and estimation of large-scale spiking neuronal network models

Computations in neural circuits emerge from the interaction of many neurons. Although accurate single neuron models exist, little is known about the synaptic connectivity of large-scale circuits of neurons. However, recent experimental techniques enable recordings of the activity of thousands of neurons in parallel. Such data hold the promise that inferences about the underlying synaptic connectivity can be made. I will present two complementary approaches to gain insights into the collective dynamics of neural circuits.
On the one hand, I will report on recent progress in the reconstruction of networks of 1000 simulated spiking neurons by maximum likelihood estimation of a generalized linear model (GLM), in which a million possible synaptic efficacies have to be determined. The work demonstrates that reconstructing the connectivity of thousands of neurons is feasible, and that hidden embedded subpopulations can be detected in the reconstructed connectivity.
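
As a rough illustration of the estimation problem (a minimal sketch under simplifying assumptions, not the method or code used in this work), maximum-likelihood fitting of coupling weights in a discrete-time spiking GLM might look like the following; the exponential link, Poisson spike counts, and one-bin coupling filters are my simplifications, and the full million-weight problem additionally needs sparsity priors and careful optimization.

```python
import numpy as np

def fit_glm_couplings(S, n_iter=200, lr=1e-3):
    """Maximum-likelihood estimation of GLM coupling weights (illustrative sketch).

    S : (T, N) array of spike counts (rows = time bins, columns = neurons).
    Simplified model: neuron i fires in bin t with Poisson rate
        lam_i[t] = exp(b[i] + sum_j W[j, i] * S[t-1, j]),
    i.e. couplings act with a one-bin delay. Returns bias b and weights W.
    """
    T, N = S.shape
    X = S[:-1]          # (T-1, N) presynaptic activity
    Y = S[1:]           # (T-1, N) postsynaptic spikes
    b = np.zeros(N)
    W = np.zeros((N, N))
    for _ in range(n_iter):
        lam = np.exp(b + X @ W)        # (T-1, N) conditional intensities
        err = Y - lam                  # gradient of the Poisson log-likelihood
        b += lr * err.sum(axis=0)
        W += lr * X.T @ err            # ascend the log-likelihood
    return b, W
```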

On the other hand, spiking GLMs are a versatile class of single neuron models that can represent important features such as intrinsic stochasticity, refractoriness, and spike-frequency adaptation. For this class of models, I will present a new theory of population dynamics, which takes finite-size fluctuations into account and accurately describes the population-averaged neural activity over time, in response to arbitrary stimuli. Based on this theory, GLM network models with parameters extracted from data can be used to understand neural processing in realistic networks and to explain circuit-level information processing.

Post-NIPS special discussion: Dec 16th 2-3:30pm

Thursday, November 27, 2014

Ferran Diego: Dec 3rd

Identifying Neuronal Activity from Calcium Imaging Sequences


Calcium imaging is an increasingly popular technique for simultaneously monitoring the neuronal activity of hundreds of cells at single-cell resolution. This makes it an essential tool for studying spatio-temporal patterns of distributed activity that are crucial determinants of behavioral and cognitive functions such as perception, memory formation, motor activity, decision making and emotion. However, most approaches focus only on identifying the position of each cell (or parts of cells), either by eye or semi-automatically. In this talk, we therefore present two approaches for automatically detecting neural activity. The first approach formulates the identification of single-cell neuronal activity and of neuronal co-activation within the same framework. That is, we propose a decomposition of the matrix of observations into a product of more than two sparse matrices, with the rank decreasing from lower to higher levels, driven by the semantic hierarchy pixel -> neuron -> assembly in neuroscience image sequences. In contrast to prior work, we allow for both hierarchical and heterarchical relations of lower-level to higher-level concepts.
Moreover, the proposed bilevel SHMF (sparse heterarchical matrix factorization) is the first formalism that allows one to simultaneously interpret a calcium imaging sequence in terms of the constituent neurons, their membership in assemblies, and the time courses of both neurons and assemblies. The second approach describes a unified formulation and algorithm to find an extremely sparse representation for calcium image sequences in terms of cell locations, cell shapes, spike timings and impulse responses. Solution of a single optimization problem yields cell segmentations and activity estimates that are on par with the state of the art, without the need for heuristic pre- or postprocessing.
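
Schematically (in my notation, not necessarily that of the paper), the hierarchical part of the decomposition factors the pixels-by-time data matrix through successively lower-rank, sparse levels, e.g.

```latex
\underbrace{Y}_{\text{pixels}\times\text{time}}
\;\approx\;
\underbrace{A_1}_{\text{pixels}\times\text{neurons}}\,
\underbrace{A_2}_{\text{neurons}\times\text{assemblies}}\,
\underbrace{A_3}_{\text{assemblies}\times\text{time}},
\qquad
\min_{A_k \ge 0}\ \|Y - A_1 A_2 A_3\|_F^2 + \sum_k \lambda_k \|A_k\|_1 ,
```

so that A_1 carries the spatial footprints, A_2 the (possibly heterarchical) neuron-to-assembly memberships, and A_2 A_3 and A_3 the time courses of neurons and assemblies, respectively.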

Saturday, November 15, 2014

Franck Polleux: Nov 19th

The talk was canceled

Wednesday, November 5, 2014

Ethan S. Bromberg-Martin: Nov 12th

What does information seeking tell us about reinforcement learning?

Conventional theories of reinforcement learning explain how we choose actions to gain rewards, but we also often choose actions to help us predict rewards. This behavior is known as information seeking (or 'early resolution of uncertainty') in economics and a form of ‘observing behavior’ in psychology, and is found in both humans and animals. We recently showed that the preference to gather information about future rewards is signaled by many of the same neurons that signal preferences for appetitive rewards like food and water. This suggests that information seeking and conventional reward seeking share a common neural mechanism.
At the moment, we know very little about the nature of these neural computations. A major roadblock is theoretical: most prominent theories of reinforcement learning were originally designed to account for appetitive reward seeking and are unable to account for information seeking. How can we address this gap in our theories? I will summarize the state of the field, including my own work and that of others, and use this to propose ways that we can revise current theories of reinforcement learning to account for information seeking.

Monday, November 3, 2014

Sundeep Rangan: November 5th

Approximate Message Passing for Inference in Generalized Linear Models

Generalized approximate message passing (GAMP) methods are a powerful new class of inference algorithms designed for generalized linear models, where an input vector x must be estimated from noisy, possibly nonlinear observations of a linear transform z = Ax. The methods are based on Gaussian approximations of loopy belief propagation (BP) and have the benefit of being computationally extremely simple and general. Moreover, under certain large random transforms, the algorithms are provably Bayes optimal, even in many non-convex problem instances. In this talk, I will provide an overview of GAMP methods and some recent extensions to unknown priors and structured uncertainty. I will also highlight some of the main issues in convergence of the algorithm and discuss some applications in neural connectivity detection from calcium imaging.
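
For intuition, here is a simplified special case (a sketch of AMP for a linear Gaussian output channel with a soft-thresholding denoiser, not the full GAMP algorithm); the Onsager correction term in the residual update is what distinguishes it from plain iterative thresholding, and the threshold schedule below is an ad hoc choice for illustration.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding denoiser."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def amp_lasso(A, y, thresh=1.0, n_iter=50):
    """Simplified AMP for y = A x + noise with a sparse x (LASSO-style).

    GAMP generalizes this scheme to arbitrary separable priors on x and
    arbitrary (possibly nonlinear) output channels p(y | z), z = A x.
    """
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(n_iter):
        # effective noise level estimated from the residual
        sigma = np.linalg.norm(z) / np.sqrt(m)
        x_new = soft(x + A.T @ z, thresh * sigma)
        # Onsager correction: previous residual times the average
        # derivative of the denoiser (fraction of surviving coefficients)
        onsager = (z / m) * np.count_nonzero(x_new)
        z = y - A @ x_new + onsager
        x = x_new
    return x
```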

Bio:
Dr. Rangan received the B.A.Sc. at the University of Waterloo, Canada, and the M.Sc. and Ph.D. at the University of California, Berkeley, all in Electrical Engineering.  He has held postdoctoral appointments at the University of Michigan, Ann Arbor and Bell Labs.  In 2000, he co-founded (with four others) Flarion Technologies, a spin-off of Bell Labs that developed Flash OFDM, the first cellular OFDM data system and precursor to many 4G wireless technologies.  In 2006, Flarion was acquired by Qualcomm Technologies.  Dr. Rangan was a Director of Engineering at Qualcomm involved in OFDM infrastructure products.  He joined the ECE department at the NYU Polytechnic School of Engineering in 2010.  He is an IEEE Distinguished Lecturer of the Vehicular Technology Society. His research interests are in wireless communications, signal processing, information theory and control theory.

Monday, October 20, 2014

Will Fithian: October 22nd

Optimal Inference After Model Selection

To perform inference after model selection, we propose controlling the selective type I error; i.e., the error rate of a test given that it was performed. By doing so, we recover long-run frequency properties among selected hypotheses analogous to those that apply in the classical (non-adaptive) context. Our proposal is closely related to data splitting and has a similar intuitive justification, but is more powerful. Exploiting the classical theory of Lehmann and Scheffe (1955), we derive most powerful unbiased selective tests and confidence intervals for inference in exponential family models after arbitrary selection procedures. For linear regression, we derive new selective z-tests that generalize recent proposals for inference after model selection and improve on their power, and new selective t-tests that do not require knowledge of the error variance.

This is joint work with Dennis Sun and Jonathan Taylor, available online at http://arxiv.org/abs/1410.2597
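
The central definition is simple to state: for a hypothesis H_0 that is only tested when some selection event occurs, the selective type I error conditions on that event,

```latex
\Pr_{H_0}\bigl(\text{reject } H_0 \,\big|\, H_0 \text{ selected}\bigr) \;\le\; \alpha ,
```

so that, among the hypotheses one actually ends up testing, false rejections occur at rate at most alpha in the long run.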

Friday, October 10, 2014

Dean Freestone: October 15th

Data-Driven Mean Field Neural Modeling

Abstract: This research provides an overview of new methods for functional brain mapping via a process of model inversion. By estimating parameters of a computational model, we demonstrate a method for tracking functional connectivity and other parameters that influence neural dynamics. The estimation results provide an imaging modality of neural processes that cannot be directly measured using electrophysiological measurements alone.
The method is based on approximating brain networks using an interconnected neural mass model. Neural mass models describe the functional activity of the brain from a top-down perspective, capturing important experimental phenomena. The models can be related to biology through lumped quantities, where, for example, resting membrane potentials, reversal potentials and firing thresholds are all lumped into one parameter. The lumping of parameters is the result of a trade-off between biological realism, where insights into brain mechanisms can still be gained, and parsimony, where models can be inverted and fit to patient-specific data.
The ability to track the hidden aspects of neurophysiology will have a profound impact on the way we understand and treat epilepsy. For example, the framework will provide insights into seizure initiation and termination on a patient-specific basis. It will enable investigation into the effect a particular drug has on specific neural populations and connectivity structures using minimally invasive measurements.
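
For reference, a generic neural mass form (not necessarily the exact equations estimated in this work) models each population's mean membrane potential v as a synaptically filtered version of the presynaptic firing rate, with an alpha-function kernel and a sigmoidal rate function:

```latex
\tau^2\,\ddot{v}(t) + 2\tau\,\dot{v}(t) + v(t) \;=\; \alpha\,\tau\, g\!\bigl(v_{\mathrm{pre}}(t)\bigr),
\qquad
g(v) \;=\; \frac{g_{\max}}{1 + \exp\!\bigl(-\varsigma\,(v - v_0)\bigr)},
```

where the lumped parameters (synaptic gain alpha, time constant tau, firing threshold v_0, sigmoid slope) are the kinds of quantities the estimation framework tracks from data.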

Bio: Dr Freestone is currently a Senior Research Fellow in the Department of Medicine for St. Vincent’s Hospital at the University of Melbourne, Australia, and a Fulbright Post-Doctoral Scholar at Columbia University, USA. He previously held a postdoctoral position in the NeuroEngineering Research Group at the University of Melbourne. He completed his PhD at the University of Melbourne, Australia, and the University of Edinburgh, UK. His work has focused on developing methods for epileptic seizure prediction and control.

Friday, October 3, 2014

Ran Rubin: October 8th


Supervised Learning and Support Vectors for Deterministic Spiking Neurons

To signal the onset of salient sensory features or execute well-timed motor sequences, neuronal circuits must transform streams of incoming spike trains into precisely timed firing. In this talk I will investigate the efficiency and fidelity with which neurons can perform such computations. I'll present a theory that characterizes the capacity of feedforward networks to generate desired spike sequences and discuss its results and implications. Additionally, I'll present the Finite Precision algorithm: a biologically plausible learning rule that allows feedforward and recurrent networks to learn multiple mappings between inputs and desired spike sequences with preassigned required precision. This framework can be applied to reconstruct synaptic weights from spiking activity. Time permitting, I'll present further theoretical developments that extend the concept of 'large-margin' to dynamical systems with event based outputs, such as spiking neural networks. These extensions allow us to define optimal solutions that implement the required input-output transformation in a robust manner and open the way for incorporating dynamic, non-linear, spatio-temporal integration through the use of the kernel method.

Tuesday, September 9, 2014

Roy Fox: September 24th

Optimal Selective Attention and Action in Reactive Agents


Intelligent agents, interacting with their environment, operate under constraints on what they can observe and how they can act. Unbounded agents can use standard Reinforcement Learning to optimize their inference and control under purely external constraints. Bounded agents, on the other hand, are subject to internal constraints as well. This only allows them to partially notice their observations, and to partially intend their actions, requiring rational selection of attention and action.

In this talk we will see how to find the optimal information-constrained policy in reactive (memoryless) agents. We will discuss a number of reasons why internal constraints are often best modeled as bounds on information-theoretic quantities, and why we can focus on reactive agents with hardly any loss of generality. We will link the solution of the constrained problem to that of soft clustering, and present some of its nice properties, such as principled dimensionality reduction.
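
In its simplest one-step form (a schematic statement of the standard result, in my notation), the problem and its soft-clustering solution look like:

```latex
\max_{\pi(a\mid o)}\ \mathbb{E}\bigl[R(o,a)\bigr]
\quad\text{s.t.}\quad I(O;A) \le C
\qquad\Longrightarrow\qquad
\pi^*(a\mid o) \;\propto\; \pi^*(a)\,\exp\!\bigl(\beta\,R(o,a)\bigr),
```

where the inverse temperature beta is the Lagrange multiplier of the information constraint and pi*(a) is the marginal of the optimal policy itself, so the solution must be found self-consistently, much as in Blahut-Arimoto style rate-distortion algorithms.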

Sunday, September 7, 2014

Søren Hauberg: September 10th

Grassmann Averages for Scalable Robust PCA 


As the collection of large datasets becomes increasingly automated, the occurrence of outliers will increase -- or in terms of buzzwords: "big data implies big outliers". While principal component analysis (PCA) is often used to reduce the size of data, and scalable solutions exist, it is well-known that outliers can arbitrarily corrupt the results. Unfortunately, state-of-the-art approaches for robust PCA do not scale beyond small-to-medium sized datasets. To address this, we introduce the Grassmann Average (GA), which expresses dimensionality reduction as an average of the subspaces spanned by the data. Because averages can be efficiently computed, we immediately gain scalability. GA is inherently more robust than PCA, but we show that they coincide for Gaussian data. We exploit that averages can be made robust to formulate the Robust Grassmann Average (RGA) as a form of robust PCA. Robustness can be with respect to vectors (subspaces) or elements of vectors; we focus on the latter and use a trimmed average. The resulting Trimmed Grassmann Average (TGA) is particularly appropriate for computer vision because it is robust to pixel outliers. The algorithm has low computational complexity and minimal memory requirements, making it scalable to "big noisy data." We demonstrate TGA for background modeling, video restoration, and shadow removal. We show scalability by performing robust PCA on the entire Star Wars IV movie; a task beyond any currently existing method.

Work in collaboration with Aasa Feragen (DIKU) and Michael J. Black (MPI-IS).
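
As I read the construction (a sketch under that reading, not the authors' reference implementation), the leading one-dimensional Grassmann average of zero-mean data is obtained by iteratively averaging sign-aligned, norm-weighted unit vectors; the trimmed variant replaces the mean with a per-element trimmed mean, and further components follow by deflation.

```python
import numpy as np

def grassmann_average(X, n_iter=100, seed=0):
    """Leading 1D Grassmann average of zero-mean data X (n_samples, n_dims).

    Each sample contributes the subspace it spans, weighted by its norm;
    the loop averages unit directions after flipping signs to align them
    with the current estimate q (a sketch of the GA idea, not reference code).
    """
    rng = np.random.default_rng(seed)
    norms = np.linalg.norm(X, axis=1)
    U = X / norms[:, None]                 # unit directions
    q = rng.standard_normal(X.shape[1])
    q /= np.linalg.norm(q)
    for _ in range(n_iter):
        signs = np.sign(U @ q)             # align each direction with q
        signs[signs == 0] = 1.0
        q_new = (signs * norms) @ U        # weighted average of aligned vectors
        q_new /= np.linalg.norm(q_new)
        if np.allclose(q_new, q):
            break
        q = q_new
    return q
```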

Friday, August 29, 2014

Brian London: September 2nd

Characterizing the oscillatory neural dynamics of voluntary motion

Monday, July 7, 2014

Daniel Soudry: July 9th

We will discuss the following book chapter:

F. Bach, R. Jenatton, J. Mairal and G. Obozinski. Convex optimization with sparsity-inducing norms. In S. Sra, S. Nowozin, S. J. Wright., editors, Optimization for Machine Learning, MIT Press, 2011.

http://www.di.ens.fr/~fbach/opt_book.pdf

Monday, June 30, 2014

Vamsi Krishna Potluru: July 2nd

Efficient Sparse NMF for fMRI data analysis

Nonnegative matrix factorization (NMF) has become a ubiquitous tool for data analysis. An important variant is the sparse NMF problem, which arises when we explicitly require the learnt features to be sparse. A natural measure of sparsity is the L0 norm, but its optimization is NP-hard. Mixed norms, such as the L1/L2 measure, have been shown to model sparsity robustly, based on intuitive attributes that such measures need to satisfy; this is in contrast to computationally cheaper alternatives such as the plain L1 norm. However, present algorithms designed for optimizing the mixed norm L1/L2 are slow, and other formulations for sparse NMF have been proposed, such as those based on the L1 and L0 norms. Our proposed algorithm allows us to solve the mixed-norm sparsity constraints without sacrificing computation time. We present experimental evidence on real-world datasets showing that our new algorithm performs an order of magnitude faster than the current state-of-the-art solvers optimizing the mixed norm and is suitable for large-scale datasets [1]. Recently, its computational efficiency has been exploited to evaluate the sparse NMF model for fMRI analysis [2], where the authors show that the sparse NMF model is competitive with other state-of-the-art matrix factorization methods such as ICA, sparse PCA and even restricted Boltzmann machines.
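
For context, the L1/L2 sparsity measure and the plain multiplicative NMF updates that sparse NMF builds on are easy to state (an illustrative sketch, not the algorithm of [1], which enforces the sparsity constraint exactly and efficiently):

```python
import numpy as np

def hoyer_sparsity(x):
    """L1/L2 sparsity of Hoyer: 0 for a flat vector, 1 for a 1-sparse vector."""
    n = x.size
    return (np.sqrt(n) - np.linalg.norm(x, 1) / np.linalg.norm(x, 2)) / (np.sqrt(n) - 1)

def nmf_multiplicative(V, rank, n_iter=200, eps=1e-9, seed=0):
    """Plain NMF (V ~ W H, all nonnegative) via Lee-Seung multiplicative updates.

    Sparse NMF additionally constrains hoyer_sparsity of the columns of W
    (or rows of H) to a target value; the talk concerns doing so efficiently.
    """
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank))
    H = rng.random((rank, m))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```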

Links: 

Thursday, June 19, 2014

Winrich Freiwald: June 25th

Faces, Attention, and the Temporal Lobe

Friday, May 30, 2014

Josh Merel: June 4th

Josh will present on linear matrix inequalities and their relevance to problems in control theory.

References:
"Linear Matrix Inequalities in System and Control Theory"
"Linear Controller Design: Limits of Performance" (both by Boyd).  

Sunday, April 27, 2014

Evan Archer: April 29th

Bayesian nonparametric methods for entropy estimation in spike data

Shannon’s entropy is a basic quantity in information theory, and a useful tool for the analysis of neural codes. However, estimating entropy from data is a difficult statistical problem. In this talk, I will discuss the problem of estimating entropy in the “under-sampled regime”, where the number of samples is small relative to the number of symbols. Dirichlet and Pitman-Yor processes provide tractable priors over countably-infinite discrete distributions, and have found applications in Bayesian non-parametric statistics and machine learning. In this talk, I will show that they also provide natural priors for Bayesian entropy estimation. These nonparametric priors permit us to address two major issues with previously-proposed Bayesian entropy estimators: their dependence on knowledge of the total number of symbols, and their inability to account for the heavy-tailed distributions which abound in biological and other natural data. What’s more, by “centering” a Dirichlet Process over a flexible parametric model, we are able to develop Bayesian estimators for the entropy of binary spike trains using priors designed to flexibly exploit the statistical structure of simultaneously-recorded spike responses. Finally, in applications to simulated and real neural data, I'll show that these estimators perform well in comparison to traditional methods.
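
The basic building block is the fixed-Dirichlet case, for which the posterior mean of the entropy has a closed form; the estimators discussed in the talk go further by placing priors over the concentration parameter (and its Pitman-Yor generalization) and by centering on parametric models, so the sketch below is only the starting point.

```python
import numpy as np
from scipy.special import digamma

def dirichlet_entropy_posterior_mean(counts, alpha=1.0):
    """Posterior mean of Shannon entropy (in nats) under a symmetric Dir(alpha) prior.

    counts : array of symbol counts n_k over a known alphabet of size K.
    Uses the closed-form expression
        E[H | n] = psi(N + A + 1) - sum_k ((n_k + alpha) / (N + A)) * psi(n_k + alpha + 1),
    with N = sum_k n_k and A = K * alpha.
    """
    counts = np.asarray(counts, dtype=float)
    K = counts.size
    N, A = counts.sum(), K * alpha
    post = counts + alpha
    return digamma(N + A + 1) - np.sum((post / (N + A)) * digamma(post + 1))
```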

Thursday, April 24, 2014

Maurizio Filippone: April 30th

Pseudo-Marginal Bayesian Inference for Gaussian Processes

Statistical models where parameters have a hierarchical structure are commonly employed to flexibly model complex phenomena and to gain some insight into the functioning of the system under study.
Carrying out exact parameter inference for such models, which is key to achieving a sound quantification of uncertainty in parameter estimates and predictions, usually poses a number of computational challenges. In this talk, I will focus on Markov chain Monte Carlo (MCMC) based inference for hierarchical models involving Gaussian Process (GP) priors and non-Gaussian likelihood functions.
After discussing why MCMC is the only way to infer parameters "exactly" in general GP models, and pointing out the challenges in doing so, I will present a practical and efficient alternative to popular MCMC reparameterization techniques based on the so-called Pseudo-Marginal MCMC approach.
In particular, the Pseudo-Marginal MCMC approach yields samples from the exact posterior distribution over GP covariance parameters, but only requires an unbiased estimate of the analytically intractable marginal likelihood. Finally, I will present ways to construct unbiased estimates of the marginal likelihood in GP models, and conclude the talk by presenting results on several benchmark data and on a multi-class multiple-kernel classification problem with neuroimaging data.
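
The key mechanism is easy to state in code (a generic sketch, not the GP-specific implementation): in the Metropolis-Hastings ratio, the intractable marginal likelihood is replaced by an unbiased noisy estimate, and the estimate attached to the current state is reused rather than recomputed, which is what makes the chain target the exact posterior.

```python
import numpy as np

def pseudo_marginal_mh(log_lik_hat, log_prior, propose, theta0, n_samples, seed=0):
    """Pseudo-marginal Metropolis-Hastings (symmetric proposal assumed).

    log_lik_hat(theta) : log of an UNBIASED estimate of the marginal likelihood
                         p(y | theta), e.g. via importance sampling over the
                         GP latent function.
    log_prior(theta)   : log prior density.
    propose(theta, rng): symmetric random-walk proposal.
    """
    rng = np.random.default_rng(seed)
    theta, log_l = theta0, log_lik_hat(theta0)
    samples = []
    for _ in range(n_samples):
        theta_new = propose(theta, rng)
        log_l_new = log_lik_hat(theta_new)       # fresh noisy estimate
        log_ratio = (log_l_new + log_prior(theta_new)) - (log_l + log_prior(theta))
        if np.log(rng.random()) < log_ratio:
            theta, log_l = theta_new, log_l_new  # keep the accepted estimate
        samples.append(theta)
    return samples
```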


Useful links

http://www.dcs.gla.ac.uk/~maurizio/index.html
http://arxiv.org/abs/1310.0740
http://arxiv.org/abs/1311.7320
http://www.dcs.gla.ac.uk/~maurizio/Publications/aoas12.pdf
http://www.dcs.gla.ac.uk/~maurizio/Publications/ml13.pdf

Saturday, April 19, 2014

Patrick J. Wolfe: April 23rd

Nonparametric estimation of network structure

Networks are a key conceptual tool for analysis of rich data structures, yielding meaningful summaries in the biological as well as other sciences.  As datasets become larger, however, the interpretation of network-based summaries becomes more challenging.  A natural next step in this context is to think of modeling a network nonparametrically -- and here we will show how such an approach is possible, both in theory and in practice.  As with a histogram, nonparametric models can fully represent variation in a network, without presupposing a particular set of motifs or other distributional forms.  Advantages and limitations of the approach will be discussed, along with open problems at the methodological frontier of statistical network analysis.  Joint work with David Choi (http://arxiv.org/abs/1212.4093) and Sofia Olhede (http://arxiv.org/abs/1309.5936/, http://arxiv.org/abs/1312.5306/).
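
A toy version of the histogram analogy (an illustrative sketch in the spirit of the linked papers, not the estimators they analyze) groups nodes into blocks and records the empirical edge density between blocks, giving a piecewise-constant estimate of the underlying link function:

```python
import numpy as np

def blockwise_graphon_estimate(A, n_blocks):
    """Piecewise-constant ('network histogram') estimate from an adjacency matrix.

    Nodes are grouped into blocks by degree rank (a crude but simple choice);
    the estimate for a pair of blocks is the observed edge density between them.
    """
    n = A.shape[0]
    order = np.argsort(A.sum(axis=1))              # sort nodes by degree
    blocks = np.array_split(order, n_blocks)
    W_hat = np.zeros((n_blocks, n_blocks))
    for i, bi in enumerate(blocks):
        for j, bj in enumerate(blocks):
            sub = A[np.ix_(bi, bj)]
            if i == j:                             # exclude self-pairs within a block
                denom = len(bi) * (len(bi) - 1)
                W_hat[i, j] = (sub.sum() - np.trace(sub)) / max(denom, 1)
            else:
                W_hat[i, j] = sub.mean()
    return W_hat
```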

Friday, April 11, 2014

Uygar Sümbül: April 16th

Submicron precision in the retina: classifying the cell types of the brain

The importance of cell types in understanding brain function is widely appreciated but only a tiny fraction of neuronal diversity has been catalogued. Here, we exploit recent progress in genetic definition of cell types in an objective structural approach to neuronal classification. The approach is based on highly accurate quantification of dendritic arbor position relative to neurites of other cells. We test the method on a population of 363 mouse retinal ganglion cells. For each cell, we determine the spatial distribution of the dendritic arbors, or "arbor density" with reference to arbors of an abundant, well-defined interneuronal type. The arbor densities are sorted into a number of clusters that is set by comparison with several molecularly defined cell types. The algorithm reproduces the genetic classes that are pure types, and detects six newly clustered cell types that await genetic definition.

Thursday, April 3, 2014

Rishidev Chaudhuri: April 9th

Timescales and the large-scale organization of cortical dynamics

In the first part of this talk I will present results from a model of 29 interacting areas in the macaque cortex. We built this model by combining quantitative data on long-range projections between cortical areas with an estimate of the strength of excitatory connections within an area. These anatomical constraints naturally give rise to a hierarchy of timescales in network activity: early sensory areas respond in a moment-to-moment fashion, allowing them to track a changing environment, while cognitive areas show long timescales, potentially providing the substrate for information integration and flexible decision-making. We characterize the dependence of this hierarchy on local and long-range anatomical properties and show the existence of multiple dynamical hierarchies subserved by the same anatomical structure. The model thus demonstrates how variations in anatomical properties across the cortex can produce dynamical and functional specialization in timescales of response.

I will then describe a network model for the temporal structure of human ECoG dynamics. We find the power spectra of ECoG recordings are well-described by the output of a randomly-connected linear dynamical network with net excitatory interactions between nodes. The architecture predicts that slow fluctuations show long-range spatial correlations and that decorrelation of inputs to a network could account for observed changes in ECoG power spectra upon task initiation. It also predicts that networks with strongly local connectivity should produce power spectra that show "1/f" behavior at low-frequencies. This analysis provides mechanistic insight into emergent network dynamics, links observed changes in power spectra to particular reconfigurations of the network and could help characterize differences between cortical regions, states and subjects.
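
For the linear-network picture, the relevant standard result (stated here in my notation) is the cross-spectral matrix of a noise-driven linear system dx/dt = W x + xi(t) with input covariance Sigma:

```latex
S(\omega) \;=\; \bigl(i\omega I - W\bigr)^{-1}\,\Sigma\,\bigl(-i\omega I - W^{\mathsf T}\bigr)^{-1},
```

so the eigenvalue spectrum of the connectivity W and the spatial structure of the inputs jointly shape the low-frequency behavior and spatial correlations of the recorded power spectra.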

Saturday, March 29, 2014

Johanni Brea: April 2nd

Abstract:

Part I. Sequence learning with hidden neurons in spiking neural networks

Storing and recalling spiking sequences is a general problem the brain needs to solve. It is, however, unclear what type of biologically plausible learning rule is suited to learn a wide class of spatio-temporal activity patterns in a robust way. We consider a recurrent network of stochastic spiking neurons composed of both visible and hidden neurons. We derive a generic learning rule that is matched to the neural dynamics by minimizing an upper bound on the Kullback-Leibler divergence from the target distribution to the model distribution. The derived learning rule is consistent with Spike-Timing Dependent Plasticity in that a presynaptic spike preceding a postsynaptic spike elicits potentiation, while otherwise depression emerges. The learning rule for synapses that target hidden neurons is modulated by a global factor that can be seen as an internally computed reward signal.

Part II. Forgetting in the fruit fly: bug or feature?

Recent experiments revealed that the fruit fly Drosophila melanogaster has a dedicated mechanism for forgetting: blocking the G-protein Rac leads to slower and activating Rac to faster forgetting. This active form of forgetting lacks a satisfactory functional explanation. We investigated optimal decision making for an agent adapting to a stochastic environment where a stimulus may switch between being indicative of reward or punishment. Like Drosophila, an optimal agent shows forgetting with a rate that is linked to the time scale of changes in the environment. Moreover, to reduce the odds of missing future reward, an optimal agent may trade the risk of immediate pain for information gain and thus forget faster after aversive conditioning. A simple neuronal network reproduces these features. Our model supports the view that forgetting is adaptive rather than a consequence of limitations of the memory system.

Thursday, March 20, 2014

Sharmodeep Bhattacharyya: March 26th

Title
Statistical Inference of Features of Networks

Abstract
Analysis of stochastic models of networks is quite important in light of the huge influx of network data in the social, information and biological sciences. However, a proper statistical analysis of the features of different stochastic models of networks is still underway. We follow the nonparametric model proposed by Bickel and Chen (PNAS, 2009) and investigate the statistical properties of local features of the networks generated from such models. We consider subsampling bootstrap methods for finding the empirical distribution of count features or 'moments' (Bickel, Chen and Levina, AoS, 2011), such as the number of triangles, and of smooth functions of these moments for the networks. Using these methods, we can not only estimate the variance of count features but also obtain good estimates of the feature counts themselves, which are usually expensive to compute numerically in large networks. We derive theoretical properties of the bootstrap estimates of the count features and show their efficacy through simulation. We also investigate the behavior of a histogram estimate of a canonical version of the function characterizing the nonparametric model. Lastly, we use the methods on some real network data to answer qualitative questions about the networks.
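
A minimal version of the subsampling idea (for illustration only; the actual procedure and its normalizations are as in Bickel, Chen and Levina) repeatedly samples induced subgraphs and records a count feature such as the triangle density:

```python
import numpy as np

def triangle_count(A):
    """Number of triangles in an undirected simple graph (adjacency matrix A)."""
    return np.trace(np.linalg.matrix_power(A, 3)) / 6

def subsample_triangle_densities(A, m, n_boot=500, seed=0):
    """Bootstrap distribution of triangle density from induced subgraphs of size m."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    norm = m * (m - 1) * (m - 2) / 6          # number of possible triangles
    densities = []
    for _ in range(n_boot):
        idx = rng.choice(n, size=m, replace=False)
        sub = A[np.ix_(idx, idx)]
        densities.append(triangle_count(sub) / norm)
    return np.array(densities)
```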

Sunday, March 16, 2014

Daniel Soudry: March 19th

Title: Mean Field Bayes Backpropagation: parameter-free training of multilayer neural networks with real and discrete weights

Abstract:
Recently, Multilayer Neural Networks (MNNs) have been trained to achieve state-of-the-art results in many classification tasks. The usual goal of the training is to estimate the parameters of an MNN, its weights, so that they minimize some cost function. In theory, given a cost function, the optimal estimate can be found using the posterior over the weights given the data, which can be updated through Bayes' theorem. In practice, this Bayesian approach is intractable. To circumvent this problem, we approximate the posterior using a factorized distribution and the central limit theorem. The resulting Mean Field Bayes BackPropagation algorithm is very similar to the standard Backpropagation algorithm. However, it has several advantages: (1) Training is parameter-free, given initial conditions (prior) and the MNN architecture. This is useful for large-scale problems, where parameter tuning is a major challenge. Testing the algorithm numerically on MNIST, it achieves the same performance level as BackPropagation with the optimal constant learning rate. (2) The weights can be restricted to have discrete values. This is especially useful for implementing trained MNNs in precision-limited hardware chips, which can improve their speed and energy efficiency by several orders of magnitude, thus enabling their integration into small and low-power electronic devices. We show that on MNIST, the algorithm can be used to train MNNs with binary weights with only a mild reduction in performance, in contrast to weight quantization, which significantly increases the error.
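
At a schematic level (my paraphrase of the construction described above, not the exact update equations), the weight posterior is propagated online through Bayes' rule and repeatedly projected back onto a factorized family, with the central limit theorem used to treat each neuron's summed input as approximately Gaussian:

```latex
P(W \mid D_n) \;\propto\; P(y_n \mid x_n, W)\, P(W \mid D_{n-1}),
\qquad
P(W \mid D_n) \;\approx\; \prod_{i,j} q_n(w_{ij}),
```

which yields forward and backward passes closely mirroring standard Backpropagation, but with distributions over weights (possibly restricted to discrete support) in place of point estimates.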

Sunday, March 9, 2014

Amy Orsborn: March 12th

Title: Exploring decoder and neural adaptation in brain-machine interfaces

Abstract:
Brain-machine interfaces (BMIs) show great promise for restoring motor function to patients with motor disabilities, but significant improvements in performance are needed before they will be clinically viable. One key challenge is to improve performance such that it can be maintained for long-term use across the varied activities of daily life. BMI creates an artificial, closed-loop control system, where the subject actively contributes to performance by volitional modulation of neural activity. In this talk, I will discuss experimental work in non-human primates exploring closed-loop design of BMIs, which exploits the closed-loop and adaptive properties of BMI to improve performance and reliability. I will present a closed-loop decoder adaptation (CLDA) algorithm that can rapidly and reliably improve performance regardless of the initial decoding algorithm, which may be particularly useful for clinical applications with paralyzed patients. I will then show that this CLDA can be combined with neural adaptation to achieve and maintain skillful BMI performance across different tasks. Analyses of these data also suggest that brain-decoder interactions might be useful for shaping BMI performance. Finally, I will discuss emerging work exploring the selection of the neural signals for control and how it might influence closed-loop performance.

Tuesday, February 18, 2014

Ran He: February 26th

Title: Estimation of Exponential Random Graph Models for Large Social Networks via Graph Limits

Abstract:
Analyzing and modeling network data have become increasingly important in a wide range of scientific fields. Among popular models, exponential random graph models (ERGMs) have been developed to study these complex networks. For large networks, however, maximum likelihood estimation (MLE) of parameters in these models can be very difficult, due to the unknown normalizing constant. Alternative strategies based on Markov chain Monte Carlo draw samples to approximate the likelihood, which is then maximized to obtain the MLE. These strategies have poor convergence due to model degeneracy issues. Chatterjee and Diaconis (2013) propose a new theoretical framework for estimating the parameters of ERGM by approximating the normalizing constant using the emerging tools in graph theory -- graph limits.

In this presentation, I will give a brief introduction to graph limit theory as well as Chatterjee's theoretical framework. I will also talk about our work: a complete computational procedure built upon their results, with practical innovations. More specifically, we evaluate the likelihood via simple function approximation of the corresponding ERGM's graph limit and iteratively maximize the likelihood to obtain the MLE. We also propose a new matching method to find a starting point for our iterative algorithm. Through a simulation study and real data analysis of two large social networks, we show that our new method outperforms the MCMC-based method, especially when the network size is large.
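
For reference, an ERGM assigns each graph G on n nodes a probability through a small set of sufficient statistics T_k (edge, triangle, or star counts, say), and the difficulty is the normalizing constant, which sums over all graphs on those nodes:

```latex
p_\beta(G) \;=\; \frac{\exp\!\bigl(\sum_k \beta_k\, T_k(G)\bigr)}{\sum_{G'} \exp\!\bigl(\sum_k \beta_k\, T_k(G')\bigr)} ;
```

the graph-limit framework of Chatterjee and Diaconis approximates the log of this denominator by a variational problem over graphons, which is the quantity the function-approximation step described above evaluates.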

Monday, February 10, 2014

David Pfau: February 19th

Title: Learning Dynamics and Identifying Neurons in Large Neural Populations

Abstract:
We are entering an age where scientists routinely record from thousands of neurons in a single experiment. Analyzing this data presents a challenge both for scaling existing algorithms and for designing new ones suited to the increase in complexity. I will discuss two projects aimed at addressing these problems. First, I will discuss joint work with Eftychios Pnevmatikakis on learning low-dimensional dynamical systems with GLM outputs. Our approach combines a nuclear norm regularizer on the dimension of the state space with a generalized linear model output, which makes it possible to recover neural trajectories directly from unsmoothed spike trains, even in the presence of strong rectifying nonlinearities. Second, I will discuss joint work with Misha Ahrens and Jeremy Freeman on automatically identifying regions of interest (ROIs) from whole-brain calcium recordings. We have developed a pipeline for ROI detection that scales to the very large datasets made possible by light-sheet microscopy and that can run on a single GPU-enabled desktop. We automatically extract >2000 ROIs from whole-brain spontaneous activity in the larval zebrafish, which is to our knowledge the largest number of ROIs extracted from a single calcium imaging experiment by an activity-based, fully automated method. Applying our nuclear-norm dimensionality reduction technique to the extracted firing rates, we find patterns of activity that more accurately reflect population-level activity than PCA does.
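
Schematically (my notation; the actual formulation may differ in its details), the first project trades off a GLM/point-process likelihood of the spikes against the nuclear norm of the latent activity matrix, a convex surrogate for its rank:

```latex
\hat{X} \;=\; \arg\min_{X}\; -\sum_{t,i} \log p\bigl(y_{t,i} \mid f(x_{t,i})\bigr) \;+\; \lambda\, \|X\|_{*},
```

where y_{t,i} are the observed spike counts, f is the (possibly rectifying) GLM link, and the nuclear norm ||X||_* is the sum of singular values of X.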

Monday, February 3, 2014

Michael Long: February 12th

Title: Understanding how motor sequences are represented in the brain: The search for a chronotopic map

Jeff Seely: February 5th

Title: State-space models for cortical-muscle transformations

Monday, January 27, 2014

Peter Orbanz: January 29th

Title: Nonparametric Bayesian models of graphs