Tuesday, September 29, 2015

Mayur Mudigonda: October 15th

Mayur Mudigonda is visiting from the Redwood Center at UC Berkeley. We will meet at 1pm on Thursday, October 15th, in room 502 NWC.

Title: Hamiltonian Monte Carlo Without Detailed Balance

Abstract:
We present a method for performing Hamiltonian Monte Carlo that largely eliminates sample rejection. In situations that would normally lead to rejection, instead a longer trajectory is computed until a new state is reached that can be accepted. This is achieved using Markov chain transitions that satisfy the fixed point equation, but do not satisfy detailed balance. The resulting algorithm significantly suppresses the random walk behavior and wasted function evaluations that are typically the consequence of update rejection. We demonstrate a greater than factor of two improvement in mixing time on three test problems. We release the source code as Python and MATLAB packages. 

Link:
http://arxiv.org/abs/1409.5191
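For orientation before the talk, here is a minimal sketch of standard HMC (leapfrog integration followed by a Metropolis accept/reject step) on a 2-D Gaussian target. The names and parameter values are invented for the example; the paper's modification is, roughly, to keep integrating the trajectory where the sketch below would reject, and that part is not implemented here.

import numpy as np

# Minimal standard HMC on a 2-D Gaussian target. The paper's variant would
# replace the reject branch below by integrating the trajectory further.

def log_prob(x):
    return -0.5 * np.sum(x ** 2)               # standard normal target

def grad_log_prob(x):
    return -x

def leapfrog(x, p, eps, n_steps):
    p = p + 0.5 * eps * grad_log_prob(x)        # half step for momentum
    for _ in range(n_steps - 1):
        x = x + eps * p                         # full step for position
        p = p + eps * grad_log_prob(x)          # full step for momentum
    x = x + eps * p
    p = p + 0.5 * eps * grad_log_prob(x)        # final half step
    return x, p

def hmc(n_samples, x0, eps=0.2, n_steps=20, seed=0):
    rng = np.random.default_rng(seed)
    x, samples = np.array(x0, dtype=float), []
    for _ in range(n_samples):
        p = rng.standard_normal(x.shape)
        x_new, p_new = leapfrog(x, p, eps, n_steps)
        # Metropolis-Hastings correction, which enforces detailed balance
        log_accept = (log_prob(x_new) - 0.5 * p_new @ p_new) - (log_prob(x) - 0.5 * p @ p)
        if np.log(rng.uniform()) < log_accept:
            x = x_new                           # accept
        # else: plain HMC rejects here; the paper's method keeps integrating instead
        samples.append(x.copy())
    return np.array(samples)

samples = hmc(1000, x0=[0.0, 0.0])
print(samples.mean(axis=0), samples.var(axis=0))

The packages released with the paper implement the actual rejection-free scheme.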

Friday, August 28, 2015

John Choi: September 3rd

Note that this seminar will be at 10:30am on Thursday, not the usual Wednesday lab meeting time.


Title: Optimal Control for Developing Somatosensory Neural Prosthetics


Abstract: Lost sensations, such as touch, could one day be restored by electrical or optogenetic stimulation along the sensory neural pathways. Used in conjunction with next-generation prosthetic limbs, this stimulation could artificially provide cutaneous and proprioceptive feedback to the user. Microstimulation of somatosensory brain regions has been shown to produce modality- and place-specific percepts, and while psychophysical experiments in rats and primates have elucidated the range of perceptual sensitivities to certain stimulus parameters, little work has been done on developing encoding models for translating mechanical sensor readings to microstimulation. In particular, generating spatiotemporal patterns for explicitly evoking naturalistic neural activation has not yet been explored. We therefore approach the problem of building a sensory neural prosthesis by first modeling the dynamical input-output relationship between multichannel microstimulation and subsequent field potentials, and then optimizing the input pattern for evoking naturally occurring touch responses as closely as possible, while constraining inputs within safety bounds and the operating regime of our model. In my work, I focused on the hand regions of VPL thalamus and S1 cortex of anesthetized rats and showed that such optimization produces responses that are highly similar to their natural counterparts. The evoked responses also preserved most of the information of physical touch parameters such as amplitude and stimulus location. This suggests that such stimulus optimization approaches could be sufficient for restoring naturalistic levels of information transfer for an afferent neuroprosthetic.
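The two-stage recipe described above (fit a forward model from stimulation to evoked responses, then optimize the stimulus under safety constraints) can be illustrated with a deliberately simplified sketch. Everything below is an invented stand-in rather than the talk's actual models or data: a static linear forward model fit by ridge regression, and a box-constrained projected gradient search for the stimulation pattern.

import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the real pipeline: (1) fit a linear map from multichannel
# stimulation to the evoked response, (2) optimize the stimulus to match a
# target "natural" response under amplitude (safety) bounds.
n_trials, n_inputs, n_outputs = 500, 16, 8
A_true = rng.standard_normal((n_outputs, n_inputs)) / np.sqrt(n_inputs)

U = rng.standard_normal((n_trials, n_inputs))             # past stimulation patterns
Y = U @ A_true.T + 0.1 * rng.standard_normal((n_trials, n_outputs))

# (1) Ridge-regression fit of the forward model Y ~ U A^T
lam = 1e-2
A_hat = np.linalg.solve(U.T @ U + lam * np.eye(n_inputs), U.T @ Y).T

# (2) Projected gradient descent: find u minimizing ||A_hat u - y_target||^2
#     subject to |u_i| <= u_max (a crude proxy for charge-safety limits).
y_target = rng.standard_normal(n_outputs)                  # stand-in for a natural response
u_max, step = 0.5, 1e-2
u = np.zeros(n_inputs)
for _ in range(2000):
    grad = A_hat.T @ (A_hat @ u - y_target)
    u = np.clip(u - step * grad, -u_max, u_max)            # gradient step + projection

print("residual:", np.linalg.norm(A_hat @ u - y_target))

The real problem is dynamical and multichannel with richer constraints, but the fit-then-optimize structure is the same.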

Josh Merel and Ari Pakman: August 19th

This week Josh and Ari will regale us with tales from their adventures at the recent Deep Learning Summer School in Montreal. They'll discuss trends and highlights and provide pointers to some interesting ideas.

Evan Archer: August 12th

For Wednesday's neurostat seminar I'll discuss three closely-related papers that appeared at ICML this year:

 • Variational Inference with Normalizing Flows

 • Deep Unsupervised Learning using Nonequilibrium Thermodynamics

 • Markov Chain Monte Carlo and Variational Inference: Bridging the Gap

Sunday, August 2, 2015

Daniel Soudry: July 29th

Daniel will discuss the following two papers, both concerning stochastic gradient Langevin dynamics:

 • Bayesian Sampling Using Stochastic Gradient Thermostats

 • Bayesian Dark Knowledge
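Both papers build on the stochastic gradient Langevin dynamics (SGLD) update of Welling and Teh (2011), so a toy reminder may be useful. The sketch below shows only the basic SGLD rule on an invented Gaussian-mean problem; neither the thermostat correction nor the distillation step from the two papers above is included.

import numpy as np

# Basic stochastic gradient Langevin dynamics on a toy problem: posterior over
# the mean of a Gaussian with known unit variance, using minibatch gradients.
rng = np.random.default_rng(0)
N = 10_000
data = rng.normal(2.0, 1.0, size=N)          # true mean = 2.0

theta, eps, batch = 0.0, 1e-4, 100
samples = []
for t in range(5000):
    idx = rng.integers(0, N, size=batch)
    # gradient of a N(0, 10^2) log prior plus the rescaled minibatch log likelihood
    grad = -theta / 100.0 + (N / batch) * np.sum(data[idx] - theta)
    theta += 0.5 * eps * grad + np.sqrt(eps) * rng.standard_normal()
    samples.append(theta)

print("posterior mean estimate:", np.mean(samples[1000:]))   # should be close to 2.0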

Kishore Kuchibhotla: June 17th

Title:
Synaptic and circuit logic of task engagement in auditory cortex


Abstract:
Animals can adjust their behavior based on immediate context. A pedestrian will move rapidly away from traffic if she hears a car honk while crossing a street – executing a learned sensorimotor response. The same honk heard by the same pedestrian will not elicit this response if she is seated on a nearby park bench. How do neural circuits enable this type of behavior and flexibly encode the same stimuli in different contexts? Here we dissect the natural activity patterns of the same auditory stimuli in different contexts and show that attentional demands of a behavioral task transform the input-output function in auditory cortex via cholinergic modulation and local inhibition. Mice were trained to perform a go/no-go operant task in response to pure tones in one context (“active context”) and listen to the same pure tones but execute no behavioral response in another context (“passive”). In the active context, tone-evoked responses of layer 2/3 auditory cortical neurons were broadly suppressed when compared to the passive context but a specific sub-network showed increased activity. Neural responses shifted within 1-2 trials after the context switched. Whole-cell voltage clamp recordings in behaving mice showed larger context-dependent changes in inhibition than excitation, and the two sets of inputs sometimes changed in opposing directions. Attentional demands appear to reduce the necessity of co-tuned synaptic inputs, an otherwise established requirement in passive brain states. Task engagement elevated tone-evoked responses in PV-positive interneurons and suppressed VIP-positive interneuron responses, implicating both in the context-dependent changes to layer 2/3 output. Global behavioral context, in this case the attentional demands in the active context, was relayed to the auditory cortex by the nucleus basalis, as revealed by axonal calcium imaging of NB cholinergic projections. Thus, local synaptic inhibition gates long-range cholinergic modulation from NB to rapidly alter auditory cortical output, temporarily removing the requirement of co-tuned excitatory and inhibitory inputs, and improving perceptual flexibility.

Sunday, May 17, 2015

Patrick Stinson: May 20th

Abstract: I'll present Lindsten and Schoen's review of SMC-based backward simulation methods. The most immediate application of backward simulation is to address state smoothing problems in sequential models; however, this method can be generalized to non-Markovian latent variable models. Particle MCMC is a new method that incorporates SMC-based proposal schemes into MCMC algorithms. Backward simulation and a related method, ancestral sampling, can dramatically increase particle efficiency and mixing in this setting.

Paper: "Backward Simulation Methods for Monte Carlo Statistical Inference" by Fredrik Lindsten and Thomas B. Schoen

Link: http://users.isy.liu.se/en/rt/lindsten/publications/LindstenS_2013.pdf
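As a warm-up for the review, here is a self-contained sketch of the simplest setting it covers: a bootstrap particle filter followed by backward simulation of a single smoothed trajectory in a 1-D linear-Gaussian state-space model. The model and parameter values are invented for the example, and none of the particle MCMC or ancestor sampling machinery is shown.

import numpy as np

# Bootstrap particle filter + backward simulation smoother for the model
# x_t = a x_{t-1} + v_t,  y_t = x_t + e_t  (1-D linear Gaussian).
rng = np.random.default_rng(0)
T, N, a, q, r = 100, 500, 0.9, 0.5, 1.0

# simulate data
x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t-1] + rng.normal(0, np.sqrt(q))
y = x + rng.normal(0, np.sqrt(r), size=T)

# forward particle filter, storing particles and weights for every t
particles = np.zeros((T, N)); weights = np.zeros((T, N))
p = rng.normal(0, 1, N)
for t in range(T):
    if t > 0:
        idx = rng.choice(N, N, p=weights[t-1])        # multinomial resampling
        p = a * particles[t-1, idx] + rng.normal(0, np.sqrt(q), N)
    logw = -0.5 * (y[t] - p) ** 2 / r
    w = np.exp(logw - logw.max()); w /= w.sum()
    particles[t], weights[t] = p, w

# backward simulation: draw one smoothed trajectory from the particle approximation
traj = np.zeros(T)
j = rng.choice(N, p=weights[T-1])
traj[T-1] = particles[T-1, j]
for t in range(T-2, -1, -1):
    # reweight time-t particles by the transition density to the sampled x_{t+1}
    logw = np.log(weights[t]) - 0.5 * (traj[t+1] - a * particles[t]) ** 2 / q
    w = np.exp(logw - logw.max()); w /= w.sum()
    traj[t] = particles[t, rng.choice(N, p=w)]

print("smoothed trajectory RMSE:", np.sqrt(np.mean((traj - x) ** 2)))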

Josh Merel: May 13th

Josh will give a recap of interesting happenings from the recent International Conference on Learning Representations (ICLR).

Friday, April 17, 2015

Dean Freestone: April 22nd

Title: Data-driven mesoscopic computational modeling

Abstract: The talk will focus on two types of data-driven mesoscopic modeling. The first is known as neural field modeling, and the second as neural mass modeling. It has been demonstrated that it is possible to estimate fast changing state variables (population firing rates or mean membrane potentials) and slowly changing parameters (connectivity strengths, time constants, and firing thresholds) from real electrophysiological data. The ability to accurately estimate such quantities provides an opportunity to visualize aspects of brain function that are normally hidden when performing in-vivo studies. The talk will provide an update on efforts to improve and simplify estimation algorithms so that these ideas are more useful to the wider community.
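The estimation idea in the abstract (fast states and slowly drifting parameters inferred jointly from recordings) can be illustrated on a toy problem far simpler than a neural field or neural mass model: a scalar autoregressive state whose coefficient drifts slowly, tracked with an extended Kalman filter on the augmented state. Everything below is an invented stand-in, not the models from the talk.

import numpy as np

# Joint state/parameter estimation: a fast state x_t with a slowly drifting
# "connectivity" parameter a_t, estimated with an EKF on the augmented state
# z_t = (x_t, a_t) from noisy observations y_t = x_t + noise.
rng = np.random.default_rng(1)
T = 2000
a_true = 0.5 + 0.4 * np.sin(np.linspace(0, 2 * np.pi, T))   # slow parameter drift
x = np.zeros(T)
for t in range(1, T):
    x[t] = a_true[t] * x[t-1] + rng.normal(0, 0.5)
y = x + rng.normal(0, 0.5, T)

z = np.array([0.0, 0.0])                      # estimate of [x, a]
P = np.eye(2)
Q = np.diag([0.25, 1e-5])                     # fast state noise, slow parameter drift
R = 0.25
H = np.array([[1.0, 0.0]])
a_est = np.zeros(T)
for t in range(T):
    # predict: x' = a * x (nonlinear in the augmented state), a' = a
    F = np.array([[z[1], z[0]], [0.0, 1.0]])  # Jacobian of the transition
    z = np.array([z[1] * z[0], z[1]])
    P = F @ P @ F.T + Q
    # update with the observation y_t
    S = H @ P @ H.T + R
    K = P @ H.T / S
    z = z + (K * (y[t] - z[0])).ravel()
    P = P - K @ H @ P
    a_est[t] = z[1]

print("mean abs parameter error:", np.mean(np.abs(a_est[200:] - a_true[200:])))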

Tuesday, March 17, 2015

Johannes Friedrich: April 1st

Title: Goal-directed decision making with spiking neurons

Abstract: Behavioral and neuroscientific data on reward-based decision making point to a fundamental distinction between habitual and goal-directed action selection. The formation of habits, which requires simple updating of cached values, has been studied in great detail, and the reward prediction error theory of dopamine function has enjoyed prominent success in accounting for its neural bases. In contrast, the neural circuit mechanisms of goal-directed decision making, which requires extended iterative computations to estimate values online, are still unknown. Here we present a spiking neural network that provably solves the difficult online value estimation problem underlying goal-directed decision making in a near-optimal way, and reproduces behavioral as well as neurophysiological experimental data on tasks ranging from simple binary choice to sequential decision making. Our model uses local plasticity rules to learn the synaptic weights of a remarkably simple neural network to achieve optimal performance, and solves one-step decision making tasks, commonly considered in neuroeconomics, as well as more challenging sequential decision making tasks within a second. These decision times, and their parametric dependence on task parameters, as well as the final choice probabilities match behavioral data, while the evolution of neural activities in the network closely mimics neural responses recorded in frontal cortices during the execution of such tasks. Our theory provides a principled framework to understand the neural underpinning of goal-directed decision making and makes novel predictions for sequential decision making tasks with multiple rewards.
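For readers unfamiliar with the term, the "online value estimation problem" mentioned above is, in its simplest textbook form, the computation performed by value iteration on a Markov decision process. The sketch below shows only that generic computation on a small random MDP; the spiking network and plasticity rules from the talk are not reproduced.

import numpy as np

# Value iteration on a tiny random MDP: the iterative computation that
# goal-directed ("model-based") action selection must carry out online.
rng = np.random.default_rng(0)
n_states, n_actions, gamma = 6, 2, 0.9
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))   # transition probabilities
R = rng.uniform(0, 1, size=(n_states, n_actions))                   # expected rewards

V = np.zeros(n_states)
for _ in range(200):                        # iterate to the fixed point
    Q = R + gamma * P @ V                   # Q[s, a] = R[s, a] + gamma * sum_s' P[s, a, s'] V[s']
    V = Q.max(axis=1)

policy = Q.argmax(axis=1)                    # greedy (goal-directed) action in each state
print("state values:", np.round(V, 2), "policy:", policy)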

Monday, March 16, 2015

Scott Linderman: March 18th

Title: Discovering latent structure in neural spike trains with negative binomial generalized linear models

Abstract: The steady expansion of neural recording capability provides exciting opportunities to discover unexpected patterns and gain new insights into neural computation. Realizing these gains requires statistical methods for extracting interpretable structure from large-scale neural recordings. In this talk I will present our recent work on methods that reveal such structure in simultaneously recorded multi-neuron spike trains. We use generalized linear models (GLMs) with negative-binomial observations, which provide a flexible model for spike trains. Interpretable properties such as latent cell types, features, and hidden states of the network are incorporated into the model as latent variables that mediate the functional connectivity of the GLM. We exploit recent innovations in negative binomial regression to perform efficient Bayesian inference using MCMC and variational methods. We apply our methods to neural recordings from primate retina and rat hippocampal place cells.
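To make the model class concrete, here is a toy simulation from a negative-binomial GLM with autoregressive coupling between neurons. The parameter values are invented, and none of the MCMC or variational inference machinery from the talk is shown.

import numpy as np

# Simulating a small network from a negative-binomial GLM with coupling from
# the previous time bin (the generative model class; inference not shown).
rng = np.random.default_rng(0)
n_neurons, T, r = 5, 1000, 4.0             # r = NB dispersion ("shape") parameter

b = rng.normal(-1.0, 0.3, n_neurons)       # baseline log rates
W = 0.1 * rng.standard_normal((n_neurons, n_neurons))   # functional coupling weights

counts = np.zeros((T, n_neurons), dtype=int)
for t in range(1, T):
    psi = b + W @ counts[t-1]              # linear predictor from previous-bin counts
    mu = np.exp(np.clip(psi, -10, 5))      # NB mean per neuron (clipped for stability)
    p = r / (r + mu)                       # numpy's NB: mean = r * (1 - p) / p = mu
    counts[t] = rng.negative_binomial(r, p)

print("mean counts per bin:", counts.mean(axis=0))
print("variance / mean (overdispersion):", counts.var(axis=0) / counts.mean(axis=0))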

Friday, February 20, 2015

Rajesh Ranganath: February 25th

Title: "Black Box Variational Inference"

Abstract: Variational inference has become a widely used method to approximate posteriors in complex latent variable models. However, deriving a variational inference algorithm generally requires significant model-specific analysis, and these efforts can hinder and deter us from quickly developing and exploring a variety of models for a problem at hand. We present a "black box" variational inference algorithm, one that can be quickly applied to many models with little additional derivation. Our method is based on a stochastic optimization of the variational objective where the noisy gradient is computed from Monte Carlo samples from the variational distribution. We develop a number of methods to reduce the variance of the gradient, always maintaining the criterion that we want to avoid difficult model-based derivations. We evaluate our method against the corresponding black box sampling based methods. We find that our method reaches better predictive likelihoods much faster than sampling methods. Finally, we demonstrate that Black Box Variational Inference lets us easily explore a wide space of models by quickly constructing and evaluating several models of longitudinal healthcare data.
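The core ingredient is the score-function gradient of the variational objective, which needs only evaluations of log p(x, z). Below is a minimal sketch on an invented Gaussian-mean model with a Gaussian variational family; the step sizes and the crude baseline used for variance reduction are choices made for the example, not the paper's Rao-Blackwellized estimators.

import numpy as np

# Score-function ("black box") gradient of the ELBO for a toy model: Gaussian
# likelihood with unknown mean, Gaussian prior, Gaussian variational family
# q(z) = N(m, s^2). Only evaluations of log p(x, z) are required.
rng = np.random.default_rng(0)
x = rng.normal(3.0, 1.0, size=50)                          # observed data

def log_joint(z):
    return -0.5 * z**2 / 10.0 - 0.5 * np.sum((x - z) ** 2)  # N(0, 10) prior, unit-variance likelihood

m, log_s, lr, S = 0.0, 0.0, 1e-3, 64
for it in range(3000):
    s = np.exp(log_s)
    z = m + s * rng.standard_normal(S)                     # samples from q
    log_q = -0.5 * ((z - m) / s) ** 2 - np.log(s)          # log q up to a constant
    f = np.array([log_joint(zi) for zi in z]) - log_q      # ELBO integrand at each sample
    f = f - f.mean()                                       # crude baseline (not the paper's estimators)
    dm = (z - m) / s**2                                    # d/dm log q
    dls = ((z - m) ** 2) / s**2 - 1.0                      # d/dlog_s log q
    m += lr * np.mean(dm * f)
    log_s += lr * np.mean(dls * f)

# exact posterior for comparison: variance 1 / (1/10 + 50), mean = post_var * sum(x)
print("fitted q: mean %.3f, sd %.3f" % (m, np.exp(log_s)))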

Friday, February 13, 2015

Josh Merel: Feb 18th


ADADELTA and LSTMs

We will discuss the ADADELTA paper:
http://arxiv.org/pdf/1212.5701v1.pdf

and talk about LSTM layers. The original paper is (for reference): http://deeplearning.cs.cmu.edu/pdfs/Hochreiter97_lstm.pdf
But we will probably discuss a more recent result that uses LSTM layers with slightly different notation.
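For reference, the ADADELTA update itself is only a few lines. Here is a sketch applied to an invented, badly scaled quadratic problem (the LSTM part of the discussion is not covered).

import numpy as np

# The ADADELTA update rule on a toy, badly scaled quadratic. No global
# learning rate is tuned by hand; each parameter's step is RMS[dx] / RMS[g]
# times its gradient.
def adadelta_step(x, grad, state, rho=0.95, eps=1e-6):
    state["Eg2"] = rho * state["Eg2"] + (1 - rho) * grad**2           # accumulate E[g^2]
    dx = -np.sqrt(state["Edx2"] + eps) / np.sqrt(state["Eg2"] + eps) * grad
    state["Edx2"] = rho * state["Edx2"] + (1 - rho) * dx**2           # accumulate E[dx^2]
    return x + dx, state

A = np.diag([100.0, 1.0])                       # curvatures differing by 100x
loss = lambda x: 0.5 * x @ A @ x
x = np.array([1.0, 1.0])
state = {"Eg2": np.zeros(2), "Edx2": np.zeros(2)}
print("initial loss:", loss(x))
for _ in range(2000):
    x, state = adadelta_step(x, A @ x, state)
print("final loss:", loss(x), "at x =", x)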


Thursday, January 29, 2015

Uygar Sümbül: Feb 4th

Towards automated segmentation of neurons from Brainbow images

A necessary step to learn how neural circuits account for observed behaviors in health and disease is to map the connectivity of individual neurons. Stochastic expression of fluorescent proteins with different colors where individual cells express one of many distinguishable hues – the Brainbow method – has generated striking images of nervous tissues. However, its use has been limited. One basic shortcoming has been the various noise sources in fluorescence expression within individual neurons. Here, we propose a method to automate the segmentation of neurons in Brainbow image stacks using spectral clustering.
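As a pointer to the clustering step mentioned at the end of the abstract, here is a generic spectral clustering sketch on synthetic "color" vectors standing in for per-voxel Brainbow channel intensities. The data, affinity choice, and parameters are invented for the example, and none of the actual image processing from the talk is reproduced.

import numpy as np

# Generic spectral clustering on synthetic color vectors: Gaussian affinities,
# normalized graph Laplacian, eigenvector embedding, then plain k-means.
rng = np.random.default_rng(0)

hues = np.array([[1.0, 0.1, 0.1], [0.1, 1.0, 0.1], [0.1, 0.1, 1.0]])  # three "neurons"
labels_true = np.repeat([0, 1, 2], 100)
X = hues[labels_true] + 0.15 * rng.standard_normal((300, 3))          # noisy expression
X = X / np.linalg.norm(X, axis=1, keepdims=True)                      # discount brightness

# Gaussian affinity and symmetric normalized Laplacian L = I - D^-1/2 W D^-1/2
D2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
W = np.exp(-D2 / (2 * 0.2**2))
d = W.sum(axis=1)
L = np.eye(len(X)) - W / np.sqrt(d)[:, None] / np.sqrt(d)[None, :]

# embed with the eigenvectors of the k smallest eigenvalues, then run k-means
k = 3
_, eigvecs = np.linalg.eigh(L)                                        # ascending eigenvalues
E = eigvecs[:, :k]
E = E / np.linalg.norm(E, axis=1, keepdims=True)

centers = E[rng.choice(len(E), k, replace=False)]
for _ in range(50):
    assign = np.argmin(((E[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.array([E[assign == j].mean(axis=0) if np.any(assign == j) else centers[j]
                        for j in range(k)])

print("recovered cluster sizes:", np.bincount(assign, minlength=k))   # ideally ~100 each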


Monday, January 26, 2015

Brian DePasquale: Jan 28th

Embedding low-dimensional continuous dynamical systems in recurrently connected spiking neural networks

Despite recent advances in training recurrently connected firing-rate networks, the application of supervised learning algorithms to biologically plausible recurrently connected spiking neural networks remains a challenge. Such models, when trained to directly replicate neural data, hold great promise as powerful tools for understanding dynamic computation in biologically realistic neural circuits. In this talk I will discuss our progress in the training of recurrently connected spiking networks, the application of our training framework to neural population data and a novel interpretation of continuous neural signals that arises within the context of these models.
     Extending the iterative supervised learning algorithm of Sussillo & Abbott [2009], we have made several critical observations about the conditions necessary for successfully training recurrent spiking networks. Due to their impoverished short-term memory, multiple signals that form a “dynamically complete” basis must be trained simultaneously for successful training. I will illustrate this by showing a variety of examples of spiking neural networks replicating the dynamics of both autonomous and non-autonomous linear and non-linear continuous dynamical systems. Additionally, I will discuss recent efforts to incorporate a variety of network optimization constraints such that the learned connectivity matrices obey common constraints of biological networks, including sparsity and Dale’s Law. Finally, I will discuss our efforts to fit spiking models to population data from the isolated nervous system of the leech.
     Once trained, our models can be viewed as a low-dimensional, continuous dynamical system - traditionally modeled with firing-rate networks - embedded in a high-dimensional, spiking dynamical system. In light of this view, I will present a novel interpretation of firing-rate models and smoothly varying signals in general. Traditionally a continuous neural signal modeled as a “firing-rate unit” was a simplified representation of a pool of identical but noisy spiking neurons. In our formulation, each continuous neural signal represents an overlapping population of spiking neurons and is thus more akin to the multiple, continuous population trajectories one would uncover from experimental data via dimensionality reduction. By allowing these continuous signals to be constructed from overlapping pools of spiking neurons, our framework requires far fewer spiking neurons to arrive at the equivalent, traditional rate description.
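For background on the algorithm being extended, here is a minimal rate-based version of the Sussillo & Abbott (2009) recursive least squares (FORCE) scheme, training a fed-back readout of a random recurrent network to produce a sine wave. The network size, time constants, and target are invented for the example, and the spiking extension discussed in the talk is not implemented.

import numpy as np

# Rate-based FORCE / recursive least squares: train the readout weights of a
# chaotic recurrent network, with the readout fed back, to follow a sine wave.
rng = np.random.default_rng(0)
N, g, dt, tau = 500, 1.5, 1e-3, 1e-2
J = g * rng.standard_normal((N, N)) / np.sqrt(N)      # random recurrent weights
w_fb = rng.uniform(-1, 1, N)                          # feedback weights
w_out = np.zeros(N)                                   # trained readout
P = np.eye(N)                                         # RLS inverse-correlation matrix

T = 20000
target = np.sin(2 * np.pi * np.arange(T) * dt)        # 1 Hz target signal
x = 0.5 * rng.standard_normal(N)
err_hist = []
for t in range(T):
    r = np.tanh(x)
    z = w_out @ r
    x += dt / tau * (-x + J @ r + w_fb * z)           # network dynamics with feedback
    if t % 2 == 0 and t < 15000:                      # RLS update during training only
        Pr = P @ r
        k = Pr / (1.0 + r @ Pr)
        P -= np.outer(k, Pr)
        w_out += (target[t] - z) * k
    err_hist.append(abs(z - target[t]))

print("mean |error| over the final (test) second:", np.mean(err_hist[-1000:]))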

Monday, January 19, 2015

Daniel Soudry: Jan 21st

Daniel Soudry will talk about the following paper:

Title: Fixed-form variational posterior approximation through stochastic linear regression

Authors: Tim Salimans and David A. Knowles

Abstract: We propose a general algorithm for approximating nonstandard Bayesian posterior distributions. The algorithm minimizes the Kullback-Leibler divergence of an approximating distribution to the intractable posterior distribution. Our method can be used to approximate any posterior distribution, provided that it is given in closed form up to the proportionality constant. The approximation can be any distribution in the exponential family or any mixture of such distributions, which means that it can be made arbitrarily precise. Several examples illustrate the speed and accuracy of our approximation method in practice.
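Roughly, the "stochastic linear regression" in the title comes from the fact that, for an exponential-family approximation q(z) proportional to exp(eta^T T(z)), the KL-optimal eta can be read off a regression of log p(x, z) on the sufficient statistics under samples from q. The sketch below runs that recursion on an invented Gaussian-mean model where the exact posterior is known; since the Gaussian family contains the exact posterior here, the fitted mean and variance should agree with the conjugate solution up to Monte Carlo error.

import numpy as np

# Stochastic linear regression for fixed-form VI on a toy model: Gaussian
# likelihood with unknown mean z, Gaussian prior, Gaussian q(z). Sufficient
# statistics T(z) = (1, z, z^2); the regression coefficients of log p(x, z)
# on T(z), computed under samples from q, give the natural parameters of q.
rng = np.random.default_rng(0)
x = rng.normal(1.5, 1.0, size=20)
prior_var = 4.0

def log_joint(z):
    return -0.5 * z**2 / prior_var - 0.5 * np.sum((x - z) ** 2)

def sample_q(eta):
    var = -1.0 / (2.0 * eta[2])                  # from the Gaussian natural parameters
    return rng.normal(eta[1] * var, np.sqrt(var))

eta = np.array([0.0, 0.0, -0.5])                 # start from a standard normal q
C, g, w = np.eye(3), np.array([0.0, 0.0, -0.5]), 0.05
for t in range(5000):
    z = sample_q(eta)
    T = np.array([1.0, z, z * z])
    C = (1 - w) * C + w * np.outer(T, T)         # running estimate of E_q[T T^T]
    g = (1 - w) * g + w * T * log_joint(z)       # running estimate of E_q[T log p(x, z)]
    eta = np.linalg.solve(C, g)                  # regression coefficients
    eta[2] = min(eta[2], -1e-3)                  # keep q a proper (finite-variance) Gaussian

var_q = -1.0 / (2.0 * eta[2])
post_var = 1.0 / (1.0 / prior_var + len(x))      # exact conjugate posterior for comparison
print("fitted q : mean %.3f, var %.4f" % (eta[1] * var_q, var_q))
print("exact    : mean %.3f, var %.4f" % (post_var * np.sum(x), post_var))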