Computational Statistics and Neuroscience

We meet on Wednesdays at 1pm, in the 10th floor conference room of the Statistics Department, 1255 Amsterdam Ave, New York, NY.
Mayur Mudigonda: October 15th

Mayur Mudigonda is visiting from the Redwood Center at UC Berkeley. We will meet at 1pm on Thursday, October 15th, in room 502 NWC.

Title: Hamiltonian Monte Carlo Without Detailed Balance

Abstract: We present a method for performing Hamiltonian Monte Carlo that largely eliminates sample rejection. In situations that would normally lead to rejection, a longer trajectory is computed instead, until a new state is reached that can be accepted. This is achieved using Markov chain transitions that satisfy the fixed-point equation but do not satisfy detailed balance.
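For contrast with the talk's rejection-free variant, here is a minimal sketch of standard HMC with the Metropolis accept/reject step the paper seeks to avoid. The toy 1-D Gaussian target, step size, and trajectory length are illustrative choices of ours, not from the paper:

```python
import numpy as np

def hmc_step(x, logp, logp_grad, eps=0.1, n_leap=20, rng=None):
    """One standard HMC step, including the Metropolis accept/reject
    that the detailed-balance-free variant replaces."""
    rng = rng or np.random.default_rng()
    p = rng.standard_normal(x.shape)               # resample momentum
    x_new, p_new = x.copy(), p.copy()
    # leapfrog integration of Hamiltonian dynamics
    p_new = p_new + 0.5 * eps * logp_grad(x_new)
    for _ in range(n_leap - 1):
        x_new = x_new + eps * p_new
        p_new = p_new + eps * logp_grad(x_new)
    x_new = x_new + eps * p_new
    p_new = p_new + 0.5 * eps * logp_grad(x_new)
    # Metropolis correction: a rejection wastes the whole trajectory
    h_old = -logp(x) + 0.5 * np.sum(p ** 2)
    h_new = -logp(x_new) + 0.5 * np.sum(p_new ** 2)
    if np.log(rng.uniform()) < h_old - h_new:
        return x_new, True
    return x, False

# toy target: standard normal
logp = lambda x: -0.5 * np.sum(x ** 2)
grad = lambda x: -x
rng = np.random.default_rng(0)
x, samples = np.zeros(1), []
for _ in range(2000):
    x, _ = hmc_step(x, logp, grad, rng=rng)
    samples.append(float(x[0]))
```

In the talk's method, the `return x, False` branch is what goes away: instead of discarding the trajectory, the dynamics are continued until an acceptable state is found.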
The resulting algorithm significantly suppresses the random-walk behavior and wasted function evaluations that typically result from update rejection. We demonstrate a greater than factor-of-two improvement in mixing time on three test problems. We release the source code as Python and MATLAB packages.

Link: http://arxiv.org/abs/1409.5191

John Choi: September 3rd

Note that this seminar will be at 10:30am on Thursday, not at the usual Wednesday lab meeting time.

Title: Optimal Control for Developing Somatosensory Neural Prosthetics

Abstract: Lost sensations, such as touch, could one day be restored by electrical or optogenetic stimulation along the sensory neural pathways. Used in conjunction with next-generation prosthetic limbs, this stimulation could artificially provide cutaneous and proprioceptive feedback to the user.
Microstimulation of somatosensory brain regions has been shown to produce modality- and place-specific percepts, and while psychophysical experiments in rats and primates have elucidated the range of perceptual sensitivities to certain stimulus parameters, little work has been done on developing encoding models that translate mechanical sensor readings into microstimulation. In particular, generating spatiotemporal patterns that explicitly evoke naturalistic neural activation has not yet been explored. We therefore approach the problem of building a sensory neural prosthesis by first modeling the dynamical input-output relationship between multichannel microstimulation and the subsequent field potentials, and then optimizing the input pattern to evoke naturally occurring touch responses as closely as possible, while constraining inputs within safety bounds and the operating regime of our model. In my work, I focused on the hand regions of VPL thalamus and S1 cortex of anesthetized rats and showed that such optimization produces responses that are highly similar to their natural counterparts. The evoked responses also preserved most of the information about physical touch parameters such as amplitude and stimulus location. This suggests that such stimulus optimization approaches could be sufficient for restoring naturalistic levels of information transfer in an afferent neuroprosthetic.

Josh Merel and Ari Pakman: August 19th

This week Josh and Ari will regale us with tales from their adventures at the recent Deep Learning Summer School in Montreal.
They'll discuss trends and highlights and provide pointers to some interesting ideas.

Evan Archer: August 12th

For Wednesday's neurostat seminar I'll discuss three closely related papers that appeared at ICML this year:

• Variational Inference with Normalizing Flows: http://jmlr.org/proceedings/papers/v37/rezende15.pdf

• Deep Unsupervised Learning using Nonequilibrium Thermodynamics: http://jmlr.org/proceedings/papers/v37/sohl-dickstein15.pdf

• Markov Chain Monte Carlo and Variational Inference: Bridging the Gap: http://jmlr.org/proceedings/papers/v37/salimans15.pdf

Daniel Soudry: July 29th

Daniel will discuss the following two papers, both concerning stochastic gradient Langevin dynamics:

• Bayesian Sampling Using Stochastic Gradient Thermostats: http://people.ee.duke.edu/~lcarin/sgnht-4.pdf

• Bayesian Dark Knowledge: http://arxiv.org/abs/1506.04416

Kishore Kuchibhotla: June 17th

Title: Synaptic and circuit logic of task engagement in auditory cortex

Abstract: Animals can adjust their behavior based on immediate context. A pedestrian will move rapidly away from traffic if she hears a car honk while crossing a street, executing a learned sensorimotor response. The same honk heard by the same pedestrian will not elicit this response if she is seated on a nearby park bench.
How do neural circuits enable this type of behavior and flexibly encode the same stimuli in different contexts? Here we dissect the activity patterns evoked by the same auditory stimuli in different contexts and show that the attentional demands of a behavioral task transform the input-output function of auditory cortex via cholinergic modulation and local inhibition. Mice were trained to perform a go/no-go operant task in response to pure tones in one context ("active") and to listen to the same pure tones, but execute no behavioral response, in another context ("passive"). In the active context, tone-evoked responses of layer 2/3 auditory cortical neurons were broadly suppressed compared to the passive context, while a specific sub-network showed increased activity. Neural responses shifted within 1-2 trials of a context switch. Whole-cell voltage-clamp recordings in behaving mice showed larger context-dependent changes in inhibition than in excitation, and the two sets of inputs sometimes changed in opposing directions. Attentional demands appear to reduce the necessity of co-tuned synaptic inputs, an otherwise established requirement in passive brain states. Task engagement elevated tone-evoked responses in PV-positive interneurons and suppressed VIP-positive interneuron responses, implicating both in the context-dependent changes to layer 2/3 output. Global behavioral context, in this case the attentional demands of the active context, was relayed to the auditory cortex by the nucleus basalis (NB), as revealed by axonal calcium imaging of NB cholinergic projections.
Thus, local synaptic inhibition gates long-range cholinergic modulation from NB to rapidly alter auditory cortical output, temporarily removing the requirement for co-tuned excitatory and inhibitory inputs and improving perceptual flexibility.

Patrick Stinson: May 20th

Abstract: I'll present Lindsten and Schoen's review of SMC-based backward simulation methods. The most immediate application of backward simulation is to state smoothing problems in sequential models; however, the method generalizes to non-Markovian latent variable models. Particle MCMC is a new method that incorporates SMC-based proposal schemes into MCMC algorithms. Backward simulation and a related method, ancestor sampling, can dramatically increase particle efficiency and mixing in this setting.

Paper: "Backward Simulation Methods for Monte Carlo Statistical Inference" by Fredrik Lindsten and Thomas B. Schoen

Link: http://users.isy.liu.se/en/rt/lindsten/publications/LindstenS_2013.pdf

Josh Merel: May 13th

Josh will give a recap of interesting happenings from the recent International Conference on Learning Representations (ICLR): http://www.iclr.cc/doku.php?id=iclr2015:main

Dean Freestone: April 22nd

Title: Data-driven mesoscopic computational modeling

Abstract: The talk will focus on two types of data-driven mesoscopic modeling. The first is known as neural field modeling, and the second as neural mass modeling.
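As a toy illustration of the neural mass idea (lumped population states evolving under simple differential equations), here is a minimal two-population rate model. All parameter values below are invented for illustration and are not from the talk:

```python
import numpy as np

def sigmoid(v, v0=6.0, r=0.56, vmax=5.0):
    # population firing-rate function (illustrative parameter values)
    return vmax / (1.0 + np.exp(r * (v0 - v)))

def simulate(T=1.0, dt=1e-3, w_ee=1.2, w_ei=1.0, w_ie=1.5,
             tau=0.01, drive=2.0):
    """Euler simulation of a toy excitatory/inhibitory neural mass
    model. States are the mean membrane potentials of the two pools;
    connectivity strengths w_* are the 'lumped' parameters one would
    try to estimate from data."""
    ve, vi = 0.0, 0.0
    trace = []
    for _ in range(int(T / dt)):
        re, ri = sigmoid(ve), sigmoid(vi)
        ve += dt / tau * (-ve + w_ee * re - w_ie * ri + drive)
        vi += dt / tau * (-vi + w_ei * re)
        trace.append(ve)
    return np.array(trace)

trace = simulate()
```

The estimation problem discussed in the talk runs the other way: given noisy measurements of activity, infer the hidden states and the `w_*`-style parameters.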
It has been demonstrated that it is possible to estimate fast-changing state variables (population firing rates or mean membrane potentials) and slowly changing parameters (connectivity strengths, time constants, and firing thresholds) from real electrophysiological data. The ability to accurately estimate such quantities provides an opportunity to visualize aspects of brain function that are normally hidden in in-vivo studies. The talk will provide an update on efforts to improve and simplify the estimation algorithms so that these ideas become more useful to the wider community.

Johannes Friedrich: April 1st

Title: Goal-directed decision making with spiking neurons

Abstract: Behavioral and neuroscientific data on reward-based decision making point to a fundamental distinction between habitual and goal-directed action selection. The formation of habits, which requires simple updating of cached values, has been studied in great detail, and the reward prediction error theory of dopamine function has enjoyed prominent success in accounting for its neural bases. In contrast, the neural circuit mechanisms of goal-directed decision making, which requires extended iterative computations to estimate values online, are still unknown. Here we present a spiking neural network that provably solves the difficult online value estimation problem underlying goal-directed decision making in a near-optimal way, and reproduces behavioral as well as neurophysiological experimental data on tasks ranging from simple binary choice to sequential decision making.
Our model uses local plasticity rules to learn the synaptic weights of a remarkably simple neural network that achieves optimal performance, and it solves one-step decision making tasks, commonly considered in neuroeconomics, as well as more challenging sequential decision making tasks, within a second. These decision times, their parametric dependence on task parameters, and the final choice probabilities all match behavioral data, while the evolution of neural activity in the network closely mimics neural responses recorded in frontal cortices during the execution of such tasks. Our theory provides a principled framework for understanding the neural underpinnings of goal-directed decision making and makes novel predictions for sequential decision making tasks with multiple rewards.

Scott Linderman: March 18th

Title: Discovering latent structure in neural spike trains with negative binomial generalized linear models

Abstract: The steady expansion of neural recording capability provides exciting opportunities to discover unexpected patterns and gain new insights into neural computation. Realizing these gains requires statistical methods for extracting interpretable structure from large-scale neural recordings. In this talk I will present our recent work on methods that reveal such structure in simultaneously recorded multi-neuron spike trains. We use generalized linear models (GLMs) with negative binomial observations, which provide a flexible model for spike trains. Interpretable properties such as latent cell types, features, and hidden states of the network are incorporated into the model as latent variables that mediate the functional connectivity of the GLM. We exploit recent innovations in negative binomial regression to perform efficient Bayesian inference using MCMC and variational methods.
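As a toy illustration of the negative binomial observation model (a sketch of ours, not the authors' code; the dispersion value and simulated design are made up), the log-likelihood under a log link can be written down directly:

```python
import numpy as np
from math import lgamma

def nb_loglik(y, X, w, r=4.0):
    """Negative binomial GLM log-likelihood with log link.
    y: spike counts; X: design matrix; w: weights; r: dispersion.
    Uses the mean parameterization p = mu / (mu + r)."""
    mu = np.exp(X @ w)            # conditional mean spike count
    p = mu / (mu + r)
    ll = 0.0
    for yi, pi in zip(y, p):
        ll += (lgamma(yi + r) - lgamma(r) - lgamma(yi + 1)
               + r * np.log(1 - pi) + yi * np.log(pi))
    return ll

# simulate counts from the model, then check the likelihood is
# higher at the generating weights than at an arbitrary point
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))
w_true = np.array([0.5, -0.3, 0.2])
mu = np.exp(X @ w_true)
r = 4.0
y = rng.negative_binomial(r, r / (r + mu))   # counts with mean mu
```

The extra dispersion parameter `r` is what lets the negative binomial capture the over-dispersed spike counts that a Poisson GLM cannot.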
We apply our methods to neural recordings from primate retina and rat hippocampal place cells.

Rajesh Ranganath: February 25th

Title: "Black Box Variational Inference"

Abstract: Variational inference has become a widely used method for approximating posteriors in complex latent variable models. However, deriving a variational inference algorithm generally requires significant model-specific analysis, and these efforts can hinder and deter us from quickly developing and exploring a variety of models for a problem at hand. We present a "black box" variational inference algorithm, one that can be quickly applied to many models with little additional derivation. Our method is based on stochastic optimization of the variational objective, where the noisy gradient is computed from Monte Carlo samples drawn from the variational distribution. We develop a number of methods to reduce the variance of this gradient, always maintaining the criterion that we want to avoid difficult model-based derivations. We evaluate our method against the corresponding black-box sampling-based methods and find that it reaches better predictive likelihoods much faster. Finally, we demonstrate that Black Box Variational Inference lets us easily explore a wide space of models by quickly constructing and evaluating several models of longitudinal healthcare data.

Josh Merel: Feb 18th

ADADELTA and LSTMs

We will discuss the ADADELTA paper (http://arxiv.org/pdf/1212.5701v1.pdf) and talk about LSTM layers.
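The ADADELTA update itself is compact enough to sketch ahead of the discussion. This follows the accumulate/scale/accumulate structure of the paper's update rule; the toy quadratic objective is our own:

```python
import numpy as np

def adadelta(grad, x0, rho=0.95, eps=1e-6, steps=5000):
    """ADADELTA-style updates: per-dimension step sizes derived from
    running averages of squared gradients and squared updates, with
    no hand-tuned global learning rate."""
    x = np.asarray(x0, dtype=float)
    eg2 = np.zeros_like(x)    # running average of squared gradients
    edx2 = np.zeros_like(x)   # running average of squared updates
    for _ in range(steps):
        g = grad(x)
        eg2 = rho * eg2 + (1 - rho) * g ** 2
        dx = -np.sqrt(edx2 + eps) / np.sqrt(eg2 + eps) * g
        edx2 = rho * edx2 + (1 - rho) * dx ** 2
        x = x + dx
    return x

# toy objective: f(x) = 0.5 * x' A x, minimized at the origin
A = np.diag([1.0, 10.0])
x_min = adadelta(lambda x: A @ x, np.array([3.0, -2.0]))
```

The ratio of the two RMS terms gives each coordinate its own effective step size, which is the paper's answer to choosing a learning rate by hand.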
The original LSTM paper is (for reference): http://deeplearning.cs.cmu.edu/pdfs/Hochreiter97_lstm.pdf. But we will probably discuss a more recent result that uses LSTM layers with slightly different notation.

Uygar Sümbül: Feb 4th

Towards automated segmentation of neurons from Brainbow images

A necessary step in learning how neural circuits account for observed behaviors in health and disease is to map the connectivity of individual neurons. Stochastic expression of fluorescent proteins with different colors, where individual cells express one of many distinguishable hues (the Brainbow method), has generated striking images of nervous tissues. However, its use has been limited. One basic shortcoming has been the various noise sources in fluorescence expression within individual neurons.
Here, we propose a method to automate the segmentation of neurons in Brainbow image stacks using spectral clustering.

Brian DePasquale: Jan 28

Embedding low-dimensional continuous dynamical systems in recurrently connected spiking neural networks

Despite recent advances in training recurrently connected firing-rate networks, the application of supervised learning algorithms to biologically plausible recurrently connected spiking neural networks remains a challenge. Such models, when trained to directly replicate neural data, hold great promise as powerful tools for understanding dynamic computation in biologically realistic neural circuits. In this talk I will discuss our progress in training recurrently connected spiking networks, the application of our training framework to neural population data, and a novel interpretation of continuous neural signals that arises in the context of these models.

Extending the iterative supervised learning algorithm of Sussillo & Abbott [2009], we have made several critical observations about the conditions necessary for successfully training recurrent spiking networks. Due to their impoverished short-term memory, multiple signals that form a "dynamically complete" basis must be trained simultaneously. I will illustrate this with a variety of examples of spiking neural networks replicating the dynamics of both autonomous and non-autonomous, linear and non-linear continuous dynamical systems.
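The recursive least-squares readout update at the heart of that family of algorithms can be sketched as follows. This is a simplified, fixed-activity version with a made-up "network" signal, not the full feedback-driven FORCE loop:

```python
import numpy as np

def rls_readout(R, target, alpha=1.0):
    """FORCE-style recursive least-squares training of a linear
    readout. R: (T, N) firing rates over time; target: (T,) desired
    output. P tracks a running inverse correlation matrix."""
    T, N = R.shape
    w = np.zeros(N)
    P = np.eye(N) / alpha
    for t in range(T):
        r = R[t]
        z = w @ r                    # current readout
        Pr = P @ r
        k = Pr / (1.0 + r @ Pr)      # RLS gain
        P = P - np.outer(k, Pr)
        w = w + (target[t] - z) * k  # error-driven weight update
    return w

# toy "network activity": random tanh mixtures of a few sinusoids
rng = np.random.default_rng(2)
ts = np.linspace(0, 2 * np.pi, 400)
basis = rng.standard_normal((30, 3))
R = np.tanh(np.stack([np.sin(ts), np.cos(ts), np.sin(2 * ts)],
                     axis=1) @ basis.T)
target = np.sin(2 * ts)
w = rls_readout(R, target)
err = float(np.mean((R @ w - target) ** 2))
```

In the full FORCE setting the readout is fed back into the network while learning, which is what stabilizes the recurrent dynamics; here the activity is frozen, so the update reduces to ordinary recursive ridge regression.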
Additionally, I will discuss recent efforts to incorporate a variety of network optimization constraints so that the learned connectivity matrices obey common constraints of biological networks, including sparsity and Dale's Law. Finally, I will discuss our efforts to fit spiking models to population data from the isolated nervous system of the leech.

Once trained, our models can be viewed as a low-dimensional continuous dynamical system (traditionally modeled with firing-rate networks) embedded in a high-dimensional spiking dynamical system. In light of this view, I will present a novel interpretation of firing-rate models and of smoothly varying neural signals in general. Traditionally, a continuous neural signal modeled as a "firing-rate unit" was a simplified representation of a pool of identical but noisy spiking neurons. In our formulation, each continuous neural signal represents an overlapping population of spiking neurons and is thus more akin to the multiple continuous population trajectories one would uncover from experimental data via dimensionality reduction. By allowing these continuous signals to be constructed from overlapping pools of spiking neurons, our framework requires far fewer spiking neurons to arrive at the equivalent, traditional rate description.

Daniel Soudry: Jan 21st

Daniel Soudry will talk about the following paper:

Title: Fixed-form variational posterior approximation through stochastic linear regression (http://projecteuclid.org/euclid.ba/1386166315)

Authors: Tim Salimans and David A. Knowles

Abstract: We propose a general algorithm for approximating nonstandard Bayesian posterior distributions.
The algorithm minimizes the Kullback-Leibler divergence from an approximating distribution to the intractable posterior. Our method can be used to approximate any posterior distribution, provided that it is given in closed form up to the constant of proportionality. The approximation can be any distribution in the exponential family, or any mixture of such distributions, which means that it can be made arbitrarily precise. Several examples illustrate the speed and accuracy of our approximation method in practice.

Moritz Deger: Dec 17th

Dynamics and estimation of large-scale spiking neuronal network models

Computations in neural circuits emerge from the interaction of many neurons. Although accurate single-neuron models exist, little is known about the synaptic connectivity of large-scale circuits of neurons. However, recent experimental techniques enable recordings of the activity of thousands of neurons in parallel. Such data hold the promise that inferences about the underlying synaptic connectivity can be made. I will present two complementary approaches to gain insights into the collective dynamics of neural circuits.

On the one hand, I will report on recent progress in reconstructing networks of 1000 simulated spiking neurons by maximum likelihood estimation of a generalized linear model (GLM), in which a million possible synaptic efficacies have to be determined.
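At toy scale, the maximum-likelihood ingredient can be sketched with a Bernoulli spiking GLM fit by gradient ascent; the simulated covariates and "synaptic" weights below are invented for illustration and stand in for presynaptic spike histories:

```python
import numpy as np

def fit_glm(X, y, lr=0.1, steps=400):
    """Maximum-likelihood fit of a Bernoulli (logistic-link) GLM by
    gradient ascent on the log-likelihood -- a toy stand-in for
    large-scale connectivity reconstruction."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w += lr * X.T @ (y - p) / len(y)  # average log-lik gradient
    return w

# simulate spikes of one neuron driven by 5 "presynaptic" covariates
rng = np.random.default_rng(3)
X = rng.standard_normal((2000, 5))
w_true = np.array([1.0, -1.0, 0.5, 0.0, 0.0])
p = 1.0 / (1.0 + np.exp(-(X @ w_true)))
y = (rng.uniform(size=2000) < p).astype(float)
w_hat = fit_glm(X, y)
```

The reconstruction problem in the talk is this fit scaled up a millionfold: one such regression per neuron, with the recovered weights read out as estimated synaptic efficacies.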
The work demonstrates that reconstructing the connectivity of thousands of neurons is feasible, and that hidden embedded subpopulations can be detected in the reconstructed connectivity.

On the other hand, spiking GLMs are a versatile class of single-neuron models that can represent important features such as intrinsic stochasticity, refractoriness, and spike-frequency adaptation. For this class of models, I will present a new theory of population dynamics which takes into account finite-size fluctuations and accurately describes the population-averaged neural activity over time, in response to arbitrary stimuli. Based on this theory, GLM network models with parameters extracted from data can be used to understand neural processing in realistic networks, and circuit-level information processing can be explained.

Post-NIPS special discussion: Dec 16th, 2-3:30pm

Ferran Diego: Dec 3rd

Identifying Neuronal Activity from Calcium Imaging Sequences

Calcium imaging is an increasingly
popular technique for simultaneously monitoring the neuronal activity of hundreds of cells at single-cell resolution. This makes it an essential tool for studying the spatio-temporal patterns of distributed activity that are crucial determinants of behavioral and cognitive functions such as perception, memory formation, motor activity, decision making, and emotion. However, most approaches focus only on identifying the position of each cell (or parts of cells), either by eye or semi-automatically. In this talk, we therefore present two main approaches for automatically detecting neural activity. The first formulates the identification of the neuronal activity of single cells and of neuronal co-activation within the same framework. That is, we propose a decomposition of the matrix of observations into a product of more than two sparse matrices, with rank decreasing from lower to higher levels, driven by the semantic hierarchy pixel -> neuron -> assembly in the image sequence. In contrast to prior work, we allow both hierarchical and heterarchical relations between lower-level and higher-level concepts. Moreover, the proposed bilevel SHMF (sparse heterarchical matrix factorization) is the first formalism that allows a calcium imaging sequence to be interpreted simultaneously in terms of the constituent neurons, their membership in assemblies, and the time courses of both neurons and assemblies. The second approach describes a unified formulation and algorithm for finding an extremely sparse representation of calcium image sequences in terms of cell locations, cell shapes, spike timings, and impulse responses.
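As a loose illustration of the decomposition idea (spatial footprints times sparse time courses), here is plain multiplicative-update NMF with an l1 term on the activities. This is a toy stand-in of ours, not the bilevel SHMF of the talk:

```python
import numpy as np

def sparse_nmf(Y, k, iters=200, lam=0.01, seed=0):
    """Multiplicative updates for Y ≈ W @ H with all factors
    nonnegative; lam adds an l1 penalty on H to encourage sparse
    activity time courses."""
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    W = rng.uniform(0.1, 1.0, (m, k))
    H = rng.uniform(0.1, 1.0, (k, n))
    for _ in range(iters):
        H *= (W.T @ Y) / (W.T @ W @ H + lam + 1e-9)
        W *= (Y @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

# toy "video": two fixed spatial footprints with sparse time courses
rng = np.random.default_rng(4)
footprints = np.abs(rng.standard_normal((50, 2)))    # pixels x cells
activity = rng.uniform(size=(2, 80)) * (rng.uniform(size=(2, 80)) < 0.2)
Y = footprints @ activity                            # pixels x time
W, H = sparse_nmf(Y, k=2)
recon_err = float(np.linalg.norm(Y - W @ H) / np.linalg.norm(Y))
```

The talk's formalisms go further by stacking more than two such factors (pixel -> neuron -> assembly) and by estimating shapes, spike timings, and impulse responses jointly.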
Solution of a single optimization problem yields cell segmentations and activity estimates that are on par with the state of the art, without the need for heuristic pre- or post-processing.

Franck Polleux: Nov 19th

The talk was canceled.

Ethan S. Bromberg-Martin: Nov 12th

What does information seeking tell us about reinforcement learning?

Conventional theories of reinforcement learning explain how we choose actions to gain rewards, but we also often choose actions to help us predict rewards. This behavior is known as information seeking (or 'early resolution of uncertainty') in economics and as a form of 'observing behavior' in psychology, and it is found in both humans and animals. We recently showed that the preference to gather information about future rewards is signaled by many of the same neurons that signal preferences for appetitive rewards like food and water. This suggests that information seeking and conventional reward seeking share a common neural mechanism.

At the moment, we know very little about the nature of these neural computations. A major roadblock is theoretical: most prominent theories of reinforcement learning were originally designed to account for appetitive reward seeking and are unable to account for information seeking. How can we address this gap in our theories?
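For concreteness, the conventional machinery in question is the temporal-difference update driven by reward prediction errors. A minimal tabular sketch, on a toy chain task of our own construction:

```python
def td0(episodes=2000, alpha=0.1, gamma=1.0):
    """Tabular TD(0) on a deterministic 3-state chain: states
    0 -> 1 -> 2 -> terminal, with reward 1 on entering the terminal
    state. Values are learned from prediction errors (delta), the
    quantity conventionally linked to dopamine signaling."""
    V = [0.0, 0.0, 0.0, 0.0]          # V[3] is the terminal state
    for _ in range(episodes):
        for s in range(3):
            s_next = s + 1
            r = 1.0 if s_next == 3 else 0.0
            delta = r + gamma * V[s_next] - V[s]  # prediction error
            V[s] += alpha * delta                 # value update
    return V

V = td0()   # with gamma = 1, every non-terminal value approaches 1
```

Note what this update cannot express: the states' values depend only on expected reward, so an action that merely resolves uncertainty earlier, without changing expected reward, earns no value. That is the gap the talk addresses.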
I will summarize the state of the field, including my own work and that of others, and use it to propose ways we can revise current theories of reinforcement learning to account for information seeking.

Sundeep Rangan: November 5th

Approximate Message Passing for Inference in Generalized Linear Models

Generalized approximate message passing (GAMP) methods are a powerful new class of inference algorithms designed for generalized linear models, in which an input vector x must be estimated from a noisy, possibly nonlinear function of a transform z = Ax. The methods are based on Gaussian approximations of loopy belief propagation and have the benefit of being computationally extremely simple and general. Moreover, under certain large random transforms, the algorithms are provably Bayes optimal, even in many non-convex problem instances. In this talk, I will provide an overview of GAMP methods and some recent extensions to unknown priors and structured uncertainty. I will also highlight some of the main issues in the convergence of the algorithm and discuss applications to neural connectivity detection from calcium imaging.

Bio: Dr. Rangan received the B.A.Sc. from the University of Waterloo, Canada, and the M.Sc. and Ph.D. from the University of California, Berkeley, all in Electrical Engineering. He has held postdoctoral appointments at the University of Michigan, Ann Arbor, and Bell Labs. In 2000, he co-founded (with four others) Flarion Technologies, a spin-off of Bell Labs that developed Flash-OFDM, the first cellular OFDM data system and a precursor to many 4G wireless technologies. In 2006, Flarion was acquired by Qualcomm Technologies, where Dr. Rangan was a Director of Engineering involved in OFDM infrastructure products.
He joined the ECE department of the NYU Polytechnic School of Engineering in 2010, and he is an IEEE Distinguished Lecturer of the Vehicular Technology Society. His research interests are in wireless communications, signal processing, information theory, and control theory.

Will Fithian: October 22nd

Optimal Inference After Model Selection

To perform inference after model selection, we propose controlling the selective type I error, i.e., the error rate of a test given that it was performed. By doing so, we recover long-run frequency properties among selected hypotheses analogous to those that apply in the classical (non-adaptive) context. Our proposal is closely related to data splitting, and has a similar intuitive justification, but is more powerful. Exploiting the classical theory of Lehmann and Scheffé (1955), we derive most powerful unbiased selective tests and confidence intervals for inference in exponential family models after arbitrary selection procedures.
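The flavor of selective inference shows up already in a one-sided toy case: if a z-statistic is only reported when it exceeds a threshold c, the honest p-value conditions on that selection event. This is our simplified illustration, not the paper's general construction:

```python
from math import erf, sqrt

def phi_tail(z):
    # P(Z > z) for a standard normal Z
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

def selective_p(z, c):
    """Selective p-value for H0: mu = 0 when the statistic is only
    reported because z > c: the truncated-normal tail probability
    P(Z > z | Z > c)."""
    return phi_tail(z) / phi_tail(c)

naive = phi_tail(2.2)                 # ignores the selection step
adjusted = selective_p(2.2, c=1.96)   # conditions on selection
```

The naive tail probability here is about 0.014, while the selective p-value is about 0.56: once you condition on having crossed the threshold, the evidence against the null largely evaporates, which is exactly the selection effect the paper's tests are built to control.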
For linear regression, we derive new selective z-tests that generalize recent proposals for inference after model selection and improve on their power, and new selective t-tests that do not require knowledge of the error variance.</div><div><br /></div><div>This is joint work with Dennis Sun and Jonathan Taylor, available online at <a href="http://arxiv.org/abs/1410.2597" target="_blank">http://arxiv.org/abs/1410.2597</a></div>

Dean Freestone: October 15th<h2>Data-Driven Mean Field Neural Modeling</h2><strong>Abstract:</strong> This talk provides an overview of new methods for functional brain mapping via a process of model inversion. By estimating the parameters of a computational model, we demonstrate a method for tracking functional connectivity and other parameters that influence neural dynamics. The estimation results provide an imaging modality for neural processes that cannot be directly measured using electrophysiological recordings alone. <br />The method is based on approximating brain networks with an interconnected neural mass model. Neural mass models describe the functional activity of the brain from a top-down perspective, capturing particularly important experimental phenomena. The models can be related to biology through lumped quantities, where, for example, resting membrane potentials, reversal potentials and firing thresholds are all lumped into one parameter. The lumping of parameters is the result of a trade-off between biological realism, where insights into brain mechanisms can still be gained, and parsimony, where models can be inverted and fit to patient-specific data. <br />The ability to track these hidden aspects of neurophysiology will have a profound impact on the way we understand and treat epilepsy.
For example, the framework will provide insights into seizure initiation and termination on a patient-specific basis. It will also enable investigation of the effect a particular drug has on specific neural populations and connectivity structures using minimally invasive measurements.<br /><br /><strong>Bio:</strong> Dr. Freestone is currently a Senior Research Fellow in the Department of Medicine (St. Vincent’s Hospital) at the University of Melbourne, Australia, and a Fulbright Postdoctoral Scholar at Columbia University, USA. He previously held a postdoctoral position in the NeuroEngineering Research Group at the University of Melbourne. He completed his PhD at the University of Melbourne, Australia, and the University of Edinburgh, UK. His work has focused on developing methods for epileptic seizure prediction and control.

Ran Rubin: October 8th<h2>Supervised Learning and Support Vectors for Deterministic Spiking Neurons</h2>To signal the onset of salient sensory features or execute well-timed motor sequences, neuronal circuits must transform streams of incoming spike trains into precisely timed firing. In this talk I will investigate the efficiency and fidelity with which neurons can perform such computations. I'll present a theory that characterizes the capacity of feedforward networks to generate desired spike sequences and discuss its results and implications. Additionally, I'll present the Finite Precision algorithm: a biologically plausible learning rule that allows feedforward and recurrent networks to learn multiple mappings between inputs and desired spike sequences with a preassigned required precision. This framework can also be applied to reconstruct synaptic weights from spiking activity.
Time permitting, I'll present further theoretical developments that extend the concept of a 'large margin' to dynamical systems with event-based outputs, such as spiking neural networks. These extensions allow us to define optimal solutions that implement the required input-output transformation in a robust manner and open the way to incorporating dynamic, non-linear, spatio-temporal integration through the kernel method.
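To give a flavor of the supervised-learning setting in Ran Rubin's talk, here is a toy sketch of training a neuron to fire at a prescribed time: a perceptron-style update on exponentially filtered input spike trains, on a discrete time grid. This is an illustration of the general idea only, not the Finite Precision algorithm itself; all constants (time bins, kernel time constant, learning rate) are invented for the example.

```python
# Toy sketch: perceptron-style learning of a desired spike time.
# NOT the Finite Precision algorithm from the talk; parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
T, n_in, tau, theta = 200, 50, 20.0, 1.0   # time bins, inputs, kernel decay, threshold

# Random input spike trains and their exponentially filtered synaptic traces.
spikes = rng.random((n_in, T)) < 0.05
kernel = np.exp(-np.arange(T) / tau)
traces = np.array([np.convolve(s, kernel)[:T] for s in spikes.astype(float)])

target = np.zeros(T, dtype=bool)
target[100] = True                          # desired output spike in bin 100

w = np.zeros(n_in)
lr = 0.01
for epoch in range(500):
    v = w @ traces                          # membrane potential over time
    out = v >= theta                        # bins where the neuron crosses threshold
    if np.array_equal(out, target):
        break
    for t in range(T):
        if target[t] and not out[t]:
            w += lr * traces[:, t]          # potentiate at missed target times
        elif out[t] and not target[t]:
            w -= lr * traces[:, t]          # depress at spurious crossings

v = w @ traces
print("threshold-crossing bins:", np.flatnonzero(v >= theta))
```

A solution exists only if the target sequence is realizable by some weight vector, which is exactly the capacity question the talk's theory addresses; the 'large margin' extension then asks for weights that satisfy the timing constraints with the greatest robustness.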