Title: Design and analysis of experiments in networks
Abstract: Random assignment of individuals to treatments is often used to predict what will happen if the treatment is applied to everyone, but resulting estimates can suffer substantial bias in the presence of peer effects (i.e., interference, spillovers, social interactions). We describe experimental designs that reduce this bias by producing treatment assignments that are correlated in the network. For example, we can use graph partitioning methods to construct clusters of individuals who are then assigned to treatment or control together. This clustered assignment alone can substantially reduce bias, as can incorporating information about peers' treatment assignments or behaviors into the analysis. Simulation results show how this bias reduction varies with network structure and the size of direct and peer effects. We illustrate this method with real experiments, including a large experiment on Thanksgiving Day 2012.
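The clustered assignment idea can be sketched in a few lines: partition the network into small connected clusters, then flip one coin per cluster so that neighbors tend to land in the same arm. The sketch below is purely illustrative (a greedy BFS partition with a made-up size cap, not the graph-partitioning methods the talk describes):

```python
import random
from collections import deque

def cluster_assign(adj, max_size=4, seed=0):
    """Toy graph-cluster randomization: grow connected clusters by BFS,
    then flip one coin per cluster so that linked individuals tend to
    share a treatment arm. Illustrative only -- not the partitioning
    algorithm from the talk."""
    rng = random.Random(seed)
    unvisited = set(adj)
    assignment = {}
    while unvisited:
        # Start a new cluster from an arbitrary unassigned node.
        root = next(iter(unvisited))
        unvisited.discard(root)
        cluster, frontier = [], deque([root])
        while frontier and len(cluster) < max_size:
            node = frontier.popleft()
            cluster.append(node)
            for nb in adj[node]:
                if nb in unvisited:
                    unvisited.discard(nb)
                    frontier.append(nb)
        unvisited.update(frontier)  # return any overflow to the pool
        arm = rng.choice(["treatment", "control"])  # one coin per cluster
        for node in cluster:
            assignment[node] = arm
    return assignment

# A small example network given as adjacency lists.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
print(cluster_assign(adj))
```

Because whole clusters share an arm, a treated individual's neighbors are more likely to also be treated, which is what reduces the interference bias relative to independent per-person coin flips.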
We meet on Wednesdays at 1pm, in the 10th floor conference room of the Statistics Department, 1255 Amsterdam Ave, New York, NY.
Friday, September 27, 2013
Sunday, September 22, 2013
Donald Pianto: September 25th
Title: Dealing with monotone likelihood in a model for speckled data
Abstract: In this paper we study maximum likelihood estimation (MLE) of the roughness parameter of the G_{A}^{0} distribution for speckled imagery (Frery et al., 1997). We discover that when a certain criterion is satisfied by the sample moments, the likelihood function is monotone and the maximum likelihood estimate is infinite, implying an extremely homogeneous region. We implement three corrected estimators in an attempt to obtain finite parameter estimates. Two of the estimators are taken from the literature on monotone likelihood (Firth, 1993; Jeffreys, 1946) and one, based on resampling, is proposed by the authors. We perform Monte Carlo experiments to compare the three estimators. We find the estimator based on the Jeffreys prior to be the worst. The choice between Firth's estimator and the bootstrap estimator depends on the value of the number of looks (which is given before estimation) and the specific needs of the user. We also apply the estimators to real data obtained from synthetic aperture radar (SAR). These results corroborate the Monte Carlo findings.
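The monotone-likelihood phenomenon the abstract describes also arises in a simpler, well-known setting: logistic regression with perfectly separated data, where the log-likelihood increases without bound in the slope and the MLE is infinite. A minimal numeric check of that analogy (not the G_{A}^{0} model):

```python
import math

def loglik(beta, data):
    """Log-likelihood of a no-intercept logistic model,
    P(y=1 | x) = 1 / (1 + exp(-beta * x))."""
    total = 0.0
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-beta * x))
        total += math.log(p if y == 1 else 1.0 - p)
    return total

# Perfectly separated sample: y = 1 exactly when x > 0.
data = [(-2, 0), (-1, 0), (1, 1), (2, 1)]

lls = [loglik(b, data) for b in (1.0, 5.0, 25.0)]
print(lls)  # strictly increasing in beta: the likelihood is monotone,
            # so the MLE is at beta = +infinity
```

Firth's (1993) penalized likelihood and Jeffreys-prior corrections are the standard remedies in this logistic setting too, which is why the paper borrows them for the roughness parameter.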
Sunday, September 15, 2013
Prof. John Paisley: September 18th
Title: Variational Inference and Big Data
Abstract: I will discuss a scalable algorithm for approximating posterior distributions called stochastic variational inference. Stochastic variational inference lets one apply complex Bayesian models to massive data sets. This technique applies to a large class of probabilistic models and outperforms traditional batch variational inference, which can only handle small data sets. Stochastic inference is a simple modification to the batch approach, so a significant part of the discussion will focus on reviewing this traditional batch inference method.
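The "simple modification" can be illustrated on a toy conjugate model: instead of a full-data coordinate update, SVI computes a noisy estimate of that update from a minibatch (rescaled by N/|B|) and blends it in with a decreasing step size. A hedged sketch for the mean of a Gaussian with known unit variance, where the exact posterior is available for comparison (all parameter values here are illustrative):

```python
import random

random.seed(1)
N = 1000
data = [random.gauss(2.0, 1.0) for _ in range(N)]  # true mean = 2

# Conjugate model: x_i ~ N(mu, 1), prior mu ~ N(0, 1).
# Exact posterior natural parameters (what batch inference converges to):
eta1_exact, eta2_exact = sum(data), 1.0 + N

# Stochastic variational inference: noisy updates from minibatches,
# with step sizes rho_t = (t + tau)^(-kappa).
eta1, eta2 = 0.0, 1.0          # initialize at the prior
B, tau, kappa = 10, 1.0, 0.7
for t in range(2000):
    batch = random.sample(data, B)
    # Minibatch estimate of the full-data update, rescaled by N / B.
    eta1_hat = (N / B) * sum(batch)
    eta2_hat = 1.0 + N
    rho = (t + tau) ** (-kappa)
    eta1 = (1 - rho) * eta1 + rho * eta1_hat
    eta2 = (1 - rho) * eta2 + rho * eta2_hat

post_mean = eta1 / eta2
print(round(post_mean, 2))  # close to the exact posterior mean
```

Each iteration touches only B points rather than all N, which is the source of the scalability the abstract refers to; the decaying step sizes make the noisy updates converge in the usual Robbins-Monro sense.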
Friday, September 6, 2013
David Carlson: September 11th
Title: Real-Time Inference for a Gamma Process Model of Neural Spiking
Abstract: With simultaneous measurements from ever-increasing populations of neurons, there is a growing need for sophisticated tools to recover signals from individual neurons. In electrophysiology experiments, this classically proceeds in a two-step process: (i) threshold the waveforms to detect putative spikes and (ii) cluster the waveforms into single units (neurons). We extend previous Bayesian nonparametric models of neural spiking to jointly detect and cluster neurons using a Gamma process model. We develop an online approximate inference scheme enabling real-time analysis, with performance exceeding the previous state-of-the-art. Via exploratory data analysis we find that several features of our model collectively contribute to our improved performance, including: (i) accounting for colored noise, (ii) detecting overlapping spikes, (iii) tracking waveform dynamics, and (iv) using multiple channels.
In my talk, I will give a brief overview of the Bayesian nonparametric structures that have been used in the spike-sorting problem. From there, I will give details on how we've taken the spike-sorting model and integrated it with a Poisson process to improve the noisy detection problem, and give details on learning the model using real-time online methods. Additionally, I will discuss extensions to evolving waveform dynamics and multiple channels, and present results from a tetrode as well as from novel 3-channel and 8-channel multi-electrode arrays where action potentials may appear on some but not all of the channels.
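The classical two-step pipeline the abstract contrasts with can be sketched in a few lines: threshold a voltage trace to detect putative spikes, then cluster the detected waveforms into units. Everything below (the synthetic trace, the 5-sample waveform shape, the threshold, the peak-height summary) is an invented illustration of those two steps, not the Gamma process model from the talk:

```python
import random

random.seed(0)

# Synthetic trace: baseline noise plus two units with distinct amplitudes.
trace = [random.gauss(0.0, 0.2) for _ in range(2000)]
spike_times = {100: 3.0, 500: 6.0, 900: 3.2, 1300: 5.8, 1700: 2.9}
for t, amp in spike_times.items():
    for k in range(5):                       # a crude 5-sample waveform
        trace[t + k] += amp * (1 - k / 5)

# Step (i): threshold crossings give putative spikes.
threshold = 1.5
events = []
t = 0
while t < len(trace):
    if trace[t] > threshold:
        events.append(max(trace[t:t + 5]))   # summarize waveform by its peak
        t += 5                               # skip past this waveform
    else:
        t += 1

# Step (ii): cluster the peaks into single units (1-D two-means).
c1, c2 = min(events), max(events)
for _ in range(20):
    a = [e for e in events if abs(e - c1) <= abs(e - c2)]
    b = [e for e in events if abs(e - c1) > abs(e - c2)]
    c1, c2 = sum(a) / len(a), sum(b) / len(b)

print(len(events), round(c1, 1), round(c2, 1))
```

The talk's point is that performing detection and clustering jointly, within one probabilistic model, avoids the information loss of committing to hard thresholded events before clustering.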