Monday, October 20, 2014

Will Fithian: October 22nd

Optimal Inference After Model Selection

To perform inference after model selection, we propose controlling the selective type I error; i.e., the error rate of a test given that it was performed. By doing so, we recover long-run frequency properties among selected hypotheses analogous to those that apply in the classical (non-adaptive) context. Our proposal is closely related to data splitting and has a similar intuitive justification, but is more powerful. Exploiting the classical theory of Lehmann and Scheffé (1955), we derive most powerful unbiased selective tests and confidence intervals for inference in exponential family models after arbitrary selection procedures. For linear regression, we derive new selective z-tests that generalize recent proposals for inference after model selection and improve on their power, and new selective t-tests that do not require knowledge of the error variance.
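
Concretely, writing α for the nominal level (notation of this note, not of the abstract itself), selective type I error control requires

\[
  \mathbb{P}_{H_0}\bigl(\text{reject } H_0 \,\big|\, H_0 \text{ selected}\bigr) \le \alpha .
\]

Data splitting attains this guarantee by reserving independent data for inference; conditioning directly on the selection event, as proposed here, uses the data more efficiently, which is the source of the power gains mentioned above.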

This is joint work with Dennis Sun and Jonathan Taylor, available online at http://arxiv.org/abs/1410.2597

Friday, October 10, 2014

Dean Freestone: October 15th

Data-Driven Mean Field Neural Modeling

Abstract: This talk provides an overview of new methods for functional brain mapping via a process of model inversion. By estimating the parameters of a computational model, we demonstrate a method for tracking functional connectivity and other parameters that influence neural dynamics. The estimation results provide an imaging modality for neural processes that cannot be observed directly from electrophysiological recordings alone.
The method is based on approximating brain networks by an interconnected neural mass model. Neural mass models describe the functional activity of the brain from a top-down perspective, capturing particularly important experimental phenomena. The models are related to the underlying biology through lumped quantities: for example, resting membrane potentials, reversal potentials, and firing thresholds may all be lumped into a single parameter. This lumping reflects a trade-off between biological realism, so that insights into brain mechanisms can still be gained, and parsimony, so that the models can be inverted and fit to patient-specific data.
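
As a toy illustration of the model-inversion idea (an editorial sketch, not the speaker's model or code), the Python snippet below simulates a single second-order neural mass whose connectivity gain w is unknown, then tracks that gain from noisy potential recordings with an extended Kalman filter. All parameter values and names are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single neural mass: mean membrane potential v with
# second-order (alpha-function) synaptic kinetics, driven by a sigmoidal
# firing-rate transform of an exogenous input u. The connectivity gain w
# is the hidden parameter we want to track. All values are illustrative.
A, a, dt = 3.25, 100.0, 1e-3         # synaptic gain (mV), rate const (1/s), step (s)

def sigm(x):
    return 1.0 / (1.0 + np.exp(-x))

def f(x, u):                          # one Euler step of the augmented state
    v, dv, w = x
    ddv = A * a * sigm(w * u) - 2.0 * a * dv - a * a * v
    return np.array([v + dt * dv, dv + dt * ddv, w])   # w: random walk

def F_jac(x, u):                      # Jacobian of f, needed by the EKF
    v, dv, w = x
    s = sigm(w * u)
    return np.array([
        [1.0,         dt,                 0.0],
        [-dt * a * a, 1.0 - 2.0 * a * dt, dt * A * a * s * (1.0 - s) * u],
        [0.0,         0.0,                1.0],
    ])

H = np.array([[1.0, 0.0, 0.0]])       # we only measure the potential v
Q = np.diag([1e-9, 1e-6, 1e-5])       # process noise; slow drift allowed in w
R = np.array([[1e-4]])                # measurement noise variance

x_true = np.array([0.0, 0.0, 1.5])    # ground-truth gain w = 1.5
x_est, P = np.array([0.0, 0.0, 0.0]), np.eye(3)

for k in range(5000):
    u = rng.normal(2.0, 1.0)                  # afferent drive
    x_true = f(x_true, u)
    y = x_true[0] + rng.normal(0.0, 1e-2)     # noisy recording

    Fk = F_jac(x_est, u)                      # EKF predict
    x_est, P = f(x_est, u), Fk @ P @ Fk.T + Q
    S = H @ P @ H.T + R                       # EKF update
    K = P @ H.T @ np.linalg.inv(S)
    x_est = x_est + (K @ (y - H @ x_est)).ravel()
    P = (np.eye(3) - K @ H) @ P

print("estimated connectivity gain:", round(x_est[2], 3))

The same augmented-state trick (appending slowly varying parameters to the state vector) is what lets a filter of this kind report connectivity as an "image" over time rather than a single fitted constant.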
The ability to track the hidden aspects of neurophysiology will have a profound impact on the way we understand and treat epilepsy. For example, the framework will provide insights into seizure initiation and termination on a patient-specific basis. It will enable investigation into the effect a particular drug has on specific neural populations and connectivity structures using minimally invasive measurements.

Bio: Dr Freestone is currently a Senior Research Fellow in the Department of Medicine (St. Vincent’s Hospital) at the University of Melbourne, Australia, and a Fulbright Post-Doctoral Scholar at Columbia University, USA. He previously held a post-doctoral position in the NeuroEngineering Research Group at the University of Melbourne. He completed his PhD at the University of Melbourne, Australia, and the University of Edinburgh, UK. His work focuses on developing methods for epileptic seizure prediction and control.

Friday, October 3, 2014

Ran Rubin: October 8th

Supervised Learning and Support Vectors for Deterministic Spiking Neurons

To signal the onset of salient sensory features or execute well-timed motor sequences, neuronal circuits must transform streams of incoming spike trains into precisely timed firing. In this talk I will investigate the efficiency and fidelity with which neurons can perform such computations. I'll present a theory that characterizes the capacity of feedforward networks to generate desired spike sequences, and discuss its results and implications. Additionally, I'll present the Finite Precision algorithm: a biologically plausible learning rule that allows feedforward and recurrent networks to learn multiple mappings between inputs and desired spike sequences to a preassigned precision. This framework can also be applied to reconstruct synaptic weights from observed spiking activity. Time permitting, I'll present further theoretical developments that extend the concept of a 'large margin' to dynamical systems with event-based outputs, such as spiking neural networks. These extensions allow us to define optimal solutions that implement the required input-output transformation robustly, and open the way to incorporating dynamic, non-linear, spatio-temporal integration through the kernel method.
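
For readers who want a concrete feel for this style of learning, here is a minimal perceptron-like sketch in the spirit of (but much simpler than) the Finite Precision algorithm: a current-based readout without spike reset, trained so that threshold crossings occur within ±ε of the desired times and nowhere else. The kernel, constants, and the exact update rule are illustrative assumptions, not taken from the talk.

import numpy as np

rng = np.random.default_rng(1)

# Toy setup: N afferents fire fixed (frozen) Poisson spike trains; the
# readout must emit spikes within +-eps of the desired times and nowhere
# else. All constants below are illustrative.
N, T, dt = 100, 0.5, 1e-3
tau_m, tau_s = 20e-3, 5e-3           # membrane / synaptic time constants (s)
theta, eps, eta = 1.0, 2e-3, 0.05    # threshold, required precision, step size

t = np.arange(0.0, T, dt)
inputs = [np.sort(rng.uniform(0, T, rng.poisson(5))) for _ in range(N)]

# Precompute each afferent's postsynaptic trace (double-exponential kernel;
# no spike reset, a simplification relative to a real spiking neuron).
psp = np.zeros((N, t.size))
for i, spikes in enumerate(inputs):
    for s in spikes:
        m = t >= s
        psp[i, m] += np.exp(-(t[m] - s) / tau_m) - np.exp(-(t[m] - s) / tau_s)

desired = np.array([0.15, 0.35])     # desired output spike times (s)
w = rng.normal(0.0, 0.1, N)

def out_spikes(w):
    v = w @ psp
    crossings = (v[1:] >= theta) & (v[:-1] < theta)  # upward threshold crossings
    return t[1:][crossings]

for epoch in range(2000):
    out = out_spikes(w)
    missed = [td for td in desired if not np.any(np.abs(out - td) <= eps)]
    spurious = [to for to in out if not np.any(np.abs(desired - to) <= eps)]
    if not missed and not spurious:
        break                        # every desired spike hit, none extra
    if missed:                       # potentiate inputs active at the miss
        w += eta * psp[:, np.argmin(np.abs(t - missed[0]))]
    if spurious:                     # depress inputs active at the error
        w -= eta * psp[:, np.argmin(np.abs(t - spurious[0]))]

print("trained output spike times:", np.round(out_spikes(w), 4))

The 'large margin' extensions mentioned above would, roughly, ask not just for a weight vector w that separates correct from erroneous threshold crossings, but for one that does so with maximal robustness to perturbations of the inputs.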