Saturday, March 29, 2014

Johanni Brea: April 2nd

Abstract:

Part I. Sequence learning with hidden neurons in spiking neural networks

Storing and recalling spiking sequences is a general problem the brain needs to solve. It is, however, unclear what type of biologically plausible learning rule is suited to learn a wide class of spatio-temporal activity patterns in a robust way. We consider a recurrent network of stochastic spiking neurons composed of both visible and hidden neurons. We derive a generic learning rule that is matched to the neural dynamics by minimizing an upper bound on the Kullback-Leibler divergence from the target distribution to the model distribution. The derived learning rule is consistent with Spike-Timing Dependent Plasticity in that a presynaptic spike preceding a postsynaptic spike elicits potentiation, whereas otherwise depression emerges. The learning rule for synapses that target hidden neurons is modulated by a global factor that can be seen as an internally computed reward signal.
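To give a concrete picture of a rule with these ingredients, here is a minimal, hypothetical Python sketch; it is not the derivation presented in the talk, and all names and constants (eta, tau, the global factor g, the network sizes) are illustrative assumptions. A presynaptic eligibility trace followed by a postsynaptic spike gives potentiation, otherwise depression, and updates onto hidden neurons are scaled by a global, reward-like factor.

# Hypothetical STDP-like update with a global modulating factor for hidden neurons.
import numpy as np

rng = np.random.default_rng(0)
n_pre, n_post, T = 20, 10, 200          # toy sizes and number of time steps
w = 0.1 * rng.standard_normal((n_post, n_pre))
eta, tau = 0.01, 20.0                   # learning rate, trace time constant (in steps)
is_hidden = rng.random(n_post) < 0.5    # which postsynaptic neurons are hidden

pre_trace = np.zeros(n_pre)
for t in range(T):
    pre_spikes = (rng.random(n_pre) < 0.05).astype(float)
    pre_trace += -pre_trace / tau + pre_spikes          # low-pass filtered presynaptic activity
    p_fire = 1.0 / (1.0 + np.exp(-(w @ pre_trace)))     # stochastic neuron: sigmoidal firing probability
    post_spikes = (rng.random(n_post) < p_fire).astype(float)
    g = 1.0                                             # placeholder for the internally computed reward-like signal
    mod = np.where(is_hidden, g, 1.0)[:, None]          # only synapses onto hidden neurons are modulated
    # potentiation when a spike follows a recent presynaptic trace, depression otherwise
    w += eta * mod * np.outer(post_spikes - p_fire, pre_trace)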

Part II. Forgetting in the fruit fly: bug or feature?

Recent experiments revealed that the fruit fly Drosophila melanogaster has a dedicated mechanism for forgetting: blocking the G-protein Rac slows forgetting, while activating Rac accelerates it. This active form of forgetting lacks a satisfactory functional explanation. We investigated optimal decision making for an agent adapting to a stochastic environment in which a stimulus may switch between being indicative of reward and being indicative of punishment. Like Drosophila, an optimal agent shows forgetting, with a rate that is linked to the time scale of changes in the environment. Moreover, to reduce the odds of missing future reward, an optimal agent may trade the risk of immediate pain for information gain and thus forget faster after aversive conditioning. A simple neuronal network reproduces these features. Our model supports the view that forgetting is adaptive rather than a consequence of limitations of the memory system.
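As a rough illustration of how an environment's volatility can set a forgetting rate, here is a hedged Python sketch, not the talk's model; update_belief, the hazard rate h, and the likelihood lik are hypothetical names and values. The agent's belief that the stimulus predicts reward leaks toward ignorance at rate h before each Bayesian update, so a more volatile environment (larger h) produces faster forgetting.

# Hypothetical belief update for a stimulus whose reward contingency may switch.
def update_belief(p, outcome_reward, h=0.05, lik=0.8):
    # leak toward ignorance at a rate set by the environment's volatility
    p = (1 - h) * p + h * 0.5
    # Bayes update with likelihood 'lik' of the observed outcome under the believed contingency
    if outcome_reward:
        num = lik * p
        den = lik * p + (1 - lik) * (1 - p)
    else:
        num = (1 - lik) * p
        den = (1 - lik) * p + lik * (1 - p)
    return num / den

p = 0.5
for outcome in [True, True, False, True]:   # a short sequence of rewarded/punished trials
    p = update_belief(p, outcome)
print(round(p, 3))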

Thursday, March 20, 2014

Sharmodeep Bhattacharyya: March 26th

Title: Statistical Inference of Features of Networks

Abstract:
Analysis of stochastic models of networks is quite important in light of the huge influx of network data in the social, information, and biological sciences. A proper statistical analysis of the features of different stochastic network models, however, is still underway. We follow the nonparametric model proposed by Bickel and Chen (PNAS, 2009) and investigate the statistical properties of local features of networks generated from such models. We consider subsampling bootstrap methods for finding the empirical distribution of count features or 'moments' (Bickel, Chen and Levina, AoS, 2011), such as the number of triangles, and of smooth functions of these moments. Using these methods, we can not only estimate the variance of count features but also obtain good estimates of the feature counts themselves, which are usually expensive to compute numerically in large networks. We derive theoretical properties of the bootstrap estimates of the count features and demonstrate their efficacy through simulation. We also investigate the behavior of a histogram estimate of a canonical version of the function characterizing the nonparametric model. Lastly, we apply the methods to real network data to answer qualitative questions about the networks.
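A minimal sketch of the subsampling idea follows; the details differ from the cited work, and triangle_density, the graph model, and all sizes are illustrative assumptions. Induced subgraphs on m of the n vertices are drawn repeatedly, a normalized triangle count is computed on each, and the empirical distribution of these values gives a point estimate and a variance estimate for the feature.

# Hypothetical subsampling bootstrap for a count feature (triangles).
import numpy as np

def triangle_density(A):
    n = A.shape[0]
    tri = np.trace(A @ A @ A) / 6.0              # number of triangles in the (simple, undirected) graph
    return tri / (n * (n - 1) * (n - 2) / 6.0)   # normalize by the number of vertex triples

rng = np.random.default_rng(1)
n, m, B = 300, 60, 200                            # graph size, subsample size, bootstrap replicates
A = (rng.random((n, n)) < 0.05).astype(float)     # toy Erdos-Renyi graph
A = np.triu(A, 1); A = A + A.T                    # symmetrize, remove self-loops

estimates = []
for _ in range(B):
    idx = rng.choice(n, size=m, replace=False)    # subsample vertices
    estimates.append(triangle_density(A[np.ix_(idx, idx)]))

print(np.mean(estimates), np.var(estimates))      # point estimate and bootstrap variance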

Sunday, March 16, 2014

Daniel Soudry: March 19th

Title: Mean Field Bayes Backpropagation: parameter-free training of multilayer neural networks with real and discrete weights

Abstract:
Recently, Multilayer Neural Networks (MNNs) have been trained to achieve state-of-the-art results in many classification tasks. The usual goal of training is to estimate the parameters of an MNN, its weights, so that they minimize some cost function. In theory, given a cost function, the optimal estimate can be found from the posterior over the weights given the data, which can be updated through Bayes' theorem. In practice, this Bayesian approach is intractable. To circumvent this problem, we approximate the posterior using a factorized distribution and the central limit theorem. The resulting Mean Field Bayes BackPropagation algorithm is very similar to the standard Backpropagation algorithm. However, it has several advantages: (1) Training is parameter-free, given initial conditions (a prior) and the MNN architecture. This is useful for large-scale problems, where parameter tuning is a major challenge. Tested numerically on MNIST, the algorithm achieves the same performance level as Backpropagation with the optimal constant learning rate. (2) The weights can be restricted to have discrete values. This is especially useful for implementing trained MNNs in precision-limited hardware chips, which can improve their speed and energy efficiency by several orders of magnitude and thus enable their integration into small, low-power electronic devices. We show that on MNIST the algorithm can be used to train MNNs with binary weights with only a mild reduction in performance, in contrast to weight quantization, which significantly increases the error.
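To give a flavor of the central-limit-theorem step for binary weights, here is a hedged sketch of one ingredient of such an approach; it is not the speaker's algorithm, and prob_output_plus and its parameterization are assumptions. With a factorized posterior over binary weights w_i in {-1, +1} parameterized by means m_i, the preactivation sum_i w_i x_i is approximately Gaussian, so the probability that a sign neuron outputs +1 has a closed form that a backprop-like update could use.

# Hypothetical probabilistic forward pass of one sign neuron with binary weights.
import numpy as np
from math import erf, sqrt

def prob_output_plus(m, x):
    # m: posterior means of the binary weights, x: input vector
    mu = np.dot(m, x)                       # mean of the preactivation
    var = np.dot(1.0 - m**2, x**2) + 1e-12  # variance: Var(w_i) = 1 - m_i^2 for w_i in {-1, +1}
    return 0.5 * (1.0 + erf(mu / sqrt(2.0 * var)))   # P(preactivation > 0) under the Gaussian approximation

m = np.array([0.2, -0.5, 0.9])
x = np.array([1.0, -1.0, 1.0])
print(prob_output_plus(m, x))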

Sunday, March 9, 2014

Amy Orsborn: March 12th

Title: Exploring decoder and neural adaptation in brain-machine interfaces

Abstract:
Brain-machine interfaces (BMIs) show great promise for restoring motor function to patients with motor disabilities, but significant improvements in performance are needed before they will be clinically viable. One key challenge is to improve performance such that it can be maintained for long-term use across the varied activities of daily life. A BMI creates an artificial, closed-loop control system in which the subject actively contributes to performance by volitional modulation of neural activity. In this talk, I will discuss experimental work in non-human primates exploring closed-loop design of BMIs, which exploits the closed-loop and adaptive properties of BMI to improve performance and reliability. I will present a closed-loop decoder adaptation (CLDA) algorithm that can rapidly and reliably improve performance regardless of the initial decoding algorithm, which may be particularly useful for clinical applications with paralyzed patients. I will then show that CLDA can be combined with neural adaptation to achieve and maintain skillful BMI performance across different tasks. Analyses of these data also suggest that brain-decoder interactions might be useful for shaping BMI performance. Finally, I will discuss emerging work exploring the selection of neural signals for control and how it might influence closed-loop performance.
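For readers unfamiliar with decoder adaptation, here is an illustrative Python sketch, not the specific CLDA algorithm discussed in the talk; clda_step, the blending weight alpha, and the toy data are assumptions. A linear decoder v = D y is periodically re-fit on a recent batch of neural activity paired with the movements the subject presumably intended (e.g., toward the target), and the running decoder is blended toward that re-fit so it adapts smoothly while the subject retains control.

# Hypothetical smooth-blending decoder adaptation step for a linear velocity decoder.
import numpy as np

def clda_step(D, Y, V_intended, alpha=0.8, ridge=1e-3):
    # ridge-regularized least-squares re-fit of the decoder on the recent batch
    k = Y.shape[1]
    D_batch = V_intended.T @ Y @ np.linalg.inv(Y.T @ Y + ridge * np.eye(k))
    return alpha * D + (1 - alpha) * D_batch   # smooth blend: alpha sets the adaptation rate

rng = np.random.default_rng(2)
n_units, n_samples = 16, 100
D = np.zeros((2, n_units))                       # initial (poor) decoder: 2D velocity from n_units channels
Y = rng.standard_normal((n_samples, n_units))    # toy neural activity
V_intended = rng.standard_normal((n_samples, 2)) # toy intended velocities
D = clda_step(D, Y, V_intended)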